Dec 10 12:15:31 crc systemd[1]: Starting Kubernetes Kubelet...
Dec 10 12:15:31 crc restorecon[4678]: Relabeled /var/lib/kubelet/config.json from system_u:object_r:unlabeled_t:s0 to system_u:object_r:container_var_lib_t:s0
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/device-plugins not reset as customized by admin to system_u:object_r:container_file_t:s0
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/device-plugins/kubelet.sock not reset as customized by admin to system_u:object_r:container_file_t:s0
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/volumes/kubernetes.io~configmap/nginx-conf/..2025_02_23_05_40_35.4114275528/nginx.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/22e96971 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/21c98286 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/0f1869e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/46889d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/5b6a5969 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/6c7921f5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4804f443 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/2a46b283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/a6b5573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4f88ee5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/5a4eee4b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/cd87c521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/38602af4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/1483b002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/0346718b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/d3ed4ada not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/3bb473a5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/8cd075a9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/00ab4760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/54a21c09 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/70478888 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/43802770 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/955a0edc not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/bca2d009 not reset as customized by admin to system_u:object_r:container_file_t:s0:c140,c1009
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/b295f9bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/bc46ea27 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5731fc1b not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5e1b2a3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/943f0936 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/3f764ee4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/8695e3f9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/aed7aa86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/c64d7448 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/0ba16bd2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/207a939f not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/54aa8cdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/1f5fa595 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/bf9c8153 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/47fba4ea not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/7ae55ce9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7906a268 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/ce43fa69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7fc7ea3a not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/d8c38b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/9ef015fb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/b9db6a41 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/b1733d79 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/afccd338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/9df0a185 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/18938cf8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/7ab4eb23 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/56930be6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_35.630010865 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/0d8e3722 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/d22b2e76 not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/e036759f not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/2734c483 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/57878fe7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/3f3c2e58 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/375bec3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/7bc41e08 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/48c7a72d not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/4b66701f not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/a5a1c202 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_40.1388695756 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/26f3df5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/6d8fb21d not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/50e94777 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208473b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/ec9e08ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3b787c39 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208eaed5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/93aa3a2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3c697968 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/ba950ec9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/cb5cdb37 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/f2df9827 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/fedaa673 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/9ca2df95 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/b2d7460e not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2207853c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/241c1c29 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2d910eaf not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/c6c0f2e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/399edc97 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8049f7cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/0cec5484 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/312446d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c406,c828
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8e56a35d not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/2d30ddb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/eca8053d not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/c3a25c9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c168,c522
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/b9609c22 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/e8b0eca9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/b36a9c3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/38af7b07 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/ae821620 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/baa23338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/2c534809 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/59b29eae not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/c91a8e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/4d87494a not reset as customized by admin to system_u:object_r:container_file_t:s0:c442,c857
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/1e33ca63 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/8dea7be2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d0b04a99 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d84f01e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/4109059b not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/a7258a3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/05bdf2b6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/f3261b51 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/315d045e not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/5fdcf278 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/d053f757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/c2850dc7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fcfb0b2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c7ac9b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fa0c0d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c609b6ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/2be6c296 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/89a32653 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/4eb9afeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/13af6efa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/b03f9724 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/e3d105cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/3aed4d83 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/0765fa6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/2cefc627 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/3dcc6345 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/365af391 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b1130c0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/236a5913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b9432e26 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/5ddb0e3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/986dc4fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/8a23ff9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/9728ae68 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/665f31d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/136c9b42 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/98a1575b not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/cac69136 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/5deb77a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/2ae53400 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/e46f2326 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/dc688d3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/3497c3cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/177eb008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/af5a2afa not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/d780cb1f not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/49b0f374 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/26fbb125 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/cf14125a not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/b7f86972 not reset as customized by
admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/e51d739c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/88ba6a69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/669a9acf not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/5cd51231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/75349ec7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/15c26839 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/45023dcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/2bb66a50 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/64d03bdd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/ab8e7ca0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/bb9be25f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/9a0b61d3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/d471b9d2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/8cb76b8e not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/11a00840 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/ec355a92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/992f735e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d59cdbbc not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 10 12:15:31 crc restorecon[4678]: 
/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/72133ff0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/c56c834c not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d13724c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/0a498258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa471982 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fc900d92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa7d68da not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/4bacf9b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/424021b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/fc2e31a3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/f51eefac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/c8997f2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/7481f599 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Dec 10 12:15:31 crc restorecon[4678]: 
/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/fdafea19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/d0e1c571 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/ee398915 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/682bb6b8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a3e67855 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a989f289 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/915431bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/7796fdab not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/dcdb5f19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/a3aaa88c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Dec 10 12:15:31 crc restorecon[4678]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/5508e3e6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/160585de not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/e99f8da3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/8bc85570 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/a5861c91 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/84db1135 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/9e1a6043 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/c1aba1c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/d55ccd6d not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/971cc9f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/8f2e3dcf not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/ceb35e9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/1c192745 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/5209e501 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/f83de4df not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/e7b978ac not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/c64304a1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Dec 10 12:15:31 crc restorecon[4678]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/5384386b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/cce3e3ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/8fb75465 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/740f573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/32fd1134 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/0a861bd3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/80363026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/bfa952a8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c129,c158 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..2025_02_23_05_33_31.333075221 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/793bf43d not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/7db1bb6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/4f6a0368 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/c12c7d86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/36c4a773 not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/4c1e98ae not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/a4c8115c not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/setup/7db1802e not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver/a008a7ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-syncer/2c836bac not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-regeneration-controller/0ce62299 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-insecure-readyz/945d2457 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c97,c980 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-check-endpoints/7d5c1dd8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/index.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/bundle-v1.15.0.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/channel.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/package.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/bc8d0691 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/6b76097a not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/34d1af30 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/312ba61c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/645d5dd1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/16e825f0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/4cf51fc9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/2a23d348 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/075dbd49 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/dd585ddd not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c377,c642 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/17ebd0ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c343 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/005579f4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_23_11.1287037894 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 10 12:15:31 crc restorecon[4678]: 
/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/bf5f3b9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/af276eb7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/ea28e322 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/692e6683 not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/871746a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/4eb2e958 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 10 12:15:31 crc restorecon[4678]: 
/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/ca9b62da not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c0,c25 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/0edd6fce not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/containers/controller-manager/89b4555f not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Dec 10 12:15:31 crc restorecon[4678]: 
/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/655fcd71 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/0d43c002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/e68efd17 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/9acf9b65 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/5ae3ff11 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/1e59206a not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/27af16d1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c304,c1017
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/7918e729 not reset as customized by admin to system_u:object_r:container_file_t:s0:c853,c893
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/5d976d0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c585,c981
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/d7f55cbb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/f0812073 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/1a56cbeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/7fdd437e not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/cdfb5652 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/fix-audit-permissions/fb93119e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver/f1e8fc0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver-check-endpoints/218511f3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server/serving-certs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/ca8af7b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/72cc8a75 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/6e8a3760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4c3455c0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/2278acb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4b453e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/3ec09bda not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2/cacerts.bin not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java/cacerts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl/ca-bundle.trust.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/email-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/objsign-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2ae6433e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fde84897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75680d2e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/openshift-service-serving-signer_1740288168.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/facfc4fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f5a969c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CFCA_EV_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9ef4a08a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ingress-operator_1740288202.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2f332aed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/248c8271.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d10a21f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ACCVRAIZ1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a94d09e5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c9a4d3b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40193066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd8c0d63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b936d1c6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CA_Disig_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4fd49c6c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM_SERVIDORES_SEGUROS.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b81b93f0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f9a69fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b30d5fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ANF_Secure_Server_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b433981b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93851c9e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9282e51c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7dd1bc4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Actalis_Authentication_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/930ac5d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f47b495.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e113c810.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5931b5bc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Commercial.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2b349938.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e48193cf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/302904dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a716d4ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Networking.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93bc0acc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/86212b19.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b727005e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbc54cab.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f51bb24c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c28a8a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9c8dfbd4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ccc52f49.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cb1c3204.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ce5e74ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd08c599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6d41d539.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb5fa911.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e35234b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8cb5ee0f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a7c655d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f8fc53da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/de6d66f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d41b5e2a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/41a3f684.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1df5a75f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_2011.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e36a6752.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b872f2b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9576d26b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/228f89db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_ECC_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb717492.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d21b73c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b1b94ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/595e996b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_RSA_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b46e03d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/128f4b91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_3_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81f2d2b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Autoridad_de_Certificacion_Firmaprofesional_CIF_A62634068.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3bde41ac.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d16a5865.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_EC-384_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0179095f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ffa7f1eb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9482e63a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4dae3dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e359ba6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7e067d03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/95aff9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7746a63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Baltimore_CyberTrust_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/653b494a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3ad48a91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_2_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/54657681.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/82223c44.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8de2f56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d9dafe4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d96b65e2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee64a828.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40547a79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5a3f0ff8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a780d93.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/34d996fb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/eed8c118.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/89c02a45.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b1159c4c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d6325660.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4c339cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8312c4c1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_E1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8508e720.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5fdd185d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48bec511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/69105f4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b9bc432.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/32888f65.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b03dec0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/219d9499.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5acf816d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbf06781.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc99f41e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AAA_Certificate_Services.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/985c1f52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8794b4e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_BR_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7c037b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ef954a4e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_EV_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2add47b6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/90c5a3c8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0f3e76e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/53a1b57a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_EV_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5ad8a5d6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/68dd7389.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d04f354.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d6437c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/062cdee6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bd43e1dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7f3d5d1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c491639e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3513523f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/399e7759.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/feffd413.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d18e9066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/607986c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c90bc37d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 12:15:31 crc restorecon[4678]:
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1b0f7e5c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e08bfd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dd8e9d41.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed39abd0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a3418fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bc3f2570.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_High_Assurance_EV_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/244b5494.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81b9768f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4be590e0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_ECC_P384_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 12:15:31 crc restorecon[4678]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9846683b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/252252d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e8e7201.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_RSA4096_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d52c538d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c44cc0c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Trusted_Root_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75d1b2ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a2c66da8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ecccd8db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 12:15:31 crc restorecon[4678]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust.net_Certification_Authority__2048_.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/aee5f10d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e7271e8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0e59380.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4c3982f2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b99d060.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf64f35b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0a775a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/002c0b4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cc450945.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_EC1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/106f3e4d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 12:15:31 crc restorecon[4678]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b3fb433b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4042bcee.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/02265526.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/455f1b52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0d69c7e1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9f727ac7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5e98733a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0cd152c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc4d6a89.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6187b673.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 12:15:31 crc restorecon[4678]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/FIRMAPROFESIONAL_CA_ROOT-A_WEB.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ba8887ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/068570d1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f081611a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48a195d8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GDCA_TrustAUTH_R5_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f6fa695.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab59055e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b92fd57f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GLOBALTRUST_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fa5da96b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ec40989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7719f463.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 12:15:31 crc restorecon[4678]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1001acf7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f013ecaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/626dceaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c559d742.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1d3472b9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9479c8c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a81e292b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4bfab552.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e071171e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 12:15:31 crc restorecon[4678]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/57bcb2da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_ECC_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab5346f4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5046c355.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_RSA_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/865fbdf9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da0cfd1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/85cde254.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_ECC_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbb3f32b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureSign_RootCA11.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5860aaa6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 12:15:31 crc restorecon[4678]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/31188b5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HiPKI_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c7f1359b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f15c80c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hongkong_Post_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/09789157.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/18856ac4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e09d511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Commercial_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cf701eeb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d06393bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Public_Sector_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 12:15:31 crc restorecon[4678]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/10531352.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Izenpe.com.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureTrust_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0ed035a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsec_e-Szigno_Root_CA_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8160b96c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8651083.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2c63f966.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_ECC_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d89cda1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/01419da9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_RSA_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 12:15:31 crc restorecon[4678]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7a5b843.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_RSA_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf53fb88.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9591a472.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3afde786.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Gold_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NAVER_Global_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3fb36b73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d39b0a2c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a89d74c2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd58d51e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7db1890.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NetLock_Arany__Class_Gold__F__tan__s__tv__ny.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 12:15:31 crc restorecon[4678]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/988a38cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/60afe812.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f39fc864.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5443e9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GB_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e73d606e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dfc0fe80.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b66938e9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e1eab7c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GC_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/773e07ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c899c73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d59297b8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 12:15:31 crc restorecon[4678]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ddcda989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_1_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/749e9e03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/52b525c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7e8dc79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a819ef2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/08063a00.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b483515.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/064e0aa9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1f58a078.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 12:15:31 crc restorecon[4678]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6f7454b3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7fa05551.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76faf6c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9339512a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f387163d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee37c333.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e18bfb83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e442e424.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fe8a2cd8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/23f4c490.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5cd81ad7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 12:15:31 crc restorecon[4678]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0c70a8d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7892ad52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SZAFIR_ROOT_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4f316efb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_RSA_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/06dc52d5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/583d0756.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0bf05006.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/88950faa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9046744a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c860d51.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_RSA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6fa5da56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/33ee480d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Secure_Global_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/63a2c897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_ECC_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bdacca6f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ff34af3f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbff3a01.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_ECC_RootCA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_C1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/406c9bb1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_C3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Services_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Silver_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/99e1b953.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/14bc7599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TUBITAK_Kamu_SM_SSL_Kok_Sertifikasi_-_Surum_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a3adc42.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f459871d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_ECC_Root_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_RSA_Root_2023.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TeliaSonera_Root_CA_v1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telia_Root_CA_v2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f103249.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f058632f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-certificates.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9bf03295.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/98aaf404.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1cef98f5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/073bfcc5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2923b3f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f249de83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/edcbddb5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P256_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b5697b0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ae85e5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b74d2bd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P384_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d887a5bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9aef356c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TunTrust_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd64f3fc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e13665f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Extended_Validation_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f5dc4f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da7377f6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Global_G2_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c01eb047.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/304d27c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed858448.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f30dd6ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/04f60c28.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_ECC_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fc5a8f99.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/35105088.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee532fd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/XRamp_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/706f604c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76579174.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d86cdd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/882de061.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f618aec.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a9d40e02.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e-Szigno_Root_CA_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e868b802.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/83e9984f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ePKI_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca6e4ad9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d6523ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4b718d9b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/869fbf79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/containers/registry/f8d22bdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/6e8bbfac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/54dd7996 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/a4f1bb05 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/207129da not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/c1df39e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/15b8f1cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/77bd6913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/2382c1b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/704ce128 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/70d16fe0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/bfb95535 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/57a8e8e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/1b9d3e5e not reset as customized by admin to system_u:object_r:container_file_t:s0:c107,c917
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/fddb173c not reset as customized by admin to system_u:object_r:container_file_t:s0:c202,c983
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/95d3c6c4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/bfb5fff5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/2aef40aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/c0391cad not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/1119e69d not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/660608b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/8220bd53 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/85f99d5c not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/4b0225f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/9c2a3394 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/e820b243 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/1ca52ea0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/e6988e45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/6655f00b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/98bc3986 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/08e3458a not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/2a191cb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/6c4eeefb not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/f61a549c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/24891863 not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/fbdfd89c not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/9b63b3bc not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/8acde6d6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/node-driver-registrar/59ecbba3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/csi-provisioner/685d4be3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/containers/route-controller-manager/feaea55e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/63709497 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/d966b7fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/f5773757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/81c9edb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/57bf57ee not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/86f5e6aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/0aabe31d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/d2af85c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/09d157d9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller not reset 
as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:31 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:32 crc restorecon[4678]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:32 crc restorecon[4678]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:32 crc restorecon[4678]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:32 crc restorecon[4678]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13
Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator/catalog.json not reset as
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:32 crc restorecon[4678]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c0fe7256 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c30319e4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/e6b1dd45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/2bb643f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/920de426 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:32 crc restorecon[4678]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/70fa1e87 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/a1c12a2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/9442e6c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/5b45ec72 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:32 crc restorecon[4678]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:32 crc restorecon[4678]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:32 crc restorecon[4678]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:32 crc restorecon[4678]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:32 crc restorecon[4678]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:32 crc restorecon[4678]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:32 crc restorecon[4678]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/3c9f3a59 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/1091c11b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/9a6821c6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/ec0c35e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/517f37e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/6214fe78 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/ba189c8b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/351e4f31 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/c0f219ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/8069f607 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/559c3d82 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/605ad488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/148df488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/3bf6dcb4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c133,c223 Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/022a2feb not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/938c3924 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/729fe23e not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/1fd5cbd4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/a96697e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/e155ddca not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/10dd0e0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 10 12:15:32 crc restorecon[4678]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/etc-hosts not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c682,c947
Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/6f2c8392 not reset as customized by admin to system_u:object_r:container_file_t:s0:c267,c588
Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/bd241ad9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/plugins not reset as customized by admin to system_u:object_r:container_file_t:s0
Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/plugins/csi-hostpath not reset as customized by admin to system_u:object_r:container_file_t:s0
Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/plugins/csi-hostpath/csi.sock not reset as customized by admin to system_u:object_r:container_file_t:s0
Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/plugins/kubernetes.io not reset as customized by admin to system_u:object_r:container_file_t:s0
Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/plugins/kubernetes.io/csi not reset as customized by admin to system_u:object_r:container_file_t:s0
Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0
Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983 not reset as customized by admin to system_u:object_r:container_file_t:s0
Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount not reset as customized by admin to system_u:object_r:container_file_t:s0
Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/vol_data.json not reset as customized by admin to system_u:object_r:container_file_t:s0
Dec 10 12:15:32 crc restorecon[4678]: /var/lib/kubelet/plugins_registry not reset as customized by admin to system_u:object_r:container_file_t:s0
Dec 10 12:15:32 crc restorecon[4678]: Relabeled /var/usrlocal/bin/kubenswrapper from system_u:object_r:bin_t:s0 to system_u:object_r:kubelet_exec_t:s0
Dec 10 12:15:32 crc kubenswrapper[4689]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Dec 10 12:15:32 crc kubenswrapper[4689]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version.
Dec 10 12:15:32 crc kubenswrapper[4689]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Dec 10 12:15:32 crc kubenswrapper[4689]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Dec 10 12:15:32 crc kubenswrapper[4689]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Dec 10 12:15:32 crc kubenswrapper[4689]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.338648    4689 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Dec 10 12:15:32 crc kubenswrapper[4689]: W1210 12:15:32.341351    4689 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Dec 10 12:15:32 crc kubenswrapper[4689]: W1210 12:15:32.341372    4689 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Dec 10 12:15:32 crc kubenswrapper[4689]: W1210 12:15:32.341377    4689 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Dec 10 12:15:32 crc kubenswrapper[4689]: W1210 12:15:32.341382    4689 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Dec 10 12:15:32 crc kubenswrapper[4689]: W1210 12:15:32.341387    4689 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Dec 10 12:15:32 crc kubenswrapper[4689]: W1210 12:15:32.341394    4689 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Dec 10 12:15:32 crc kubenswrapper[4689]: W1210 12:15:32.341400    4689 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Dec 10 12:15:32 crc kubenswrapper[4689]: W1210 12:15:32.341404    4689 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Dec 10 12:15:32 crc kubenswrapper[4689]: W1210 12:15:32.341407    4689 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Dec 10 12:15:32 crc kubenswrapper[4689]: W1210 12:15:32.341411    4689 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Dec 10 12:15:32 crc kubenswrapper[4689]: W1210 12:15:32.341415    4689 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Dec 10 12:15:32 crc kubenswrapper[4689]: W1210 12:15:32.341419    4689 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Dec 10 12:15:32 crc kubenswrapper[4689]: W1210 12:15:32.341423    4689 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Dec 10 12:15:32 crc kubenswrapper[4689]: W1210 12:15:32.341432    4689 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Dec 10 12:15:32 crc kubenswrapper[4689]: W1210 12:15:32.341435    4689 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Dec 10 12:15:32 crc kubenswrapper[4689]: W1210 12:15:32.341439    4689 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Dec 10 12:15:32 crc kubenswrapper[4689]: W1210 12:15:32.341443    4689 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Dec 10 12:15:32 crc kubenswrapper[4689]: W1210 12:15:32.341448    4689 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Dec 10 12:15:32 crc kubenswrapper[4689]: W1210 12:15:32.341453    4689 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Dec 10 12:15:32 crc kubenswrapper[4689]: W1210 12:15:32.341458    4689 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Dec 10 12:15:32 crc kubenswrapper[4689]: W1210 12:15:32.341462    4689 feature_gate.go:330] unrecognized feature gate: SignatureStores
Dec 10 12:15:32 crc kubenswrapper[4689]: W1210 12:15:32.341466    4689 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Dec 10 12:15:32 crc kubenswrapper[4689]: W1210 12:15:32.341470    4689 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Dec 10 12:15:32 crc kubenswrapper[4689]: W1210 12:15:32.341474    4689 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Dec 10 12:15:32 crc kubenswrapper[4689]: W1210 12:15:32.341477    4689 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Dec 10 12:15:32 crc kubenswrapper[4689]: W1210 12:15:32.341481    4689 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Dec 10 12:15:32 crc kubenswrapper[4689]: W1210 12:15:32.341484    4689 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Dec 10 12:15:32 crc kubenswrapper[4689]: W1210 12:15:32.341488    4689 feature_gate.go:330] unrecognized feature gate: PinnedImages
Dec 10 12:15:32 crc kubenswrapper[4689]: W1210 12:15:32.341492    4689 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Dec 10 12:15:32 crc kubenswrapper[4689]: W1210 12:15:32.341495    4689 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Dec 10 12:15:32 crc kubenswrapper[4689]: W1210 12:15:32.341500    4689 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Dec 10 12:15:32 crc kubenswrapper[4689]: W1210 12:15:32.341505    4689 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Dec 10 12:15:32 crc kubenswrapper[4689]: W1210 12:15:32.341509    4689 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Dec 10 12:15:32 crc kubenswrapper[4689]: W1210 12:15:32.341513    4689 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Dec 10 12:15:32 crc kubenswrapper[4689]: W1210 12:15:32.341517    4689 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Dec 10 12:15:32 crc kubenswrapper[4689]: W1210 12:15:32.341521    4689 feature_gate.go:330] unrecognized feature gate: NewOLM
Dec 10 12:15:32 crc kubenswrapper[4689]: W1210 12:15:32.341526    4689 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Dec 10 12:15:32 crc kubenswrapper[4689]: W1210 12:15:32.341532    4689 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Dec 10 12:15:32 crc kubenswrapper[4689]: W1210 12:15:32.341537    4689 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Dec 10 12:15:32 crc kubenswrapper[4689]: W1210 12:15:32.341542    4689 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Dec 10 12:15:32 crc kubenswrapper[4689]: W1210 12:15:32.341547    4689 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Dec 10 12:15:32 crc kubenswrapper[4689]: W1210 12:15:32.341552    4689 feature_gate.go:330] unrecognized feature gate: OVNObservability
Dec 10 12:15:32 crc kubenswrapper[4689]: W1210 12:15:32.341557    4689 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Dec 10 12:15:32 crc kubenswrapper[4689]: W1210 12:15:32.341561    4689 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Dec 10 12:15:32 crc kubenswrapper[4689]: W1210 12:15:32.341564    4689 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Dec 10 12:15:32 crc kubenswrapper[4689]: W1210 12:15:32.341568    4689 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Dec 10 12:15:32 crc kubenswrapper[4689]: W1210 12:15:32.341572    4689 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Dec 10 12:15:32 crc kubenswrapper[4689]: W1210 12:15:32.341576    4689 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Dec 10 12:15:32 crc kubenswrapper[4689]: W1210 12:15:32.341580    4689 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Dec 10 12:15:32 crc kubenswrapper[4689]: W1210 12:15:32.341584    4689 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Dec 10 12:15:32 crc kubenswrapper[4689]: W1210 12:15:32.341588    4689 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Dec 10 12:15:32 crc kubenswrapper[4689]: W1210 12:15:32.341591    4689 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Dec 10 12:15:32 crc kubenswrapper[4689]: W1210 12:15:32.341596    4689 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Dec 10 12:15:32 crc kubenswrapper[4689]: W1210 12:15:32.341600    4689 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Dec 10 12:15:32 crc kubenswrapper[4689]: W1210 12:15:32.341606 4689 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Dec 10 12:15:32 crc kubenswrapper[4689]: W1210 12:15:32.341610 4689 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Dec 10 12:15:32 crc kubenswrapper[4689]: W1210 12:15:32.341616 4689 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Dec 10 12:15:32 crc kubenswrapper[4689]: W1210 12:15:32.341622 4689 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Dec 10 12:15:32 crc kubenswrapper[4689]: W1210 12:15:32.341626 4689 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Dec 10 12:15:32 crc kubenswrapper[4689]: W1210 12:15:32.341630 4689 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Dec 10 12:15:32 crc kubenswrapper[4689]: W1210 12:15:32.341635 4689 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Dec 10 12:15:32 crc kubenswrapper[4689]: W1210 12:15:32.341639 4689 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Dec 10 12:15:32 crc kubenswrapper[4689]: W1210 12:15:32.341643 4689 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Dec 10 12:15:32 crc kubenswrapper[4689]: W1210 12:15:32.341649 4689 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Dec 10 12:15:32 crc kubenswrapper[4689]: W1210 12:15:32.341654 4689 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Dec 10 12:15:32 crc kubenswrapper[4689]: W1210 12:15:32.341658 4689 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Dec 10 12:15:32 crc kubenswrapper[4689]: W1210 12:15:32.341663 4689 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Dec 10 12:15:32 crc kubenswrapper[4689]: W1210 12:15:32.341668 4689 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Dec 10 12:15:32 crc kubenswrapper[4689]: W1210 12:15:32.341672 4689 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Dec 10 12:15:32 crc kubenswrapper[4689]: W1210 12:15:32.341677 4689 feature_gate.go:330] unrecognized feature gate: Example Dec 10 12:15:32 crc kubenswrapper[4689]: W1210 12:15:32.341681 4689 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.341986 4689 flags.go:64] FLAG: --address="0.0.0.0" Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.341999 4689 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]" Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.342011 4689 flags.go:64] FLAG: --anonymous-auth="true" Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.342017 4689 flags.go:64] FLAG: --application-metrics-count-limit="100" Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.342022 4689 flags.go:64] FLAG: --authentication-token-webhook="false" Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.342026 4689 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s" Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.342032 4689 flags.go:64] FLAG: --authorization-mode="AlwaysAllow" Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.342038 4689 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s" Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.342043 4689 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s" Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.342047 4689 flags.go:64] FLAG: 
--boot-id-file="/proc/sys/kernel/random/boot_id" Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.342051 4689 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig" Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.342056 4689 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki" Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.342060 4689 flags.go:64] FLAG: --cgroup-driver="cgroupfs" Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.342065 4689 flags.go:64] FLAG: --cgroup-root="" Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.342069 4689 flags.go:64] FLAG: --cgroups-per-qos="true" Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.342073 4689 flags.go:64] FLAG: --client-ca-file="" Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.342077 4689 flags.go:64] FLAG: --cloud-config="" Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.342082 4689 flags.go:64] FLAG: --cloud-provider="" Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.342086 4689 flags.go:64] FLAG: --cluster-dns="[]" Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.342091 4689 flags.go:64] FLAG: --cluster-domain="" Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.342095 4689 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf" Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.342100 4689 flags.go:64] FLAG: --config-dir="" Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.342104 4689 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json" Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.342108 4689 flags.go:64] FLAG: --container-log-max-files="5" Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.342113 4689 flags.go:64] FLAG: --container-log-max-size="10Mi" Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.342117 4689 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock" Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.342122 4689 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock" Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.342127 4689 flags.go:64] FLAG: --containerd-namespace="k8s.io" Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.342134 4689 flags.go:64] FLAG: --contention-profiling="false" Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.342138 4689 flags.go:64] FLAG: --cpu-cfs-quota="true" Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.342142 4689 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms" Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.342147 4689 flags.go:64] FLAG: --cpu-manager-policy="none" Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.342151 4689 flags.go:64] FLAG: --cpu-manager-policy-options="" Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.342157 4689 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s" Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.342165 4689 flags.go:64] FLAG: --enable-controller-attach-detach="true" Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.342170 4689 flags.go:64] FLAG: --enable-debugging-handlers="true" Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.342174 4689 flags.go:64] FLAG: --enable-load-reader="false" Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.342179 4689 flags.go:64] FLAG: --enable-server="true" Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.342185 4689 flags.go:64] FLAG: --enforce-node-allocatable="[pods]" Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.342191 4689 
flags.go:64] FLAG: --event-burst="100" Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.342196 4689 flags.go:64] FLAG: --event-qps="50" Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.342200 4689 flags.go:64] FLAG: --event-storage-age-limit="default=0" Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.342204 4689 flags.go:64] FLAG: --event-storage-event-limit="default=0" Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.342208 4689 flags.go:64] FLAG: --eviction-hard="" Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.342213 4689 flags.go:64] FLAG: --eviction-max-pod-grace-period="0" Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.342217 4689 flags.go:64] FLAG: --eviction-minimum-reclaim="" Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.342221 4689 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s" Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.342227 4689 flags.go:64] FLAG: --eviction-soft="" Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.342231 4689 flags.go:64] FLAG: --eviction-soft-grace-period="" Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.342235 4689 flags.go:64] FLAG: --exit-on-lock-contention="false" Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.342239 4689 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false" Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.342243 4689 flags.go:64] FLAG: --experimental-mounter-path="" Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.342247 4689 flags.go:64] FLAG: --fail-cgroupv1="false" Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.342251 4689 flags.go:64] FLAG: --fail-swap-on="true" Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.342255 4689 flags.go:64] FLAG: --feature-gates="" Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.342261 4689 flags.go:64] FLAG: --file-check-frequency="20s" Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.342265 4689 flags.go:64] FLAG: --global-housekeeping-interval="1m0s" Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.342269 4689 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge" Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.342274 4689 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1" Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.342278 4689 flags.go:64] FLAG: --healthz-port="10248" Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.342282 4689 flags.go:64] FLAG: --help="false" Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.342286 4689 flags.go:64] FLAG: --hostname-override="" Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.342290 4689 flags.go:64] FLAG: --housekeeping-interval="10s" Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.342295 4689 flags.go:64] FLAG: --http-check-frequency="20s" Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.342299 4689 flags.go:64] FLAG: --image-credential-provider-bin-dir="" Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.342304 4689 flags.go:64] FLAG: --image-credential-provider-config="" Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.342308 4689 flags.go:64] FLAG: --image-gc-high-threshold="85" Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.342312 4689 flags.go:64] FLAG: --image-gc-low-threshold="80" Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.342316 4689 flags.go:64] FLAG: --image-service-endpoint="" Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.342320 4689 flags.go:64] FLAG: 
--kernel-memcg-notification="false" Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.342324 4689 flags.go:64] FLAG: --kube-api-burst="100" Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.342328 4689 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf" Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.342332 4689 flags.go:64] FLAG: --kube-api-qps="50" Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.342336 4689 flags.go:64] FLAG: --kube-reserved="" Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.342340 4689 flags.go:64] FLAG: --kube-reserved-cgroup="" Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.342344 4689 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig" Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.342348 4689 flags.go:64] FLAG: --kubelet-cgroups="" Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.342352 4689 flags.go:64] FLAG: --local-storage-capacity-isolation="true" Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.342357 4689 flags.go:64] FLAG: --lock-file="" Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.342361 4689 flags.go:64] FLAG: --log-cadvisor-usage="false" Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.342365 4689 flags.go:64] FLAG: --log-flush-frequency="5s" Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.342369 4689 flags.go:64] FLAG: --log-json-info-buffer-size="0" Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.342375 4689 flags.go:64] FLAG: --log-json-split-stream="false" Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.342379 4689 flags.go:64] FLAG: --log-text-info-buffer-size="0" Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.342383 4689 flags.go:64] FLAG: --log-text-split-stream="false" Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.342387 4689 flags.go:64] FLAG: --logging-format="text" Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.342391 4689 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id" Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.342396 4689 flags.go:64] FLAG: --make-iptables-util-chains="true" Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.342400 4689 flags.go:64] FLAG: --manifest-url="" Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.342404 4689 flags.go:64] FLAG: --manifest-url-header="" Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.342410 4689 flags.go:64] FLAG: --max-housekeeping-interval="15s" Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.342414 4689 flags.go:64] FLAG: --max-open-files="1000000" Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.342419 4689 flags.go:64] FLAG: --max-pods="110" Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.342424 4689 flags.go:64] FLAG: --maximum-dead-containers="-1" Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.342429 4689 flags.go:64] FLAG: --maximum-dead-containers-per-container="1" Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.342434 4689 flags.go:64] FLAG: --memory-manager-policy="None" Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.342439 4689 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s" Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.342443 4689 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s" Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.342448 4689 flags.go:64] FLAG: --node-ip="192.168.126.11" Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.342453 4689 flags.go:64] FLAG: 
--node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos" Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.342464 4689 flags.go:64] FLAG: --node-status-max-images="50" Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.342469 4689 flags.go:64] FLAG: --node-status-update-frequency="10s" Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.342473 4689 flags.go:64] FLAG: --oom-score-adj="-999" Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.342477 4689 flags.go:64] FLAG: --pod-cidr="" Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.342481 4689 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:33549946e22a9ffa738fd94b1345f90921bc8f92fa6137784cb33c77ad806f9d" Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.342488 4689 flags.go:64] FLAG: --pod-manifest-path="" Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.342492 4689 flags.go:64] FLAG: --pod-max-pids="-1" Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.342496 4689 flags.go:64] FLAG: --pods-per-core="0" Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.342500 4689 flags.go:64] FLAG: --port="10250" Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.342504 4689 flags.go:64] FLAG: --protect-kernel-defaults="false" Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.342508 4689 flags.go:64] FLAG: --provider-id="" Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.342513 4689 flags.go:64] FLAG: --qos-reserved="" Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.342517 4689 flags.go:64] FLAG: --read-only-port="10255" Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.342521 4689 flags.go:64] FLAG: --register-node="true" Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.342525 4689 flags.go:64] FLAG: --register-schedulable="true" Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.342529 4689 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule" Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.342536 4689 flags.go:64] FLAG: --registry-burst="10" Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.342540 4689 flags.go:64] FLAG: --registry-qps="5" Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.342546 4689 flags.go:64] FLAG: --reserved-cpus="" Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.342550 4689 flags.go:64] FLAG: --reserved-memory="" Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.342555 4689 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf" Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.342561 4689 flags.go:64] FLAG: --root-dir="/var/lib/kubelet" Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.342566 4689 flags.go:64] FLAG: --rotate-certificates="false" Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.342570 4689 flags.go:64] FLAG: --rotate-server-certificates="false" Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.342575 4689 flags.go:64] FLAG: --runonce="false" Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.342579 4689 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service" Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.342584 4689 flags.go:64] FLAG: --runtime-request-timeout="2m0s" Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.342588 4689 flags.go:64] FLAG: --seccomp-default="false" Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.342592 4689 flags.go:64] FLAG: --serialize-image-pulls="true" 
Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.342596 4689 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s" Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.342600 4689 flags.go:64] FLAG: --storage-driver-db="cadvisor" Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.342604 4689 flags.go:64] FLAG: --storage-driver-host="localhost:8086" Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.342609 4689 flags.go:64] FLAG: --storage-driver-password="root" Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.342614 4689 flags.go:64] FLAG: --storage-driver-secure="false" Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.342618 4689 flags.go:64] FLAG: --storage-driver-table="stats" Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.342622 4689 flags.go:64] FLAG: --storage-driver-user="root" Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.342627 4689 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s" Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.342632 4689 flags.go:64] FLAG: --sync-frequency="1m0s" Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.342636 4689 flags.go:64] FLAG: --system-cgroups="" Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.342640 4689 flags.go:64] FLAG: --system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi" Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.342646 4689 flags.go:64] FLAG: --system-reserved-cgroup="" Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.342650 4689 flags.go:64] FLAG: --tls-cert-file="" Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.342654 4689 flags.go:64] FLAG: --tls-cipher-suites="[]" Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.342660 4689 flags.go:64] FLAG: --tls-min-version="" Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.342663 4689 flags.go:64] FLAG: --tls-private-key-file="" Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.342667 4689 flags.go:64] FLAG: --topology-manager-policy="none" Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.342671 4689 flags.go:64] FLAG: --topology-manager-policy-options="" Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.342675 4689 flags.go:64] FLAG: --topology-manager-scope="container" Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.342679 4689 flags.go:64] FLAG: --v="2" Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.342684 4689 flags.go:64] FLAG: --version="false" Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.342690 4689 flags.go:64] FLAG: --vmodule="" Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.342695 4689 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec" Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.342699 4689 flags.go:64] FLAG: --volume-stats-agg-period="1m0s" Dec 10 12:15:32 crc kubenswrapper[4689]: W1210 12:15:32.342814 4689 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Dec 10 12:15:32 crc kubenswrapper[4689]: W1210 12:15:32.342822 4689 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Dec 10 12:15:32 crc kubenswrapper[4689]: W1210 12:15:32.342827 4689 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Dec 10 12:15:32 crc kubenswrapper[4689]: W1210 12:15:32.342832 4689 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Dec 10 12:15:32 crc kubenswrapper[4689]: W1210 12:15:32.342837 4689 feature_gate.go:330] unrecognized feature gate: 
MachineAPIProviderOpenStack Dec 10 12:15:32 crc kubenswrapper[4689]: W1210 12:15:32.342842 4689 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Dec 10 12:15:32 crc kubenswrapper[4689]: W1210 12:15:32.342847 4689 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Dec 10 12:15:32 crc kubenswrapper[4689]: W1210 12:15:32.342852 4689 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Dec 10 12:15:32 crc kubenswrapper[4689]: W1210 12:15:32.342857 4689 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Dec 10 12:15:32 crc kubenswrapper[4689]: W1210 12:15:32.342862 4689 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Dec 10 12:15:32 crc kubenswrapper[4689]: W1210 12:15:32.342868 4689 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Dec 10 12:15:32 crc kubenswrapper[4689]: W1210 12:15:32.342874 4689 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Dec 10 12:15:32 crc kubenswrapper[4689]: W1210 12:15:32.342878 4689 feature_gate.go:330] unrecognized feature gate: GatewayAPI Dec 10 12:15:32 crc kubenswrapper[4689]: W1210 12:15:32.342883 4689 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Dec 10 12:15:32 crc kubenswrapper[4689]: W1210 12:15:32.342887 4689 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Dec 10 12:15:32 crc kubenswrapper[4689]: W1210 12:15:32.342891 4689 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Dec 10 12:15:32 crc kubenswrapper[4689]: W1210 12:15:32.342894 4689 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Dec 10 12:15:32 crc kubenswrapper[4689]: W1210 12:15:32.342898 4689 feature_gate.go:330] unrecognized feature gate: PlatformOperators Dec 10 12:15:32 crc kubenswrapper[4689]: W1210 12:15:32.342902 4689 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Dec 10 12:15:32 crc kubenswrapper[4689]: W1210 12:15:32.342906 4689 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Dec 10 12:15:32 crc kubenswrapper[4689]: W1210 12:15:32.342910 4689 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Dec 10 12:15:32 crc kubenswrapper[4689]: W1210 12:15:32.342914 4689 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Dec 10 12:15:32 crc kubenswrapper[4689]: W1210 12:15:32.342917 4689 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Dec 10 12:15:32 crc kubenswrapper[4689]: W1210 12:15:32.342920 4689 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Dec 10 12:15:32 crc kubenswrapper[4689]: W1210 12:15:32.342924 4689 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Dec 10 12:15:32 crc kubenswrapper[4689]: W1210 12:15:32.342928 4689 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Dec 10 12:15:32 crc kubenswrapper[4689]: W1210 12:15:32.342933 4689 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Dec 10 12:15:32 crc kubenswrapper[4689]: W1210 12:15:32.342937 4689 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Dec 10 12:15:32 crc kubenswrapper[4689]: W1210 12:15:32.342941 4689 feature_gate.go:330] unrecognized feature gate: Example Dec 10 12:15:32 crc kubenswrapper[4689]: W1210 12:15:32.342945 4689 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Dec 10 12:15:32 crc kubenswrapper[4689]: W1210 12:15:32.342949 4689 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Dec 10 
12:15:32 crc kubenswrapper[4689]: W1210 12:15:32.342955 4689 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Dec 10 12:15:32 crc kubenswrapper[4689]: W1210 12:15:32.342959 4689 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Dec 10 12:15:32 crc kubenswrapper[4689]: W1210 12:15:32.342963 4689 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Dec 10 12:15:32 crc kubenswrapper[4689]: W1210 12:15:32.342971 4689 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Dec 10 12:15:32 crc kubenswrapper[4689]: W1210 12:15:32.342990 4689 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Dec 10 12:15:32 crc kubenswrapper[4689]: W1210 12:15:32.342994 4689 feature_gate.go:330] unrecognized feature gate: SignatureStores Dec 10 12:15:32 crc kubenswrapper[4689]: W1210 12:15:32.342999 4689 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Dec 10 12:15:32 crc kubenswrapper[4689]: W1210 12:15:32.343004 4689 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Dec 10 12:15:32 crc kubenswrapper[4689]: W1210 12:15:32.343008 4689 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Dec 10 12:15:32 crc kubenswrapper[4689]: W1210 12:15:32.343012 4689 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Dec 10 12:15:32 crc kubenswrapper[4689]: W1210 12:15:32.343016 4689 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Dec 10 12:15:32 crc kubenswrapper[4689]: W1210 12:15:32.343020 4689 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Dec 10 12:15:32 crc kubenswrapper[4689]: W1210 12:15:32.343023 4689 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Dec 10 12:15:32 crc kubenswrapper[4689]: W1210 12:15:32.343027 4689 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Dec 10 12:15:32 crc kubenswrapper[4689]: W1210 12:15:32.343031 4689 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Dec 10 12:15:32 crc kubenswrapper[4689]: W1210 12:15:32.343035 4689 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Dec 10 12:15:32 crc kubenswrapper[4689]: W1210 12:15:32.343040 4689 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Dec 10 12:15:32 crc kubenswrapper[4689]: W1210 12:15:32.343044 4689 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Dec 10 12:15:32 crc kubenswrapper[4689]: W1210 12:15:32.343048 4689 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Dec 10 12:15:32 crc kubenswrapper[4689]: W1210 12:15:32.343052 4689 feature_gate.go:330] unrecognized feature gate: OVNObservability Dec 10 12:15:32 crc kubenswrapper[4689]: W1210 12:15:32.343055 4689 feature_gate.go:330] unrecognized feature gate: InsightsConfig Dec 10 12:15:32 crc kubenswrapper[4689]: W1210 12:15:32.343059 4689 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Dec 10 12:15:32 crc kubenswrapper[4689]: W1210 12:15:32.343063 4689 feature_gate.go:330] unrecognized feature gate: NewOLM Dec 10 12:15:32 crc kubenswrapper[4689]: W1210 12:15:32.343066 4689 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Dec 10 12:15:32 crc kubenswrapper[4689]: W1210 12:15:32.343071 4689 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Dec 10 12:15:32 crc kubenswrapper[4689]: W1210 12:15:32.343074 4689 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Dec 10 12:15:32 crc kubenswrapper[4689]: W1210 12:15:32.343078 4689 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Dec 10 12:15:32 crc kubenswrapper[4689]: W1210 12:15:32.343082 4689 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Dec 10 12:15:32 crc kubenswrapper[4689]: W1210 12:15:32.343085 4689 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Dec 10 12:15:32 crc kubenswrapper[4689]: W1210 12:15:32.343089 4689 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Dec 10 12:15:32 crc kubenswrapper[4689]: W1210 12:15:32.343092 4689 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Dec 10 12:15:32 crc kubenswrapper[4689]: W1210 12:15:32.343097 4689 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Dec 10 12:15:32 crc kubenswrapper[4689]: W1210 12:15:32.343102 4689 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Dec 10 12:15:32 crc kubenswrapper[4689]: W1210 12:15:32.343106 4689 feature_gate.go:330] unrecognized feature gate: PinnedImages Dec 10 12:15:32 crc kubenswrapper[4689]: W1210 12:15:32.343110 4689 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Dec 10 12:15:32 crc kubenswrapper[4689]: W1210 12:15:32.343114 4689 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Dec 10 12:15:32 crc kubenswrapper[4689]: W1210 12:15:32.343117 4689 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Dec 10 12:15:32 crc kubenswrapper[4689]: W1210 12:15:32.343121 4689 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Dec 10 12:15:32 crc kubenswrapper[4689]: W1210 12:15:32.343125 4689 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Dec 10 12:15:32 crc kubenswrapper[4689]: W1210 12:15:32.343128 4689 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.343144 4689 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.355042 4689 server.go:491] "Kubelet version" kubeletVersion="v1.31.5" Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.355098 4689 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 10 12:15:32 crc kubenswrapper[4689]: W1210 12:15:32.355202 4689 feature_gate.go:330] unrecognized feature gate: InsightsConfig Dec 10 12:15:32 crc kubenswrapper[4689]: W1210 12:15:32.355212 4689 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Dec 10 12:15:32 crc kubenswrapper[4689]: W1210 12:15:32.355217 4689 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Dec 10 12:15:32 crc kubenswrapper[4689]: W1210 12:15:32.355223 4689 feature_gate.go:330] unrecognized feature gate: GatewayAPI Dec 10 12:15:32 crc kubenswrapper[4689]: W1210 12:15:32.355229 4689 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Dec 10 12:15:32 crc kubenswrapper[4689]: W1210 12:15:32.355235 4689 feature_gate.go:330] unrecognized feature gate: PinnedImages Dec 10 12:15:32 crc kubenswrapper[4689]: W1210 12:15:32.355245 4689 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
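The long runs of "unrecognized feature gate" warnings arise because the node config hands the full OpenShift platform gate list (GatewayAPI, NewOLM, PinnedImages, ...) to a binary that only registers the upstream Kubernetes gates: unknown names are logged and skipped rather than failing startup, deprecated gates (KMSv1) and GA gates (ValidatingAdmissionPolicy, CloudDualStackNodeIPs, DisableKubeletCloudCredentialProviders) draw their own warnings, and the surviving map is then printed by feature_gate.go:386. Below is a small sketch of that tolerant merge, written as plain map logic rather than the real k8s.io/component-base/featuregate implementation.

```go
// feature_gate_sketch.go: tolerant feature-gate merging, where known gates
// are applied and unknown ones only produce a warning. This mimics the
// behavior visible in the log; it is not the actual kubelet code.
package main

import "log"

func main() {
	// Gates this binary was compiled to know about (tiny subset).
	known := map[string]bool{ // name -> default
		"CloudDualStackNodeIPs": true,
		"KMSv1":                 false,
		"NodeSwap":              false,
	}
	// Gates requested by the node config (subset of what the log shows).
	requested := map[string]bool{
		"CloudDualStackNodeIPs": true,
		"KMSv1":                 true,
		"GatewayAPI":            true, // platform-level gate, unknown here
	}

	effective := map[string]bool{}
	for name, def := range known {
		effective[name] = def
	}
	for name, val := range requested {
		if _, ok := known[name]; !ok {
			log.Printf("unrecognized feature gate: %s", name) // warn, don't fail
			continue
		}
		effective[name] = val
	}
	log.Printf("feature gates: %v", effective)
}
```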
Dec 10 12:15:32 crc kubenswrapper[4689]: W1210 12:15:32.355257 4689 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Dec 10 12:15:32 crc kubenswrapper[4689]: W1210 12:15:32.355263 4689 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Dec 10 12:15:32 crc kubenswrapper[4689]: W1210 12:15:32.355270 4689 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Dec 10 12:15:32 crc kubenswrapper[4689]: W1210 12:15:32.355275 4689 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Dec 10 12:15:32 crc kubenswrapper[4689]: W1210 12:15:32.355280 4689 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Dec 10 12:15:32 crc kubenswrapper[4689]: W1210 12:15:32.355286 4689 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Dec 10 12:15:32 crc kubenswrapper[4689]: W1210 12:15:32.355292 4689 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Dec 10 12:15:32 crc kubenswrapper[4689]: W1210 12:15:32.355297 4689 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Dec 10 12:15:32 crc kubenswrapper[4689]: W1210 12:15:32.355301 4689 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Dec 10 12:15:32 crc kubenswrapper[4689]: W1210 12:15:32.355306 4689 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Dec 10 12:15:32 crc kubenswrapper[4689]: W1210 12:15:32.355311 4689 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Dec 10 12:15:32 crc kubenswrapper[4689]: W1210 12:15:32.355316 4689 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Dec 10 12:15:32 crc kubenswrapper[4689]: W1210 12:15:32.355321 4689 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Dec 10 12:15:32 crc kubenswrapper[4689]: W1210 12:15:32.355326 4689 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Dec 10 12:15:32 crc kubenswrapper[4689]: W1210 12:15:32.355330 4689 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Dec 10 12:15:32 crc kubenswrapper[4689]: W1210 12:15:32.355335 4689 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Dec 10 12:15:32 crc kubenswrapper[4689]: W1210 12:15:32.355340 4689 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Dec 10 12:15:32 crc kubenswrapper[4689]: W1210 12:15:32.355347 4689 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
Dec 10 12:15:32 crc kubenswrapper[4689]: W1210 12:15:32.355354 4689 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Dec 10 12:15:32 crc kubenswrapper[4689]: W1210 12:15:32.355360 4689 feature_gate.go:330] unrecognized feature gate: OVNObservability Dec 10 12:15:32 crc kubenswrapper[4689]: W1210 12:15:32.355366 4689 feature_gate.go:330] unrecognized feature gate: NewOLM Dec 10 12:15:32 crc kubenswrapper[4689]: W1210 12:15:32.355371 4689 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Dec 10 12:15:32 crc kubenswrapper[4689]: W1210 12:15:32.355377 4689 feature_gate.go:330] unrecognized feature gate: PlatformOperators Dec 10 12:15:32 crc kubenswrapper[4689]: W1210 12:15:32.355383 4689 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Dec 10 12:15:32 crc kubenswrapper[4689]: W1210 12:15:32.355390 4689 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Dec 10 12:15:32 crc kubenswrapper[4689]: W1210 12:15:32.355397 4689 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Dec 10 12:15:32 crc kubenswrapper[4689]: W1210 12:15:32.355403 4689 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Dec 10 12:15:32 crc kubenswrapper[4689]: W1210 12:15:32.355411 4689 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Dec 10 12:15:32 crc kubenswrapper[4689]: W1210 12:15:32.355417 4689 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Dec 10 12:15:32 crc kubenswrapper[4689]: W1210 12:15:32.355424 4689 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Dec 10 12:15:32 crc kubenswrapper[4689]: W1210 12:15:32.355429 4689 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Dec 10 12:15:32 crc kubenswrapper[4689]: W1210 12:15:32.355435 4689 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Dec 10 12:15:32 crc kubenswrapper[4689]: W1210 12:15:32.355442 4689 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Dec 10 12:15:32 crc kubenswrapper[4689]: W1210 12:15:32.355449 4689 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Dec 10 12:15:32 crc kubenswrapper[4689]: W1210 12:15:32.355454 4689 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Dec 10 12:15:32 crc kubenswrapper[4689]: W1210 12:15:32.355460 4689 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Dec 10 12:15:32 crc kubenswrapper[4689]: W1210 12:15:32.355465 4689 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Dec 10 12:15:32 crc kubenswrapper[4689]: W1210 12:15:32.355470 4689 feature_gate.go:330] unrecognized feature gate: Example Dec 10 12:15:32 crc kubenswrapper[4689]: W1210 12:15:32.355476 4689 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Dec 10 12:15:32 crc kubenswrapper[4689]: W1210 12:15:32.355481 4689 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Dec 10 12:15:32 crc kubenswrapper[4689]: W1210 12:15:32.355486 4689 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Dec 10 12:15:32 crc kubenswrapper[4689]: W1210 12:15:32.355491 4689 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Dec 10 12:15:32 crc kubenswrapper[4689]: W1210 12:15:32.355496 4689 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Dec 10 12:15:32 crc kubenswrapper[4689]: W1210 12:15:32.355502 4689 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Dec 10 12:15:32 crc kubenswrapper[4689]: W1210 12:15:32.355507 4689 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Dec 10 12:15:32 crc kubenswrapper[4689]: W1210 12:15:32.355512 4689 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Dec 10 12:15:32 crc kubenswrapper[4689]: W1210 12:15:32.355516 4689 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Dec 10 12:15:32 crc kubenswrapper[4689]: W1210 12:15:32.355521 4689 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Dec 10 12:15:32 crc kubenswrapper[4689]: W1210 12:15:32.355526 4689 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Dec 10 12:15:32 crc kubenswrapper[4689]: W1210 12:15:32.355532 4689 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Dec 10 12:15:32 crc kubenswrapper[4689]: W1210 12:15:32.355538 4689 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Dec 10 12:15:32 crc kubenswrapper[4689]: W1210 12:15:32.355575 4689 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Dec 10 12:15:32 crc kubenswrapper[4689]: W1210 12:15:32.355580 4689 feature_gate.go:330] unrecognized feature gate: SignatureStores Dec 10 12:15:32 crc kubenswrapper[4689]: W1210 12:15:32.355586 4689 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Dec 10 12:15:32 crc kubenswrapper[4689]: W1210 12:15:32.355591 4689 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Dec 10 12:15:32 crc kubenswrapper[4689]: W1210 12:15:32.355596 4689 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Dec 10 12:15:32 crc kubenswrapper[4689]: W1210 12:15:32.355601 4689 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Dec 10 12:15:32 crc kubenswrapper[4689]: W1210 12:15:32.355605 4689 feature_gate.go:330] unrecognized feature gate: 
AzureWorkloadIdentity Dec 10 12:15:32 crc kubenswrapper[4689]: W1210 12:15:32.355610 4689 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Dec 10 12:15:32 crc kubenswrapper[4689]: W1210 12:15:32.355615 4689 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Dec 10 12:15:32 crc kubenswrapper[4689]: W1210 12:15:32.355620 4689 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Dec 10 12:15:32 crc kubenswrapper[4689]: W1210 12:15:32.355625 4689 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Dec 10 12:15:32 crc kubenswrapper[4689]: W1210 12:15:32.355631 4689 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Dec 10 12:15:32 crc kubenswrapper[4689]: W1210 12:15:32.355637 4689 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.355648 4689 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Dec 10 12:15:32 crc kubenswrapper[4689]: W1210 12:15:32.355811 4689 feature_gate.go:330] unrecognized feature gate: GatewayAPI Dec 10 12:15:32 crc kubenswrapper[4689]: W1210 12:15:32.355820 4689 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Dec 10 12:15:32 crc kubenswrapper[4689]: W1210 12:15:32.355826 4689 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Dec 10 12:15:32 crc kubenswrapper[4689]: W1210 12:15:32.355833 4689 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Dec 10 12:15:32 crc kubenswrapper[4689]: W1210 12:15:32.355840 4689 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Dec 10 12:15:32 crc kubenswrapper[4689]: W1210 12:15:32.355846 4689 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Dec 10 12:15:32 crc kubenswrapper[4689]: W1210 12:15:32.355851 4689 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Dec 10 12:15:32 crc kubenswrapper[4689]: W1210 12:15:32.355857 4689 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Dec 10 12:15:32 crc kubenswrapper[4689]: W1210 12:15:32.355863 4689 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Dec 10 12:15:32 crc kubenswrapper[4689]: W1210 12:15:32.355868 4689 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Dec 10 12:15:32 crc kubenswrapper[4689]: W1210 12:15:32.355874 4689 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Dec 10 12:15:32 crc kubenswrapper[4689]: W1210 12:15:32.355881 4689 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Dec 10 12:15:32 crc kubenswrapper[4689]: W1210 12:15:32.355886 4689 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Dec 10 12:15:32 crc kubenswrapper[4689]: W1210 12:15:32.355891 4689 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Dec 10 12:15:32 crc kubenswrapper[4689]: W1210 12:15:32.355895 4689 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Dec 10 12:15:32 crc kubenswrapper[4689]: W1210 12:15:32.355900 4689 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Dec 10 12:15:32 crc kubenswrapper[4689]: W1210 12:15:32.355905 4689 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Dec 10 12:15:32 crc kubenswrapper[4689]: W1210 12:15:32.355909 4689 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Dec 10 12:15:32 crc kubenswrapper[4689]: W1210 12:15:32.355915 4689 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Dec 10 12:15:32 crc kubenswrapper[4689]: W1210 12:15:32.355922 4689 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Dec 10 12:15:32 crc kubenswrapper[4689]: W1210 12:15:32.355927 4689 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Dec 10 12:15:32 crc kubenswrapper[4689]: W1210 12:15:32.355932 4689 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Dec 10 12:15:32 crc kubenswrapper[4689]: W1210 12:15:32.355937 4689 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Dec 10 12:15:32 crc kubenswrapper[4689]: W1210 12:15:32.355943 4689 feature_gate.go:330] unrecognized feature gate: PlatformOperators Dec 10 12:15:32 crc kubenswrapper[4689]: W1210 12:15:32.355947 4689 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Dec 10 12:15:32 crc kubenswrapper[4689]: W1210 12:15:32.355952 4689 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Dec 10 12:15:32 crc kubenswrapper[4689]: W1210 12:15:32.355957 4689 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Dec 10 12:15:32 crc kubenswrapper[4689]: W1210 12:15:32.355962 4689 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Dec 10 12:15:32 crc kubenswrapper[4689]: W1210 12:15:32.355985 4689 feature_gate.go:330] unrecognized feature gate: 
VSphereMultiVCenters Dec 10 12:15:32 crc kubenswrapper[4689]: W1210 12:15:32.355990 4689 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Dec 10 12:15:32 crc kubenswrapper[4689]: W1210 12:15:32.355995 4689 feature_gate.go:330] unrecognized feature gate: NewOLM Dec 10 12:15:32 crc kubenswrapper[4689]: W1210 12:15:32.356000 4689 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Dec 10 12:15:32 crc kubenswrapper[4689]: W1210 12:15:32.356005 4689 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Dec 10 12:15:32 crc kubenswrapper[4689]: W1210 12:15:32.356009 4689 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Dec 10 12:15:32 crc kubenswrapper[4689]: W1210 12:15:32.356014 4689 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Dec 10 12:15:32 crc kubenswrapper[4689]: W1210 12:15:32.356019 4689 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Dec 10 12:15:32 crc kubenswrapper[4689]: W1210 12:15:32.356024 4689 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Dec 10 12:15:32 crc kubenswrapper[4689]: W1210 12:15:32.356029 4689 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Dec 10 12:15:32 crc kubenswrapper[4689]: W1210 12:15:32.356034 4689 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Dec 10 12:15:32 crc kubenswrapper[4689]: W1210 12:15:32.356039 4689 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Dec 10 12:15:32 crc kubenswrapper[4689]: W1210 12:15:32.356044 4689 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Dec 10 12:15:32 crc kubenswrapper[4689]: W1210 12:15:32.356049 4689 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Dec 10 12:15:32 crc kubenswrapper[4689]: W1210 12:15:32.356054 4689 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Dec 10 12:15:32 crc kubenswrapper[4689]: W1210 12:15:32.356060 4689 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Dec 10 12:15:32 crc kubenswrapper[4689]: W1210 12:15:32.356065 4689 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Dec 10 12:15:32 crc kubenswrapper[4689]: W1210 12:15:32.356070 4689 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Dec 10 12:15:32 crc kubenswrapper[4689]: W1210 12:15:32.356077 4689 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Dec 10 12:15:32 crc kubenswrapper[4689]: W1210 12:15:32.356083 4689 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Dec 10 12:15:32 crc kubenswrapper[4689]: W1210 12:15:32.356089 4689 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Dec 10 12:15:32 crc kubenswrapper[4689]: W1210 12:15:32.356094 4689 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Dec 10 12:15:32 crc kubenswrapper[4689]: W1210 12:15:32.356101 4689 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Dec 10 12:15:32 crc kubenswrapper[4689]: W1210 12:15:32.356106 4689 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Dec 10 12:15:32 crc kubenswrapper[4689]: W1210 12:15:32.356112 4689 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Dec 10 12:15:32 crc kubenswrapper[4689]: W1210 12:15:32.356117 4689 feature_gate.go:330] unrecognized feature gate: Example Dec 10 12:15:32 crc kubenswrapper[4689]: W1210 12:15:32.356122 4689 feature_gate.go:330] unrecognized feature gate: OVNObservability Dec 10 12:15:32 crc kubenswrapper[4689]: W1210 12:15:32.356126 4689 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Dec 10 12:15:32 crc kubenswrapper[4689]: W1210 12:15:32.356131 4689 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Dec 10 12:15:32 crc kubenswrapper[4689]: W1210 12:15:32.356137 4689 feature_gate.go:330] unrecognized feature gate: InsightsConfig Dec 10 12:15:32 crc kubenswrapper[4689]: W1210 12:15:32.356142 4689 feature_gate.go:330] unrecognized feature gate: PinnedImages Dec 10 12:15:32 crc kubenswrapper[4689]: W1210 12:15:32.356148 4689 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Dec 10 12:15:32 crc kubenswrapper[4689]: W1210 12:15:32.356155 4689 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Dec 10 12:15:32 crc kubenswrapper[4689]: W1210 12:15:32.356161 4689 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Dec 10 12:15:32 crc kubenswrapper[4689]: W1210 12:15:32.356167 4689 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Dec 10 12:15:32 crc kubenswrapper[4689]: W1210 12:15:32.356172 4689 feature_gate.go:330] unrecognized feature gate: SignatureStores Dec 10 12:15:32 crc kubenswrapper[4689]: W1210 12:15:32.356178 4689 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Dec 10 12:15:32 crc kubenswrapper[4689]: W1210 12:15:32.356183 4689 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Dec 10 12:15:32 crc kubenswrapper[4689]: W1210 12:15:32.356190 4689 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
Dec 10 12:15:32 crc kubenswrapper[4689]: W1210 12:15:32.356196 4689 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Dec 10 12:15:32 crc kubenswrapper[4689]: W1210 12:15:32.356201 4689 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Dec 10 12:15:32 crc kubenswrapper[4689]: W1210 12:15:32.356206 4689 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Dec 10 12:15:32 crc kubenswrapper[4689]: W1210 12:15:32.356211 4689 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.356220 4689 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.356474 4689 server.go:940] "Client rotation is on, will bootstrap in background"
Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.360042 4689 bootstrap.go:85] "Current kubeconfig file contents are still valid, no bootstrap necessary"
Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.360159 4689 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.360833 4689 server.go:997] "Starting client certificate rotation"
Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.360869 4689 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled
Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.361136 4689 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-02-24 05:52:08 +0000 UTC, rotation deadline is 2025-11-22 15:55:51.543058796 +0000 UTC
Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.361233 4689 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates
Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.368673 4689 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Dec 10 12:15:32 crc kubenswrapper[4689]: E1210 12:15:32.370748 4689 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.163:6443: connect: connection refused" logger="UnhandledError"
Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.371021 4689 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.379765 4689 log.go:25] "Validated CRI v1 runtime API"
Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.394731 4689 log.go:25] "Validated CRI v1 image API"
Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.399100 4689 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.401775 4689 fs.go:133] Filesystem UUIDs: map[0b076daa-c26a-46d2-b3a6-72a8dbc6e257:/dev/vda4 2025-12-10-12-11-38-00:/dev/sr0 7B77-95E7:/dev/vda2 de0497b0-db1b-465a-b278-03db02455c71:/dev/vda3]
Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.401820 4689 fs.go:134] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:42 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} /var/lib/etcd:{mountpoint:/var/lib/etcd major:0 minor:41 fsType:tmpfs blockSize:0}]
Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.424287 4689 manager.go:217] Machine: {Timestamp:2025-12-10 12:15:32.421604388 +0000 UTC m=+0.209685576 CPUVendorID:AuthenticAMD NumCores:12 NumPhysicalCores:1 NumSockets:12 CpuFrequency:2799998 MemoryCapacity:33654128640 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:21801e6708c44f15b81395eb736a7cec SystemUUID:41a6821f-a04f-4a5d-a8e5-790b0745d6fd BootID:488f3058-98a0-4e39-b016-529d0c992401 Filesystems:[{Device:/run/user/1000 DeviceMajor:0 DeviceMinor:42 Capacity:3365412864 Type:vfs Inodes:821634 HasInodes:true} {Device:/var/lib/etcd DeviceMajor:0 DeviceMinor:41 Capacity:1073741824 Type:vfs Inodes:4108170 HasInodes:true} {Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16827064320 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6730825728 Type:vfs Inodes:819200 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:16827064320 Type:vfs Inodes:1048576 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:3e:29:0d:9a Speed:0 Mtu:1500} {Name:br-int MacAddress:d6:39:55:2e:22:71 Speed:0 Mtu:1400} {Name:ens3 MacAddress:fa:16:3e:29:0d:9a Speed:-1 Mtu:1500} {Name:ens7 MacAddress:fa:16:3e:dd:7f:1e Speed:-1 Mtu:1500} {Name:ens7.20 MacAddress:52:54:00:19:8a:7a Speed:-1 Mtu:1496} {Name:ens7.21 MacAddress:52:54:00:18:cc:6a Speed:-1 Mtu:1496} {Name:ens7.22 MacAddress:52:54:00:88:2b:35 Speed:-1 Mtu:1496} {Name:eth10 MacAddress:96:3e:06:d1:aa:8b Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:d9:00:02 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:7a:08:63:49:c2:01 Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:33654128640 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None}
Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.424682 4689 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available.
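The certificate_manager entries above explain why the kubelet logs "Rotating certificates" immediately at boot: the manager picks a jittered rotation deadline inside the certificate's validity window, and here that deadline (2025-11-22 15:55:51 UTC) already lies in the past relative to the log timestamp (Dec 10), so rotation is attempted at once and then fails with the connection-refused error against api-int.crc.testing:6443. A minimal sketch of that deadline check, using the timestamps from the log (the decision rule shown is the general idea, not the kubelet's exact code):

    from datetime import datetime, timezone

    expiry   = datetime(2026, 2, 24, 5, 52, 8, tzinfo=timezone.utc)    # from the log
    deadline = datetime(2025, 11, 22, 15, 55, 51, tzinfo=timezone.utc) # jittered rotation deadline
    now      = datetime(2025, 12, 10, 12, 15, 32, tzinfo=timezone.utc) # boot time

    # Rotate as soon as "now" passes the jittered deadline.
    if now >= deadline:
        print("rotate now (deadline already passed)")  # matches "Rotating certificates"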
Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.424917 4689 manager.go:233] Version: {KernelVersion:5.14.0-427.50.2.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202502100215-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:}
Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.425805 4689 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.426100 4689 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.426166 4689 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"crc","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.426520 4689 topology_manager.go:138] "Creating topology manager with none policy"
Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.426539 4689 container_manager_linux.go:303] "Creating device plugin manager"
Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.426815 4689 manager.go:142] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock"
Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.426873 4689 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock"
Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.427399 4689 state_mem.go:36] "Initialized new in-memory state store"
Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.427535 4689 server.go:1245] "Using root directory" path="/var/lib/kubelet"
Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.428646 4689 kubelet.go:418] "Attempting to sync node with API server"
Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.428686 4689 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests"
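The nodeConfig above reserves 200m CPU / 350Mi memory / 350Mi ephemeral storage for system daemons (SystemReserved; KubeReserved is null) and sets a 100Mi memory.available hard-eviction threshold. Under the standard Kubernetes node-allocatable formula (allocatable = capacity - kube-reserved - system-reserved - hard eviction threshold), the memory numbers from this log work out as below; the formula is upstream behavior, the arithmetic is ours:

    # Memory allocatable for pods on this node, from values in the log above.
    capacity = 33654128640            # MemoryCapacity from the Machine: entry, in bytes
    mi = 1024 * 1024
    system_reserved = 350 * mi        # SystemReserved memory: 350Mi
    eviction_hard = 100 * mi          # memory.available hard-eviction threshold: 100Mi

    allocatable = capacity - system_reserved - eviction_hard  # KubeReserved is null
    print(f"{allocatable / mi:.0f} Mi allocatable of {capacity / mi:.0f} Mi capacity")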
Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.428732 4689 file.go:69] "Watching path" path="/etc/kubernetes/manifests"
Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.428756 4689 kubelet.go:324] "Adding apiserver pod source"
Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.428774 4689 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Dec 10 12:15:32 crc kubenswrapper[4689]: W1210 12:15:32.430887 4689 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.163:6443: connect: connection refused
Dec 10 12:15:32 crc kubenswrapper[4689]: E1210 12:15:32.431199 4689 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.163:6443: connect: connection refused" logger="UnhandledError"
Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.431387 4689 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.5-4.rhaos4.18.gitdad78d5.el9" apiVersion="v1"
Dec 10 12:15:32 crc kubenswrapper[4689]: W1210 12:15:32.430915 4689 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.163:6443: connect: connection refused
Dec 10 12:15:32 crc kubenswrapper[4689]: E1210 12:15:32.431580 4689 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.163:6443: connect: connection refused" logger="UnhandledError"
Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.431948 4689 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem".
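The failed list URLs above carry percent-encoded field selectors (%3D is "=", %21 is "!"); decoded, the kubelet is listing only its own Node object and all Services that have a cluster IP. A quick decoding check in Python:

    from urllib.parse import unquote

    print(unquote("metadata.name%3Dcrc"))        # -> metadata.name=crc
    print(unquote("spec.clusterIP%21%3DNone"))   # -> spec.clusterIP!=None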
Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.433057 4689 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.433716 4689 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume"
Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.433749 4689 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir"
Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.433761 4689 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo"
Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.433772 4689 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path"
Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.433787 4689 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs"
Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.433797 4689 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret"
Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.433807 4689 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi"
Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.433824 4689 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api"
Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.433840 4689 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc"
Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.433851 4689 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap"
Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.433867 4689 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected"
Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.433879 4689 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume"
Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.434076 4689 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/csi"
Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.434726 4689 server.go:1280] "Started kubelet"
Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.435115 4689 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.435230 4689 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.163:6443: connect: connection refused
Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.435128 4689 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.436074 4689 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Dec 10 12:15:32 crc systemd[1]: Started Kubernetes Kubelet.
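The podresources endpoint above is rate limited at qps=100 with a burst of 10 tokens. A minimal token-bucket sketch of those semantics in Python; this is illustrative only, not the kubelet's implementation (which uses Go rate limiters):

    import time

    class TokenBucket:
        """Allow `qps` requests/second on average, with bursts up to `burst`."""
        def __init__(self, qps: float, burst: int):
            self.qps, self.burst = qps, burst
            self.tokens = float(burst)       # start full: a burst is allowed immediately
            self.last = time.monotonic()

        def allow(self) -> bool:
            now = time.monotonic()
            # refill proportionally to elapsed time, capped at the burst size
            self.tokens = min(self.burst, self.tokens + (now - self.last) * self.qps)
            self.last = now
            if self.tokens >= 1:
                self.tokens -= 1
                return True
            return False

    bucket = TokenBucket(qps=100, burst=10)  # values from the ratelimit.go:55 entry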
Dec 10 12:15:32 crc kubenswrapper[4689]: E1210 12:15:32.436854 4689 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.163:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.187fd9b0d347aff9 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-10 12:15:32.434685945 +0000 UTC m=+0.222767083,LastTimestamp:2025-12-10 12:15:32.434685945 +0000 UTC m=+0.222767083,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.437765 4689 server.go:460] "Adding debug handlers to kubelet server"
Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.437908 4689 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled
Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.437950 4689 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.438246 4689 volume_manager.go:287] "The desired_state_of_world populator starts"
Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.438278 4689 volume_manager.go:289] "Starting Kubelet Volume Manager"
Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.438290 4689 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-07 05:12:28.722413537 +0000 UTC
Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.438422 4689 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Dec 10 12:15:32 crc kubenswrapper[4689]: E1210 12:15:32.438425 4689 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 10 12:15:32 crc kubenswrapper[4689]: W1210 12:15:32.440830 4689 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.163:6443: connect: connection refused
Dec 10 12:15:32 crc kubenswrapper[4689]: E1210 12:15:32.440926 4689 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.163:6443: connect: connection refused" logger="UnhandledError"
Dec 10 12:15:32 crc kubenswrapper[4689]: E1210 12:15:32.440944 4689 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.163:6443: connect: connection refused" interval="200ms"
Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.443348 4689 factory.go:55] Registering systemd factory
Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.443378 4689 factory.go:221] Registration of the systemd container factory successfully
Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.445105 4689 factory.go:153] Registering CRI-O factory
Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.445135 4689 factory.go:221] Registration of the crio container factory successfully
Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.447210 4689 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory
Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.447259 4689 factory.go:103] Registering Raw factory
Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.447287 4689 manager.go:1196] Started watching for new ooms in manager
Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.448580 4689 manager.go:319] Starting recovery of all containers
Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.451257 4689 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" seLinuxMountContext=""
Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.451311 4689 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" seLinuxMountContext=""
Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.451328 4689 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" seLinuxMountContext=""
Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.451346 4689 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" seLinuxMountContext=""
Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.451364 4689 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" seLinuxMountContext=""
Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.451379 4689 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" seLinuxMountContext=""
Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.451398 4689 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" seLinuxMountContext=""
Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.451412 4689 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" seLinuxMountContext=""
Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.451431 4689 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" volumeName="kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" seLinuxMountContext=""
Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.451445 4689 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" seLinuxMountContext=""
Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.451460 4689 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script" seLinuxMountContext=""
Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.451477 4689 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" seLinuxMountContext=""
Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.451492 4689 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" seLinuxMountContext=""
Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.451510 4689 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" seLinuxMountContext=""
Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.451528 4689 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" seLinuxMountContext=""
Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.451542 4689 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" seLinuxMountContext=""
Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.451557 4689 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" seLinuxMountContext=""
Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.451573 4689 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" seLinuxMountContext=""
Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.451588 4689 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" seLinuxMountContext=""
Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.451603 4689 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" seLinuxMountContext=""
Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.451616 4689 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" seLinuxMountContext=""
Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.451638 4689 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" seLinuxMountContext=""
Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.451654 4689 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" seLinuxMountContext=""
Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.451668 4689 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" seLinuxMountContext=""
Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.451682 4689 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" seLinuxMountContext=""
Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.451719 4689 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" seLinuxMountContext=""
Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.451762 4689 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" seLinuxMountContext=""
Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.451827 4689 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" seLinuxMountContext=""
Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.451848 4689 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" seLinuxMountContext=""
Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.451894 4689 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" seLinuxMountContext=""
Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.451911 4689 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert" seLinuxMountContext=""
Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.451927 4689 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides" seLinuxMountContext=""
Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.451941 4689 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" seLinuxMountContext=""
Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.451956 4689 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" seLinuxMountContext=""
Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.451991 4689 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" seLinuxMountContext=""
Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.452008 4689 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" seLinuxMountContext=""
Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.452025 4689 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" seLinuxMountContext=""
Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.452049 4689 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" seLinuxMountContext=""
Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.452063 4689 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" seLinuxMountContext=""
Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.452077 4689 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" seLinuxMountContext=""
Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.452091 4689 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" seLinuxMountContext=""
Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.452108 4689 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" seLinuxMountContext=""
Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.452128 4689 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" seLinuxMountContext=""
Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.452143 4689 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" seLinuxMountContext=""
Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.452160 4689 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" seLinuxMountContext=""
Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.452175 4689 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" seLinuxMountContext=""
Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.452189 4689 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="44663579-783b-4372-86d6-acf235a62d72" volumeName="kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" seLinuxMountContext=""
Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.452203 4689 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" seLinuxMountContext=""
Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.452216 4689 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" seLinuxMountContext=""
Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.452231 4689 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" volumeName="kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" seLinuxMountContext=""
Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.452245 4689 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" seLinuxMountContext=""
Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.452260 4689 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert" seLinuxMountContext=""
Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.452280 4689 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" seLinuxMountContext=""
Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.452296 4689 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" seLinuxMountContext=""
Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.452312 4689 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" seLinuxMountContext=""
Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.452329 4689 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d751cbb-f2e2-430d-9754-c882a5e924a5" volumeName="kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl" seLinuxMountContext=""
Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.452345 4689 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" seLinuxMountContext=""
Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.452360 4689 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" seLinuxMountContext=""
Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.452377 4689 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" seLinuxMountContext=""
Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.452391 4689 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" seLinuxMountContext=""
Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.452415 4689 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" seLinuxMountContext=""
Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.452429 4689 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" seLinuxMountContext=""
Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.452445 4689 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" seLinuxMountContext=""
Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.452462 4689 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" seLinuxMountContext=""
Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.452475 4689 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" seLinuxMountContext=""
Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.452489 4689 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" seLinuxMountContext=""
Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.452504 4689 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" seLinuxMountContext=""
Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.452518 4689 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" seLinuxMountContext=""
Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.452533 4689 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" seLinuxMountContext=""
Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.452550 4689 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" seLinuxMountContext=""
Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.452565 4689 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" seLinuxMountContext=""
Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.452579 4689 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" seLinuxMountContext=""
Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.452593 4689 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" seLinuxMountContext=""
Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.452610 4689 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" seLinuxMountContext=""
Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.452625 4689 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" seLinuxMountContext=""
Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.452641 4689 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" seLinuxMountContext=""
Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.452655 4689 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" seLinuxMountContext=""
Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.452670 4689 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" seLinuxMountContext=""
Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.453409 4689 reconstruct.go:144] "Volume is marked device as uncertain and added into the actual state" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount"
Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.453447 4689 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" seLinuxMountContext=""
Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.453464 4689 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm" seLinuxMountContext=""
Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.453479 4689 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" seLinuxMountContext=""
Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.453494 4689 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" seLinuxMountContext=""
Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.453509 4689 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" seLinuxMountContext=""
Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.453523 4689 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" seLinuxMountContext=""
Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.453540 4689 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49ef4625-1d3a-4a9f-b595-c2433d32326d" volumeName="kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" seLinuxMountContext=""
Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.453553 4689 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" seLinuxMountContext=""
Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.453566 4689 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" seLinuxMountContext=""
Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.453580 4689 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" seLinuxMountContext=""
Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.453597 4689 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" seLinuxMountContext=""
Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.453612 4689 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" seLinuxMountContext=""
Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.453626 4689 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" seLinuxMountContext=""
Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.453645 4689 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" seLinuxMountContext=""
Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.453659 4689 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" seLinuxMountContext=""
Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.453671 4689 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" seLinuxMountContext=""
Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.453685 4689 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" seLinuxMountContext=""
Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.453698 4689 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" seLinuxMountContext=""
Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.453710 4689 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" seLinuxMountContext=""
Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.453723 4689 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" seLinuxMountContext=""
Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.453738 4689 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" seLinuxMountContext=""
Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.453754 4689 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" seLinuxMountContext=""
Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.453766 4689 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" seLinuxMountContext=""
Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.453781 4689 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" seLinuxMountContext=""
Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.453795 4689 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" seLinuxMountContext=""
Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.453809 4689 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" seLinuxMountContext=""
Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.453834 4689 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" seLinuxMountContext=""
Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.453850 4689 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" seLinuxMountContext=""
Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.453865 4689 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3b6479f0-333b-4a96-9adf-2099afdc2447" volumeName="kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr" seLinuxMountContext=""
Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.453881 4689 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" seLinuxMountContext=""
Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.453898 4689 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" seLinuxMountContext=""
Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.453912 4689 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" seLinuxMountContext=""
Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.453926 4689 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" seLinuxMountContext=""
Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.453940 4689 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" seLinuxMountContext=""
Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.453955 4689 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" seLinuxMountContext=""
Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.453993 4689 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" seLinuxMountContext=""
Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.454009 4689 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" seLinuxMountContext=""
Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.454024 4689 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" seLinuxMountContext=""
Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.454037 4689 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" seLinuxMountContext=""
Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.454053 4689 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" seLinuxMountContext=""
Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.454067 4689 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" seLinuxMountContext=""
Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.454084 4689 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" seLinuxMountContext=""
Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.454100 4689 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" seLinuxMountContext=""
Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.454114 4689 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" seLinuxMountContext=""
Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.454127 4689 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" seLinuxMountContext=""
Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.454144 4689 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" seLinuxMountContext=""
Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.454159 4689 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" seLinuxMountContext=""
Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.454173 4689 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" seLinuxMountContext=""
Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.454187 4689 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" seLinuxMountContext=""
Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.454201 4689 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" seLinuxMountContext=""
Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.454215 4689 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" seLinuxMountContext=""
Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.454229 4689 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" seLinuxMountContext=""
Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.454245 4689 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5" seLinuxMountContext=""
Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.454258 4689 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" seLinuxMountContext=""
Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.454274 4689 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" seLinuxMountContext=""
Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.454288 4689 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" seLinuxMountContext=""
Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.454303 4689 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" seLinuxMountContext=""
Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.454318 4689 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" seLinuxMountContext=""
Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.454332 4689 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" seLinuxMountContext=""
Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.454348 4689 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" seLinuxMountContext=""
Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.454362 4689 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" seLinuxMountContext=""
Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.454379 4689 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13"
volumeName="kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" seLinuxMountContext="" Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.454402 4689 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" seLinuxMountContext="" Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.454417 4689 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" seLinuxMountContext="" Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.454432 4689 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" seLinuxMountContext="" Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.454446 4689 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" seLinuxMountContext="" Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.454461 4689 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" seLinuxMountContext="" Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.454476 4689 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" seLinuxMountContext="" Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.454493 4689 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" seLinuxMountContext="" Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.454507 4689 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" seLinuxMountContext="" Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.454522 4689 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" seLinuxMountContext="" Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.454535 4689 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" seLinuxMountContext="" Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.454549 4689 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" 
volumeName="kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" seLinuxMountContext="" Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.454563 4689 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" seLinuxMountContext="" Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.454578 4689 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" seLinuxMountContext="" Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.454591 4689 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" seLinuxMountContext="" Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.454605 4689 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" seLinuxMountContext="" Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.454652 4689 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" seLinuxMountContext="" Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.454668 4689 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" seLinuxMountContext="" Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.454685 4689 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" seLinuxMountContext="" Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.454702 4689 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" seLinuxMountContext="" Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.454716 4689 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" seLinuxMountContext="" Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.454730 4689 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" seLinuxMountContext="" Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.454744 4689 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" 
volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" seLinuxMountContext="" Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.454757 4689 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" seLinuxMountContext="" Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.454772 4689 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" seLinuxMountContext="" Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.454787 4689 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" seLinuxMountContext="" Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.454800 4689 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" seLinuxMountContext="" Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.454818 4689 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" seLinuxMountContext="" Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.454832 4689 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf" seLinuxMountContext="" Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.454848 4689 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" seLinuxMountContext="" Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.454866 4689 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" seLinuxMountContext="" Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.454881 4689 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" seLinuxMountContext="" Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.454897 4689 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" seLinuxMountContext="" Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.454913 4689 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" 
volumeName="kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls" seLinuxMountContext="" Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.454929 4689 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" seLinuxMountContext="" Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.454944 4689 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" seLinuxMountContext="" Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.454957 4689 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" seLinuxMountContext="" Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.454990 4689 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" seLinuxMountContext="" Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.455007 4689 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" seLinuxMountContext="" Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.455021 4689 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" seLinuxMountContext="" Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.455035 4689 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" seLinuxMountContext="" Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.455050 4689 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" seLinuxMountContext="" Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.455064 4689 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" seLinuxMountContext="" Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.455077 4689 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" seLinuxMountContext="" Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.455092 4689 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" 
volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" seLinuxMountContext="" Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.455111 4689 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" seLinuxMountContext="" Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.455129 4689 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" seLinuxMountContext="" Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.455145 4689 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" seLinuxMountContext="" Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.455162 4689 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" seLinuxMountContext="" Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.455176 4689 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" seLinuxMountContext="" Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.455194 4689 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" seLinuxMountContext="" Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.455209 4689 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext="" Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.455224 4689 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" seLinuxMountContext="" Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.455239 4689 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" seLinuxMountContext="" Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.455254 4689 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" seLinuxMountContext="" Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.455270 4689 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" 
volumeName="kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" seLinuxMountContext="" Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.455287 4689 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" seLinuxMountContext="" Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.455301 4689 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb" seLinuxMountContext="" Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.455317 4689 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" seLinuxMountContext="" Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.455331 4689 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" seLinuxMountContext="" Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.455345 4689 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" seLinuxMountContext="" Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.455360 4689 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" seLinuxMountContext="" Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.455373 4689 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" seLinuxMountContext="" Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.455389 4689 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" seLinuxMountContext="" Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.455404 4689 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" seLinuxMountContext="" Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.455418 4689 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf" seLinuxMountContext="" Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.455431 4689 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" 
volumeName="kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" seLinuxMountContext="" Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.455445 4689 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" seLinuxMountContext="" Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.455457 4689 reconstruct.go:97] "Volume reconstruction finished" Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.455466 4689 reconciler.go:26] "Reconciler: start to sync state" Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.469767 4689 manager.go:324] Recovery completed Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.489101 4689 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.491748 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.491800 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.491811 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.492655 4689 cpu_manager.go:225] "Starting CPU manager" policy="none" Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.492669 4689 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.492689 4689 state_mem.go:36] "Initialized new in-memory state store" Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.495377 4689 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.496836 4689 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.496875 4689 status_manager.go:217] "Starting to sync pod status with apiserver" Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.496898 4689 kubelet.go:2335] "Starting kubelet main sync loop" Dec 10 12:15:32 crc kubenswrapper[4689]: E1210 12:15:32.496947 4689 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 10 12:15:32 crc kubenswrapper[4689]: W1210 12:15:32.498461 4689 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.163:6443: connect: connection refused Dec 10 12:15:32 crc kubenswrapper[4689]: E1210 12:15:32.498541 4689 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.163:6443: connect: connection refused" logger="UnhandledError" Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.502236 4689 policy_none.go:49] "None policy: Start" Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.503385 4689 memory_manager.go:170] "Starting memorymanager" policy="None" Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.503418 4689 state_mem.go:35] "Initializing new in-memory state store" Dec 10 12:15:32 crc kubenswrapper[4689]: E1210 12:15:32.539299 4689 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.545901 4689 manager.go:334] "Starting Device Plugin manager" Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.545949 4689 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.545962 4689 server.go:79] "Starting device plugin registration server" Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.546379 4689 eviction_manager.go:189] "Eviction manager: starting control loop" Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.546402 4689 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.549418 4689 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.549528 4689 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts" Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.549540 4689 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 10 12:15:32 crc kubenswrapper[4689]: E1210 12:15:32.555002 4689 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.597459 4689 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc","openshift-etcd/etcd-crc","openshift-kube-apiserver/kube-apiserver-crc","openshift-kube-controller-manager/kube-controller-manager-crc","openshift-kube-scheduler/openshift-kube-scheduler-crc"] Dec 10 12:15:32 crc kubenswrapper[4689]: 
I1210 12:15:32.597570 4689 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.599350 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.599400 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.599418 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.599578 4689 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.599875 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.599952 4689 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.600655 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.600688 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.600699 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.600818 4689 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.601019 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.601074 4689 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.601192 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.601232 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.601249 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.602075 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.602123 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.602142 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.602332 4689 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.602509 4689 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.602551 4689 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.603507 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.603531 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.603543 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.603555 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.603555 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.603593 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.603625 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.603639 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.603655 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.603764 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.603799 4689 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.603860 4689 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.605044 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.605117 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.605133 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.605535 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.605568 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.605579 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.605748 4689 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.605781 4689 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.606748 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.606779 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.606791 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:15:32 crc kubenswrapper[4689]: E1210 12:15:32.642475 4689 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.163:6443: connect: connection refused" interval="400ms" Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.648557 4689 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.650076 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.650107 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.650116 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.650136 4689 kubelet_node_status.go:76] "Attempting to register node" node="crc" Dec 10 12:15:32 crc kubenswrapper[4689]: E1210 12:15:32.650382 4689 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.163:6443: connect: connection refused" node="crc" Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.656820 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.656854 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.656890 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.656956 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.657035 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.657075 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.657107 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.657140 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.657176 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.657205 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.657230 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.657267 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.657309 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod 
\"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.657346 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.657374 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.758873 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.758911 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.758931 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.758947 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.759012 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.759046 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.759079 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.759110 4689 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.759198 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.759203 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.759163 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.759267 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.759285 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.759139 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.759310 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.759341 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.759398 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 
12:15:32.759432 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.759451 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.759497 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.759533 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.759541 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.759497 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.759570 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.759603 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.759610 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.759638 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: 
\"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.759665 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.759718 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.759902 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.850783 4689 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.852426 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.852554 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.852637 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.852736 4689 kubelet_node_status.go:76] "Attempting to register node" node="crc" Dec 10 12:15:32 crc kubenswrapper[4689]: E1210 12:15:32.853399 4689 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.163:6443: connect: connection refused" node="crc" Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.961786 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.974891 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Dec 10 12:15:32 crc kubenswrapper[4689]: I1210 12:15:32.994747 4689 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 10 12:15:33 crc kubenswrapper[4689]: W1210 12:15:32.999896 4689 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2139d3e2895fc6797b9c76a1b4c9886d.slice/crio-c96b70344ddf49c272bf94cc25c854d79f1ca3cefb2dfa49ff25ed7b7fffd14a WatchSource:0}: Error finding container c96b70344ddf49c272bf94cc25c854d79f1ca3cefb2dfa49ff25ed7b7fffd14a: Status 404 returned error can't find the container with id c96b70344ddf49c272bf94cc25c854d79f1ca3cefb2dfa49ff25ed7b7fffd14a Dec 10 12:15:33 crc kubenswrapper[4689]: W1210 12:15:33.002183 4689 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd1b160f5dda77d281dd8e69ec8d817f9.slice/crio-efe3e0cb8e6c18720987fc5174f47e48f8449f66779dc95165f7cd1abf7765d2 WatchSource:0}: Error finding container efe3e0cb8e6c18720987fc5174f47e48f8449f66779dc95165f7cd1abf7765d2: Status 404 returned error can't find the container with id efe3e0cb8e6c18720987fc5174f47e48f8449f66779dc95165f7cd1abf7765d2 Dec 10 12:15:33 crc kubenswrapper[4689]: I1210 12:15:33.014339 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 10 12:15:33 crc kubenswrapper[4689]: W1210 12:15:33.016355 4689 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4b27818a5e8e43d0dc095d08835c792.slice/crio-dc5c7f564031f0c5b3801730dc4020b662fa9ac027dfe19f4b580b9283fbb06a WatchSource:0}: Error finding container dc5c7f564031f0c5b3801730dc4020b662fa9ac027dfe19f4b580b9283fbb06a: Status 404 returned error can't find the container with id dc5c7f564031f0c5b3801730dc4020b662fa9ac027dfe19f4b580b9283fbb06a Dec 10 12:15:33 crc kubenswrapper[4689]: I1210 12:15:33.019722 4689 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 10 12:15:33 crc kubenswrapper[4689]: E1210 12:15:33.043624 4689 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.163:6443: connect: connection refused" interval="800ms" Dec 10 12:15:33 crc kubenswrapper[4689]: I1210 12:15:33.254110 4689 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 10 12:15:33 crc kubenswrapper[4689]: I1210 12:15:33.255379 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:15:33 crc kubenswrapper[4689]: I1210 12:15:33.255442 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:15:33 crc kubenswrapper[4689]: I1210 12:15:33.255458 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:15:33 crc kubenswrapper[4689]: I1210 12:15:33.255488 4689 kubelet_node_status.go:76] "Attempting to register node" node="crc" Dec 10 12:15:33 crc kubenswrapper[4689]: E1210 12:15:33.256000 4689 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.163:6443: connect: connection refused" node="crc" Dec 10 12:15:33 crc kubenswrapper[4689]: W1210 12:15:33.266611 4689 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.163:6443: connect: connection refused Dec 10 12:15:33 crc kubenswrapper[4689]: E1210 12:15:33.266683 4689 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.163:6443: connect: connection refused" logger="UnhandledError" Dec 10 12:15:33 crc kubenswrapper[4689]: W1210 12:15:33.346492 4689 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.163:6443: connect: connection refused Dec 10 12:15:33 crc kubenswrapper[4689]: E1210 12:15:33.346589 4689 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.163:6443: connect: connection refused" logger="UnhandledError" Dec 10 12:15:33 crc kubenswrapper[4689]: I1210 12:15:33.436864 4689 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.163:6443: connect: connection refused Dec 10 12:15:33 crc kubenswrapper[4689]: I1210 12:15:33.438904 4689 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-05 07:20:05.570746719 +0000 UTC Dec 10 12:15:33 crc kubenswrapper[4689]: I1210 
12:15:33.502150 4689 generic.go:334] "Generic (PLEG): container finished" podID="d1b160f5dda77d281dd8e69ec8d817f9" containerID="145b8baafed22a37ad3fbb0e4ab523fec92c697af35e9b7b35f0c18055f2adee" exitCode=0 Dec 10 12:15:33 crc kubenswrapper[4689]: I1210 12:15:33.502294 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerDied","Data":"145b8baafed22a37ad3fbb0e4ab523fec92c697af35e9b7b35f0c18055f2adee"} Dec 10 12:15:33 crc kubenswrapper[4689]: I1210 12:15:33.502428 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"efe3e0cb8e6c18720987fc5174f47e48f8449f66779dc95165f7cd1abf7765d2"} Dec 10 12:15:33 crc kubenswrapper[4689]: I1210 12:15:33.502523 4689 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 10 12:15:33 crc kubenswrapper[4689]: I1210 12:15:33.503376 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"604acbb51bb890eb420955051c89aa5a8f2005ce63f1fe39c09c35dc3c8b4470"} Dec 10 12:15:33 crc kubenswrapper[4689]: I1210 12:15:33.503427 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"9cac484979c05cf4a42859902668ebf540a4fbaa119e12b5acb450f775ce8d92"} Dec 10 12:15:33 crc kubenswrapper[4689]: I1210 12:15:33.503895 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:15:33 crc kubenswrapper[4689]: I1210 12:15:33.503930 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:15:33 crc kubenswrapper[4689]: I1210 12:15:33.503944 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:15:33 crc kubenswrapper[4689]: I1210 12:15:33.504946 4689 generic.go:334] "Generic (PLEG): container finished" podID="3dcd261975c3d6b9a6ad6367fd4facd3" containerID="b2e1ebfdbacde06d36ebe61a04c2977a980def50e80715de5f8a9a292e661915" exitCode=0 Dec 10 12:15:33 crc kubenswrapper[4689]: I1210 12:15:33.505038 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerDied","Data":"b2e1ebfdbacde06d36ebe61a04c2977a980def50e80715de5f8a9a292e661915"} Dec 10 12:15:33 crc kubenswrapper[4689]: I1210 12:15:33.506499 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"955b6821c6718443e4cb0bb06de1a4cfa7ae9b9b1acb5d56bcb954dcaa43f831"} Dec 10 12:15:33 crc kubenswrapper[4689]: I1210 12:15:33.506698 4689 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 10 12:15:33 crc kubenswrapper[4689]: I1210 12:15:33.507553 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:15:33 crc kubenswrapper[4689]: I1210 12:15:33.507587 4689 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:15:33 crc kubenswrapper[4689]: I1210 12:15:33.507607 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:15:33 crc kubenswrapper[4689]: I1210 12:15:33.511619 4689 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="a9e2628add435f4ba7d9b92d1f4e9cf619c37e8dc954d6fce13ef72581ae775e" exitCode=0 Dec 10 12:15:33 crc kubenswrapper[4689]: I1210 12:15:33.511656 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"a9e2628add435f4ba7d9b92d1f4e9cf619c37e8dc954d6fce13ef72581ae775e"} Dec 10 12:15:33 crc kubenswrapper[4689]: I1210 12:15:33.511708 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"dc5c7f564031f0c5b3801730dc4020b662fa9ac027dfe19f4b580b9283fbb06a"} Dec 10 12:15:33 crc kubenswrapper[4689]: I1210 12:15:33.511825 4689 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 10 12:15:33 crc kubenswrapper[4689]: I1210 12:15:33.512729 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:15:33 crc kubenswrapper[4689]: I1210 12:15:33.512768 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:15:33 crc kubenswrapper[4689]: I1210 12:15:33.512784 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:15:33 crc kubenswrapper[4689]: I1210 12:15:33.513433 4689 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="957169fffc75053add7ca840380cc293224c220677d033d0258b32ff804b2f34" exitCode=0 Dec 10 12:15:33 crc kubenswrapper[4689]: I1210 12:15:33.513468 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"957169fffc75053add7ca840380cc293224c220677d033d0258b32ff804b2f34"} Dec 10 12:15:33 crc kubenswrapper[4689]: I1210 12:15:33.513494 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"c96b70344ddf49c272bf94cc25c854d79f1ca3cefb2dfa49ff25ed7b7fffd14a"} Dec 10 12:15:33 crc kubenswrapper[4689]: I1210 12:15:33.513602 4689 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 10 12:15:33 crc kubenswrapper[4689]: I1210 12:15:33.514505 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:15:33 crc kubenswrapper[4689]: I1210 12:15:33.514550 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:15:33 crc kubenswrapper[4689]: I1210 12:15:33.514567 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:15:33 crc kubenswrapper[4689]: I1210 12:15:33.515756 4689 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 10 12:15:33 crc kubenswrapper[4689]: I1210 
12:15:33.516782 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:15:33 crc kubenswrapper[4689]: I1210 12:15:33.516816 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:15:33 crc kubenswrapper[4689]: I1210 12:15:33.516828 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:15:33 crc kubenswrapper[4689]: W1210 12:15:33.794963 4689 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.163:6443: connect: connection refused Dec 10 12:15:33 crc kubenswrapper[4689]: E1210 12:15:33.795104 4689 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.163:6443: connect: connection refused" logger="UnhandledError" Dec 10 12:15:33 crc kubenswrapper[4689]: E1210 12:15:33.844371 4689 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.163:6443: connect: connection refused" interval="1.6s" Dec 10 12:15:33 crc kubenswrapper[4689]: W1210 12:15:33.994143 4689 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.163:6443: connect: connection refused Dec 10 12:15:33 crc kubenswrapper[4689]: E1210 12:15:33.994240 4689 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.163:6443: connect: connection refused" logger="UnhandledError" Dec 10 12:15:34 crc kubenswrapper[4689]: I1210 12:15:34.056754 4689 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 10 12:15:34 crc kubenswrapper[4689]: I1210 12:15:34.057804 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:15:34 crc kubenswrapper[4689]: I1210 12:15:34.057858 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:15:34 crc kubenswrapper[4689]: I1210 12:15:34.057868 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:15:34 crc kubenswrapper[4689]: I1210 12:15:34.057893 4689 kubelet_node_status.go:76] "Attempting to register node" node="crc" Dec 10 12:15:34 crc kubenswrapper[4689]: E1210 12:15:34.058375 4689 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.163:6443: connect: connection refused" node="crc" Dec 10 12:15:34 crc kubenswrapper[4689]: I1210 12:15:34.439797 4689 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, 
rotation deadline is 2025-12-12 08:48:25.45905937 +0000 UTC Dec 10 12:15:34 crc kubenswrapper[4689]: I1210 12:15:34.440286 4689 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 44h32m51.018777718s for next certificate rotation Dec 10 12:15:34 crc kubenswrapper[4689]: I1210 12:15:34.505381 4689 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Dec 10 12:15:34 crc kubenswrapper[4689]: I1210 12:15:34.518218 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"66af8611123a29e82941af3a1facb45f06c358ffb277bd037af034ae97364d56"} Dec 10 12:15:34 crc kubenswrapper[4689]: I1210 12:15:34.518273 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"14c34a55854008d99289ec6ff7df4ab7b8c60658569094a88e6740ceb28714d8"} Dec 10 12:15:34 crc kubenswrapper[4689]: I1210 12:15:34.518290 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"be1e74c974ed4454098265113e97e5d5d27b2ddb6ff5d89cb891c1686290f8f9"} Dec 10 12:15:34 crc kubenswrapper[4689]: I1210 12:15:34.518394 4689 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 10 12:15:34 crc kubenswrapper[4689]: I1210 12:15:34.519187 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:15:34 crc kubenswrapper[4689]: I1210 12:15:34.519219 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:15:34 crc kubenswrapper[4689]: I1210 12:15:34.519233 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:15:34 crc kubenswrapper[4689]: I1210 12:15:34.520193 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"63c6b92fa349dafbaaa4e2700676f8c1c53627fc703dba25853c7457bf7eb439"} Dec 10 12:15:34 crc kubenswrapper[4689]: I1210 12:15:34.520220 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"e167a00c3a4e40154ca58094cba41a896104fd81044158589ed4d7a06dba9e7d"} Dec 10 12:15:34 crc kubenswrapper[4689]: I1210 12:15:34.520233 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"394f62bb6c4fa6d9b63f8404087ff96594e2e38240c6c7886ad71fbb7863681f"} Dec 10 12:15:34 crc kubenswrapper[4689]: I1210 12:15:34.520246 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"73fb5dc9c58a2a2027866853aecf404bcc2babdb9e5bbd3cf0efcb16f38ba351"} Dec 10 12:15:34 crc kubenswrapper[4689]: I1210 12:15:34.521861 4689 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" 
containerID="309db67492e62b2d8ecbe90aac44e04e499ff639848ce985026b007d29cb693d" exitCode=0 Dec 10 12:15:34 crc kubenswrapper[4689]: I1210 12:15:34.521946 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"309db67492e62b2d8ecbe90aac44e04e499ff639848ce985026b007d29cb693d"} Dec 10 12:15:34 crc kubenswrapper[4689]: I1210 12:15:34.522085 4689 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 10 12:15:34 crc kubenswrapper[4689]: I1210 12:15:34.522841 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:15:34 crc kubenswrapper[4689]: I1210 12:15:34.522864 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:15:34 crc kubenswrapper[4689]: I1210 12:15:34.522876 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:15:34 crc kubenswrapper[4689]: I1210 12:15:34.528841 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"9ccc2bdf4b011cefbaad9e5fd43836a5009c5c9cc28b9079ba096473056d27a6"} Dec 10 12:15:34 crc kubenswrapper[4689]: I1210 12:15:34.528960 4689 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 10 12:15:34 crc kubenswrapper[4689]: I1210 12:15:34.530109 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:15:34 crc kubenswrapper[4689]: I1210 12:15:34.530161 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:15:34 crc kubenswrapper[4689]: I1210 12:15:34.530176 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:15:34 crc kubenswrapper[4689]: I1210 12:15:34.531050 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"0a68967b286ccc9c486e05b03201574bfa811ee68b0cbcbb2b0b1a379f5c1bab"} Dec 10 12:15:34 crc kubenswrapper[4689]: I1210 12:15:34.531080 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"8d4addfb661bd4ba43817e6e5db29f30589839e5afe46a8f271e43c98920efdc"} Dec 10 12:15:34 crc kubenswrapper[4689]: I1210 12:15:34.531095 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"44524b486bd4c6b69eb1219605d0545eaa129605484dfbca87ca741ce71c2991"} Dec 10 12:15:34 crc kubenswrapper[4689]: I1210 12:15:34.531145 4689 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 10 12:15:34 crc kubenswrapper[4689]: I1210 12:15:34.531899 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:15:34 crc kubenswrapper[4689]: I1210 12:15:34.531998 4689 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasNoDiskPressure" Dec 10 12:15:34 crc kubenswrapper[4689]: I1210 12:15:34.532069 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:15:35 crc kubenswrapper[4689]: I1210 12:15:35.534118 4689 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 10 12:15:35 crc kubenswrapper[4689]: I1210 12:15:35.536201 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:15:35 crc kubenswrapper[4689]: I1210 12:15:35.536253 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:15:35 crc kubenswrapper[4689]: I1210 12:15:35.536272 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:15:35 crc kubenswrapper[4689]: I1210 12:15:35.658821 4689 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 10 12:15:35 crc kubenswrapper[4689]: I1210 12:15:35.660101 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:15:35 crc kubenswrapper[4689]: I1210 12:15:35.660129 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:15:35 crc kubenswrapper[4689]: I1210 12:15:35.660138 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:15:35 crc kubenswrapper[4689]: I1210 12:15:35.660245 4689 kubelet_node_status.go:76] "Attempting to register node" node="crc" Dec 10 12:15:36 crc kubenswrapper[4689]: I1210 12:15:36.539389 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"3bde224c6b8a596efc2c688c418ee0a3f2b4ce4a316b3662c969c7c7371cfadd"} Dec 10 12:15:36 crc kubenswrapper[4689]: I1210 12:15:36.539572 4689 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 10 12:15:36 crc kubenswrapper[4689]: I1210 12:15:36.541473 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:15:36 crc kubenswrapper[4689]: I1210 12:15:36.541497 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:15:36 crc kubenswrapper[4689]: I1210 12:15:36.541506 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:15:36 crc kubenswrapper[4689]: I1210 12:15:36.545547 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"cc1b41d65298f8986fa477bbe4462be91f61b46be4420fac49fe9b1cff91d4d3"} Dec 10 12:15:36 crc kubenswrapper[4689]: I1210 12:15:36.545753 4689 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 10 12:15:36 crc kubenswrapper[4689]: I1210 12:15:36.546413 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:15:36 crc kubenswrapper[4689]: I1210 12:15:36.546492 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:15:36 crc kubenswrapper[4689]: 
I1210 12:15:36.546561 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:15:37 crc kubenswrapper[4689]: I1210 12:15:37.454627 4689 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 10 12:15:37 crc kubenswrapper[4689]: I1210 12:15:37.454896 4689 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 10 12:15:37 crc kubenswrapper[4689]: I1210 12:15:37.456475 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:15:37 crc kubenswrapper[4689]: I1210 12:15:37.456527 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:15:37 crc kubenswrapper[4689]: I1210 12:15:37.456550 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:15:37 crc kubenswrapper[4689]: I1210 12:15:37.556255 4689 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="3bde224c6b8a596efc2c688c418ee0a3f2b4ce4a316b3662c969c7c7371cfadd" exitCode=0 Dec 10 12:15:37 crc kubenswrapper[4689]: I1210 12:15:37.556318 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"3bde224c6b8a596efc2c688c418ee0a3f2b4ce4a316b3662c969c7c7371cfadd"} Dec 10 12:15:37 crc kubenswrapper[4689]: I1210 12:15:37.556435 4689 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 10 12:15:37 crc kubenswrapper[4689]: I1210 12:15:37.556474 4689 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 10 12:15:37 crc kubenswrapper[4689]: I1210 12:15:37.556552 4689 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 10 12:15:37 crc kubenswrapper[4689]: I1210 12:15:37.557483 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:15:37 crc kubenswrapper[4689]: I1210 12:15:37.557541 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:15:37 crc kubenswrapper[4689]: I1210 12:15:37.557564 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:15:37 crc kubenswrapper[4689]: I1210 12:15:37.557824 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:15:37 crc kubenswrapper[4689]: I1210 12:15:37.557874 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:15:37 crc kubenswrapper[4689]: I1210 12:15:37.557893 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:15:38 crc kubenswrapper[4689]: I1210 12:15:38.565488 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"3e77c23061a2cf3e99839c7524beee3518d6af09dfdfbcbae7db239b9660f2d2"} Dec 10 12:15:38 crc kubenswrapper[4689]: I1210 12:15:38.565653 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" 
event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"64cf91f269811d4af21dab58983ffc2b8433df0d8563d4da712a3a45567f3420"} Dec 10 12:15:38 crc kubenswrapper[4689]: I1210 12:15:38.565688 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"486ff4760ac785ca89109445103d32e81d3779365402d64b47a78366bc11cef5"} Dec 10 12:15:39 crc kubenswrapper[4689]: I1210 12:15:39.315992 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 10 12:15:39 crc kubenswrapper[4689]: I1210 12:15:39.316287 4689 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 10 12:15:39 crc kubenswrapper[4689]: I1210 12:15:39.318652 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:15:39 crc kubenswrapper[4689]: I1210 12:15:39.318746 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:15:39 crc kubenswrapper[4689]: I1210 12:15:39.318765 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:15:39 crc kubenswrapper[4689]: I1210 12:15:39.434034 4689 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 10 12:15:39 crc kubenswrapper[4689]: I1210 12:15:39.434218 4689 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 10 12:15:39 crc kubenswrapper[4689]: I1210 12:15:39.434271 4689 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 10 12:15:39 crc kubenswrapper[4689]: I1210 12:15:39.435944 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:15:39 crc kubenswrapper[4689]: I1210 12:15:39.436056 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:15:39 crc kubenswrapper[4689]: I1210 12:15:39.436085 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:15:39 crc kubenswrapper[4689]: I1210 12:15:39.574193 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"8cd48e6b792d76ae22e5db391fb054697e799bf82b0a46bc6170141ea7e4ba35"} Dec 10 12:15:39 crc kubenswrapper[4689]: I1210 12:15:39.939170 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 10 12:15:39 crc kubenswrapper[4689]: I1210 12:15:39.939355 4689 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 10 12:15:39 crc kubenswrapper[4689]: I1210 12:15:39.939396 4689 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 10 12:15:39 crc kubenswrapper[4689]: I1210 12:15:39.940871 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:15:39 crc kubenswrapper[4689]: I1210 12:15:39.940923 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:15:39 crc kubenswrapper[4689]: I1210 12:15:39.940941 4689 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:15:40 crc kubenswrapper[4689]: I1210 12:15:40.455492 4689 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 10 12:15:40 crc kubenswrapper[4689]: I1210 12:15:40.455616 4689 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 10 12:15:40 crc kubenswrapper[4689]: I1210 12:15:40.583725 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"d1b8c4d9f6d097310917ef4b278ca84abeefacc43aa74b983434d997c2befb1f"} Dec 10 12:15:40 crc kubenswrapper[4689]: I1210 12:15:40.583876 4689 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 10 12:15:40 crc kubenswrapper[4689]: I1210 12:15:40.585361 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:15:40 crc kubenswrapper[4689]: I1210 12:15:40.585410 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:15:40 crc kubenswrapper[4689]: I1210 12:15:40.585422 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:15:40 crc kubenswrapper[4689]: I1210 12:15:40.783705 4689 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc" Dec 10 12:15:41 crc kubenswrapper[4689]: I1210 12:15:41.428695 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 10 12:15:41 crc kubenswrapper[4689]: I1210 12:15:41.428860 4689 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 10 12:15:41 crc kubenswrapper[4689]: I1210 12:15:41.429921 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:15:41 crc kubenswrapper[4689]: I1210 12:15:41.429948 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:15:41 crc kubenswrapper[4689]: I1210 12:15:41.429959 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:15:41 crc kubenswrapper[4689]: I1210 12:15:41.587265 4689 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 10 12:15:41 crc kubenswrapper[4689]: I1210 12:15:41.588284 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:15:41 crc kubenswrapper[4689]: I1210 12:15:41.588352 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:15:41 crc kubenswrapper[4689]: I1210 12:15:41.588375 4689 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:15:41 crc kubenswrapper[4689]: I1210 12:15:41.860305 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 10 12:15:41 crc kubenswrapper[4689]: I1210 12:15:41.860587 4689 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 10 12:15:42 crc kubenswrapper[4689]: I1210 12:15:42.252716 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 10 12:15:42 crc kubenswrapper[4689]: I1210 12:15:42.253008 4689 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 10 12:15:42 crc kubenswrapper[4689]: I1210 12:15:42.254502 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:15:42 crc kubenswrapper[4689]: I1210 12:15:42.254555 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:15:42 crc kubenswrapper[4689]: I1210 12:15:42.254565 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:15:42 crc kubenswrapper[4689]: I1210 12:15:42.255351 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:15:42 crc kubenswrapper[4689]: I1210 12:15:42.255383 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:15:42 crc kubenswrapper[4689]: I1210 12:15:42.255391 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:15:42 crc kubenswrapper[4689]: I1210 12:15:42.475136 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-crc" Dec 10 12:15:42 crc kubenswrapper[4689]: E1210 12:15:42.555160 4689 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Dec 10 12:15:42 crc kubenswrapper[4689]: I1210 12:15:42.590395 4689 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 10 12:15:42 crc kubenswrapper[4689]: I1210 12:15:42.591523 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:15:42 crc kubenswrapper[4689]: I1210 12:15:42.591578 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:15:42 crc kubenswrapper[4689]: I1210 12:15:42.591592 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:15:43 crc kubenswrapper[4689]: I1210 12:15:43.592715 4689 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 10 12:15:43 crc kubenswrapper[4689]: I1210 12:15:43.593532 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:15:43 crc kubenswrapper[4689]: I1210 12:15:43.593563 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:15:43 crc kubenswrapper[4689]: I1210 12:15:43.593573 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:15:44 crc 
kubenswrapper[4689]: I1210 12:15:44.153722 4689 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 10 12:15:44 crc kubenswrapper[4689]: I1210 12:15:44.153953 4689 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 10 12:15:44 crc kubenswrapper[4689]: I1210 12:15:44.156048 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:15:44 crc kubenswrapper[4689]: I1210 12:15:44.156120 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:15:44 crc kubenswrapper[4689]: I1210 12:15:44.156134 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:15:44 crc kubenswrapper[4689]: I1210 12:15:44.161165 4689 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 10 12:15:44 crc kubenswrapper[4689]: I1210 12:15:44.437382 4689 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": net/http: TLS handshake timeout Dec 10 12:15:44 crc kubenswrapper[4689]: E1210 12:15:44.507059 4689 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": net/http: TLS handshake timeout" logger="UnhandledError" Dec 10 12:15:44 crc kubenswrapper[4689]: I1210 12:15:44.595288 4689 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 10 12:15:44 crc kubenswrapper[4689]: I1210 12:15:44.596756 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:15:44 crc kubenswrapper[4689]: I1210 12:15:44.596819 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:15:44 crc kubenswrapper[4689]: I1210 12:15:44.596839 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:15:44 crc kubenswrapper[4689]: I1210 12:15:44.599581 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 10 12:15:45 crc kubenswrapper[4689]: W1210 12:15:45.010545 4689 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": net/http: TLS handshake timeout Dec 10 12:15:45 crc kubenswrapper[4689]: I1210 12:15:45.011026 4689 trace.go:236] Trace[423808335]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (10-Dec-2025 12:15:35.009) (total time: 10001ms): Dec 10 12:15:45 crc kubenswrapper[4689]: Trace[423808335]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (12:15:45.010) Dec 10 12:15:45 crc kubenswrapper[4689]: Trace[423808335]: [10.001431595s] [10.001431595s] END Dec 10 12:15:45 crc 
kubenswrapper[4689]: E1210 12:15:45.011056 4689 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" Dec 10 12:15:45 crc kubenswrapper[4689]: W1210 12:15:45.312589 4689 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": net/http: TLS handshake timeout Dec 10 12:15:45 crc kubenswrapper[4689]: I1210 12:15:45.312711 4689 trace.go:236] Trace[1842308031]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (10-Dec-2025 12:15:35.310) (total time: 10002ms): Dec 10 12:15:45 crc kubenswrapper[4689]: Trace[1842308031]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": net/http: TLS handshake timeout 10002ms (12:15:45.312) Dec 10 12:15:45 crc kubenswrapper[4689]: Trace[1842308031]: [10.002129568s] [10.002129568s] END Dec 10 12:15:45 crc kubenswrapper[4689]: E1210 12:15:45.312743 4689 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" Dec 10 12:15:45 crc kubenswrapper[4689]: E1210 12:15:45.444930 4689 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" interval="3.2s" Dec 10 12:15:45 crc kubenswrapper[4689]: I1210 12:15:45.597683 4689 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 10 12:15:45 crc kubenswrapper[4689]: I1210 12:15:45.599004 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:15:45 crc kubenswrapper[4689]: I1210 12:15:45.599073 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:15:45 crc kubenswrapper[4689]: I1210 12:15:45.599090 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:15:45 crc kubenswrapper[4689]: E1210 12:15:45.661302 4689 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": net/http: TLS handshake timeout" node="crc" Dec 10 12:15:45 crc kubenswrapper[4689]: W1210 12:15:45.894816 4689 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": net/http: TLS handshake timeout Dec 10 12:15:45 crc kubenswrapper[4689]: I1210 12:15:45.895180 4689 trace.go:236] Trace[1709971403]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (10-Dec-2025 12:15:35.892) (total time: 10002ms): Dec 10 12:15:45 crc kubenswrapper[4689]: Trace[1709971403]: ---"Objects listed" error:Get 
"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10002ms (12:15:45.894) Dec 10 12:15:45 crc kubenswrapper[4689]: Trace[1709971403]: [10.00241086s] [10.00241086s] END Dec 10 12:15:45 crc kubenswrapper[4689]: E1210 12:15:45.895340 4689 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" Dec 10 12:15:46 crc kubenswrapper[4689]: W1210 12:15:46.617205 4689 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": net/http: TLS handshake timeout Dec 10 12:15:46 crc kubenswrapper[4689]: I1210 12:15:46.617305 4689 trace.go:236] Trace[1274077823]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (10-Dec-2025 12:15:36.615) (total time: 10002ms): Dec 10 12:15:46 crc kubenswrapper[4689]: Trace[1274077823]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": net/http: TLS handshake timeout 10002ms (12:15:46.617) Dec 10 12:15:46 crc kubenswrapper[4689]: Trace[1274077823]: [10.002253644s] [10.002253644s] END Dec 10 12:15:46 crc kubenswrapper[4689]: E1210 12:15:46.617332 4689 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" Dec 10 12:15:48 crc kubenswrapper[4689]: E1210 12:15:48.430612 4689 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": net/http: TLS handshake timeout" event="&Event{ObjectMeta:{crc.187fd9b0d347aff9 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-10 12:15:32.434685945 +0000 UTC m=+0.222767083,LastTimestamp:2025-12-10 12:15:32.434685945 +0000 UTC m=+0.222767083,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 10 12:15:48 crc kubenswrapper[4689]: I1210 12:15:48.752315 4689 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Dec 10 12:15:48 crc kubenswrapper[4689]: I1210 12:15:48.861430 4689 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 10 12:15:48 crc kubenswrapper[4689]: I1210 12:15:48.863060 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:15:48 crc kubenswrapper[4689]: I1210 12:15:48.863118 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:15:48 crc kubenswrapper[4689]: I1210 12:15:48.863136 4689 kubelet_node_status.go:724] "Recording event message for 
node" node="crc" event="NodeHasSufficientPID" Dec 10 12:15:48 crc kubenswrapper[4689]: I1210 12:15:48.863168 4689 kubelet_node_status.go:76] "Attempting to register node" node="crc" Dec 10 12:15:49 crc kubenswrapper[4689]: I1210 12:15:49.434701 4689 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="Get \"https://192.168.126.11:6443/livez\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 10 12:15:49 crc kubenswrapper[4689]: I1210 12:15:49.435285 4689 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="Get \"https://192.168.126.11:6443/livez\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Dec 10 12:15:50 crc kubenswrapper[4689]: I1210 12:15:50.294396 4689 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Dec 10 12:15:50 crc kubenswrapper[4689]: I1210 12:15:50.294459 4689 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Dec 10 12:15:50 crc kubenswrapper[4689]: I1210 12:15:50.455339 4689 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 10 12:15:50 crc kubenswrapper[4689]: I1210 12:15:50.455987 4689 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Dec 10 12:15:50 crc kubenswrapper[4689]: I1210 12:15:50.811090 4689 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc" Dec 10 12:15:50 crc kubenswrapper[4689]: I1210 12:15:50.811288 4689 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 10 12:15:50 crc kubenswrapper[4689]: I1210 12:15:50.812586 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:15:50 crc kubenswrapper[4689]: I1210 12:15:50.812788 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:15:50 crc kubenswrapper[4689]: I1210 12:15:50.812951 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:15:50 crc kubenswrapper[4689]: I1210 12:15:50.825762 4689 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc" Dec 10 12:15:51 crc kubenswrapper[4689]: I1210 12:15:51.613454 4689 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 10 12:15:51 crc kubenswrapper[4689]: I1210 12:15:51.615347 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:15:51 crc kubenswrapper[4689]: I1210 12:15:51.615399 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:15:51 crc kubenswrapper[4689]: I1210 12:15:51.615408 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:15:53 crc kubenswrapper[4689]: E1210 12:15:53.966071 4689 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Dec 10 12:15:53 crc kubenswrapper[4689]: I1210 12:15:53.971099 4689 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Dec 10 12:15:54 crc kubenswrapper[4689]: I1210 12:15:54.441568 4689 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 10 12:15:54 crc kubenswrapper[4689]: I1210 12:15:54.441805 4689 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 10 12:15:54 crc kubenswrapper[4689]: I1210 12:15:54.443246 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:15:54 crc kubenswrapper[4689]: I1210 12:15:54.443313 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:15:54 crc kubenswrapper[4689]: I1210 12:15:54.443336 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:15:54 crc kubenswrapper[4689]: I1210 12:15:54.449003 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 10 12:15:54 crc kubenswrapper[4689]: I1210 12:15:54.971569 4689 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 10 12:15:54 crc kubenswrapper[4689]: I1210 12:15:54.971715 4689 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 10 12:15:54 crc kubenswrapper[4689]: I1210 12:15:54.973164 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:15:54 crc kubenswrapper[4689]: I1210 12:15:54.973231 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:15:54 crc kubenswrapper[4689]: I1210 12:15:54.973248 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:15:55 crc kubenswrapper[4689]: I1210 12:15:55.263466 4689 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Dec 10 12:15:55 crc kubenswrapper[4689]: I1210 12:15:55.264316 4689 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Dec 10 12:15:55 crc kubenswrapper[4689]: I1210 12:15:55.266418 4689 reconstruct.go:205] "DevicePaths of reconstructed volumes updated" Dec 10 12:15:55 crc kubenswrapper[4689]: E1210 12:15:55.268091 
4689 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes \"crc\" is forbidden: autoscaling.openshift.io/ManagedNode infra config cache not synchronized" node="crc" Dec 10 12:15:55 crc kubenswrapper[4689]: I1210 12:15:55.289255 4689 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Dec 10 12:15:55 crc kubenswrapper[4689]: I1210 12:15:55.346776 4689 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Liveness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:33066->192.168.126.11:17697: read: connection reset by peer" start-of-body= Dec 10 12:15:55 crc kubenswrapper[4689]: I1210 12:15:55.346840 4689 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:33066->192.168.126.11:17697: read: connection reset by peer" Dec 10 12:15:55 crc kubenswrapper[4689]: I1210 12:15:55.346892 4689 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:33080->192.168.126.11:17697: read: connection reset by peer" start-of-body= Dec 10 12:15:55 crc kubenswrapper[4689]: I1210 12:15:55.347015 4689 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:33080->192.168.126.11:17697: read: connection reset by peer" Dec 10 12:15:55 crc kubenswrapper[4689]: I1210 12:15:55.347497 4689 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body= Dec 10 12:15:55 crc kubenswrapper[4689]: I1210 12:15:55.347529 4689 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" Dec 10 12:15:55 crc kubenswrapper[4689]: I1210 12:15:55.687419 4689 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Dec 10 12:15:55 crc kubenswrapper[4689]: I1210 12:15:55.966933 4689 apiserver.go:52] "Watching apiserver" Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.435778 4689 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.436368 4689 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-network-operator/network-operator-58b4c7f79c-55gtf","openshift-network-console/networking-console-plugin-85b44fc459-gdk6g","openshift-network-diagnostics/network-check-source-55646444c4-trplf","openshift-network-diagnostics/network-check-target-xd92c","openshift-network-node-identity/network-node-identity-vrzqb","openshift-network-operator/iptables-alerter-4ln5h"] Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.436927 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.437371 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.437423 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 10 12:15:57 crc kubenswrapper[4689]: E1210 12:15:57.437519 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.437570 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.437836 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 10 12:15:57 crc kubenswrapper[4689]: E1210 12:15:57.437948 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.438033 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 10 12:15:57 crc kubenswrapper[4689]: E1210 12:15:57.438133 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.439712 4689 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.440879 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.440956 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.442388 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.443033 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.443799 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.446011 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.446066 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.446132 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.446522 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.483410 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.483500 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.483533 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.483571 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.483617 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.483656 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.483688 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.483718 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.483751 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.483783 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.483814 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.483842 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.483871 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.483902 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.483932 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2m85\" 
(UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") pod \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\" (UID: \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\") " Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.483960 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.484020 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.484053 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.484085 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.484114 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.484146 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.484181 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.484213 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.484243 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.484273 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.484305 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.484336 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.484367 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.484399 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.484430 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.484467 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.484531 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.484581 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.484644 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.484693 4689 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.484726 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.484758 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.484768 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.484789 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.484833 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "cni-binary-copy". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.484870 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.485025 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.485086 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.485137 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.485185 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.485234 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.485280 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.485323 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.485372 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.485416 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") pod 
\"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.485463 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.485508 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.485554 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.485597 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.485639 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.485684 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.485726 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.485772 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.485816 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.485897 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: 
\"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.485947 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.486027 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.486036 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.486049 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.486120 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.486173 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.486223 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.486269 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.486315 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " 
Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.486360 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.486414 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.486459 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.486506 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.486553 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.486596 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.486635 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "metrics-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.486652 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.486701 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.486751 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.486798 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.486846 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.486889 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.486935 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.487016 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.487067 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.487135 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") pod 
\"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.487185 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.487236 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.487282 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.487331 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.487383 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.487430 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.487482 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.487529 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.487576 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.487621 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") pod 
\"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.487670 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.487719 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.487765 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.487809 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.487857 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.487905 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.487954 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.495127 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.495184 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.495223 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf2bz\" (UniqueName: 
\"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.495259 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.495297 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.495335 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.495372 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.495412 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.495451 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.495491 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.495523 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.495557 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.495594 4689 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.495629 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.495664 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.495699 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.495733 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.495767 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.495811 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.495845 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.495879 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.495911 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.495944 4689 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.496005 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.496040 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.496083 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.496118 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.496154 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.496189 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.496221 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.496256 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.496291 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.496335 4689 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.496370 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.496404 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.496438 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.496475 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.496513 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.496548 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.496581 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.496616 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.496648 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 
12:15:57.496688 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.496723 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.496758 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.496793 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.496827 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.496863 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.496900 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.496939 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.497001 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.497042 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: 
\"5b88f790-22fa-440e-b583-365168c0b23d\") " Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.497078 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.497115 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.497159 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.497199 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.497236 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.497280 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.497317 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.497355 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.497392 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.497432 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: 
\"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.497466 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.497500 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.497542 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.497576 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.497610 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.497645 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.497682 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.497723 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.497759 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.501576 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vt5rc\" (UniqueName: 
\"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") pod \"44663579-783b-4372-86d6-acf235a62d72\" (UID: \"44663579-783b-4372-86d6-acf235a62d72\") " Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.501644 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.501690 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") pod \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\" (UID: \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\") " Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.501729 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.501766 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.501912 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.501963 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.502033 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.502099 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.502138 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 
12:15:57.502176 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") pod \"49ef4625-1d3a-4a9f-b595-c2433d32326d\" (UID: \"49ef4625-1d3a-4a9f-b595-c2433d32326d\") " Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.502211 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.502248 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.502286 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.502324 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.502366 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.502407 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.502444 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.502481 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.502520 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") pod 
\"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.502558 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.502593 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.502696 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.502721 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.502744 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.502767 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.502816 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.502850 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.502879 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " 
pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.502906 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.502931 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.502953 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.502997 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.503027 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.503068 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.503093 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.503117 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.503143 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.503168 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.503195 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.503253 4689 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") on node \"crc\" DevicePath \"\"" Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.503271 4689 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.503284 4689 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") on node \"crc\" DevicePath \"\"" Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.503299 4689 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.506743 4689 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.543451 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.487068 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" (OuterVolumeSpecName: "kube-api-access-kfwg7") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "kube-api-access-kfwg7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.487454 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.488692 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "image-import-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.489650 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" (OuterVolumeSpecName: "kube-api-access-mg5zb") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "kube-api-access-mg5zb". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.490070 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" (OuterVolumeSpecName: "kube-api-access-249nr") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "kube-api-access-249nr". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.490752 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" (OuterVolumeSpecName: "config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.491191 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" (OuterVolumeSpecName: "config") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.491210 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "stats-auth". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.491667 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" (OuterVolumeSpecName: "utilities") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.499795 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" (OuterVolumeSpecName: "config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.500142 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.501143 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.504056 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.506099 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.515348 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.515580 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" (OuterVolumeSpecName: "kube-api-access-rnphk") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "kube-api-access-rnphk". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.515580 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" (OuterVolumeSpecName: "kube-api-access-279lb") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "kube-api-access-279lb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.516034 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.516155 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.518275 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.518721 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.518944 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "node-bootstrap-token". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.519854 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.520221 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" (OuterVolumeSpecName: "config") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.520625 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). 
InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.521471 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.521178 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" (OuterVolumeSpecName: "client-ca") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.521579 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" (OuterVolumeSpecName: "kube-api-access-mnrrd") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "kube-api-access-mnrrd". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.521795 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" (OuterVolumeSpecName: "kube-api-access-v47cf") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "kube-api-access-v47cf". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.522006 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.522433 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "package-server-manager-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.522537 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" (OuterVolumeSpecName: "images") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "images". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.522888 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" (OuterVolumeSpecName: "config") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.523059 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" (OuterVolumeSpecName: "kube-api-access-zkvpv") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "kube-api-access-zkvpv". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.523293 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "multus-daemon-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.523337 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" (OuterVolumeSpecName: "kube-api-access-x7zkh") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "kube-api-access-x7zkh". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.523524 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" (OuterVolumeSpecName: "kube-api-access-cfbct") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "kube-api-access-cfbct". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.524044 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" (OuterVolumeSpecName: "config") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.524259 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.524479 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" (OuterVolumeSpecName: "kube-api-access-fcqwp") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). 
InnerVolumeSpecName "kube-api-access-fcqwp". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.524555 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" (OuterVolumeSpecName: "audit") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "audit". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.524768 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.524880 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" (OuterVolumeSpecName: "kube-api-access-htfz6") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "kube-api-access-htfz6". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.526334 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" (OuterVolumeSpecName: "kube-api-access-x2m85") pod "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" (UID: "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d"). InnerVolumeSpecName "kube-api-access-x2m85". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.526850 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" (OuterVolumeSpecName: "config") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.527039 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" (OuterVolumeSpecName: "utilities") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.527559 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.527904 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.528192 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.528888 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" (OuterVolumeSpecName: "config") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.530416 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.530428 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" (OuterVolumeSpecName: "kube-api-access-vt5rc") pod "44663579-783b-4372-86d6-acf235a62d72" (UID: "44663579-783b-4372-86d6-acf235a62d72"). InnerVolumeSpecName "kube-api-access-vt5rc". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.530714 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "webhook-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.531112 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.531365 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-oauth-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.531558 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" (OuterVolumeSpecName: "kube-api-access-x4zgh") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "kube-api-access-x4zgh". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.534108 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" (OuterVolumeSpecName: "kube-api-access-pj782") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "kube-api-access-pj782". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.534372 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" (OuterVolumeSpecName: "kube-api-access-jhbk2") pod "bd23aa5c-e532-4e53-bccf-e79f130c5ae8" (UID: "bd23aa5c-e532-4e53-bccf-e79f130c5ae8"). InnerVolumeSpecName "kube-api-access-jhbk2". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.534601 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" (OuterVolumeSpecName: "kube-api-access-2d4wz") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "kube-api-access-2d4wz". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.534875 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.535192 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" (OuterVolumeSpecName: "kube-api-access-ngvvp") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "kube-api-access-ngvvp". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.535988 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" (OuterVolumeSpecName: "config") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.541013 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.541668 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.542669 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "image-registry-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.543265 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.543611 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" (OuterVolumeSpecName: "config-volume") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.543885 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.543952 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "bound-sa-token". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.544756 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.545034 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.545054 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.548836 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" (OuterVolumeSpecName: "kube-api-access-lz9wn") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "kube-api-access-lz9wn". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.551356 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" (OuterVolumeSpecName: "kube-api-access-s4n52") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "kube-api-access-s4n52". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.551580 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.551997 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-certificates". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.552165 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.552331 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" (OuterVolumeSpecName: "kube-api-access-6ccd8") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "kube-api-access-6ccd8". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.553230 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" (OuterVolumeSpecName: "images") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.553392 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "mcc-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.554094 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.554673 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" (OuterVolumeSpecName: "utilities") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.559076 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "default-certificate". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.559355 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "profile-collector-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.559657 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "mcd-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.559689 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.560021 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" (OuterVolumeSpecName: "kube-api-access-6g6sz") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "kube-api-access-6g6sz". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.560325 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.560334 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" (OuterVolumeSpecName: "service-ca") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.560548 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.560659 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" (OuterVolumeSpecName: "kube-api-access-8tdtz") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "kube-api-access-8tdtz". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.560821 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). 
InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.561015 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" (OuterVolumeSpecName: "kube-api-access-pcxfs") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "kube-api-access-pcxfs". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.561037 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" (OuterVolumeSpecName: "kube-api-access-w7l8j") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "kube-api-access-w7l8j". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.561208 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.561348 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.562225 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.562253 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-cabundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.562567 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "metrics-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.562674 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" (OuterVolumeSpecName: "kube-api-access-pjr6v") pod "49ef4625-1d3a-4a9f-b595-c2433d32326d" (UID: "49ef4625-1d3a-4a9f-b595-c2433d32326d"). InnerVolumeSpecName "kube-api-access-pjr6v". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.562951 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" (OuterVolumeSpecName: "kube-api-access-lzf88") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "kube-api-access-lzf88". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.563130 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.563293 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" (OuterVolumeSpecName: "cert") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.563553 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.564139 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" (OuterVolumeSpecName: "kube-api-access-xcgwh") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "kube-api-access-xcgwh". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.564197 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" (OuterVolumeSpecName: "kube-api-access-7c4vf") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "kube-api-access-7c4vf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.568069 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-crc"] Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.568192 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.568401 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.568467 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.568883 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.568899 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.568927 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.569004 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "audit-policies". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.569033 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" (OuterVolumeSpecName: "kube-api-access-nzwt7") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "kube-api-access-nzwt7". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 12:15:57 crc kubenswrapper[4689]: E1210 12:15:57.569228 4689 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 10 12:15:57 crc kubenswrapper[4689]: E1210 12:15:57.569274 4689 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-10 12:15:58.069100547 +0000 UTC m=+25.857181685 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.569321 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 12:15:57 crc kubenswrapper[4689]: E1210 12:15:57.569348 4689 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-10 12:15:58.069336192 +0000 UTC m=+25.857417330 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.569515 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" (OuterVolumeSpecName: "kube-api-access-wxkg8") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "kube-api-access-wxkg8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.569576 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.569933 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" (OuterVolumeSpecName: "kube-api-access-tk88c") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "kube-api-access-tk88c". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.570164 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.570245 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.570468 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "samples-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.570483 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.570805 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-session". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.570865 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.570936 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" (OuterVolumeSpecName: "utilities") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.571184 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "apiservice-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 12:15:57 crc kubenswrapper[4689]: E1210 12:15:57.571365 4689 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 10 12:15:57 crc kubenswrapper[4689]: E1210 12:15:57.579703 4689 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-10 12:15:58.079673609 +0000 UTC m=+25.867754747 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.579705 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.576845 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.569281 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" (OuterVolumeSpecName: "kube-api-access-qg5z5") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "kube-api-access-qg5z5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.571519 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "machine-approver-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.571685 4689 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.571762 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" (OuterVolumeSpecName: "kube-api-access-zgdk5") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "kube-api-access-zgdk5". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.582311 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.582419 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.582468 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.582518 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.572121 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "proxy-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.572149 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" (OuterVolumeSpecName: "signing-key") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.572293 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.572421 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.572484 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.573012 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" (OuterVolumeSpecName: "console-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.574633 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" (OuterVolumeSpecName: "service-ca") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.574882 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" (OuterVolumeSpecName: "serviceca") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "serviceca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.575718 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" (OuterVolumeSpecName: "kube-api-access-d4lsv") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "kube-api-access-d4lsv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.569286 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.575854 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.575933 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" (OuterVolumeSpecName: "certs") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.577663 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" (OuterVolumeSpecName: "kube-api-access-gf66m") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "kube-api-access-gf66m". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.577782 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" (OuterVolumeSpecName: "kube-api-access-9xfj7") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "kube-api-access-9xfj7". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.578073 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.578112 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" (OuterVolumeSpecName: "kube-api-access-jkwtn") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "kube-api-access-jkwtn". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.578226 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.578480 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" (OuterVolumeSpecName: "client-ca") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.579345 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" (OuterVolumeSpecName: "kube-api-access-dbsvg") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "kube-api-access-dbsvg". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.579441 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" (OuterVolumeSpecName: "kube-api-access-xcphl") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "kube-api-access-xcphl". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.583144 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" (OuterVolumeSpecName: "config") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.583554 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.585225 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.585320 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.585386 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-serving-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.585420 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" (OuterVolumeSpecName: "kube-api-access-w9rds") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "kube-api-access-w9rds". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.585493 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.586258 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" (OuterVolumeSpecName: "kube-api-access-qs4fp") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "kube-api-access-qs4fp". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.587803 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.587906 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.588491 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.589055 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" (OuterVolumeSpecName: "config") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.589177 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" (OuterVolumeSpecName: "kube-api-access-2w9zh") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "kube-api-access-2w9zh". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.591883 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.592471 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.592489 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" (OuterVolumeSpecName: "kube-api-access-bf2bz") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "kube-api-access-bf2bz". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.593963 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" (OuterVolumeSpecName: "kube-api-access-fqsjt") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "kube-api-access-fqsjt". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.594321 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "available-featuregates". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.595523 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" (OuterVolumeSpecName: "kube-api-access-w4xd4") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "kube-api-access-w4xd4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.595807 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" (OuterVolumeSpecName: "kube-api-access-4d4hj") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "kube-api-access-4d4hj". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.595855 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "machine-api-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.596121 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" (OuterVolumeSpecName: "config") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.596720 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.596728 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.596753 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.596965 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" (OuterVolumeSpecName: "config") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.597194 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" (OuterVolumeSpecName: "config") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.597316 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.597549 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" (OuterVolumeSpecName: "kube-api-access-d6qdx") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "kube-api-access-d6qdx". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.597894 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" (OuterVolumeSpecName: "kube-api-access-sb6h7") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "kube-api-access-sb6h7". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.604539 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.604629 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.604702 4689 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.604718 4689 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") on node \"crc\" DevicePath \"\"" Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.604730 4689 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.604742 4689 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.604758 4689 reconciler_common.go:293] "Volume detached for volume \"signing-cabundle\" (UniqueName: 
\"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") on node \"crc\" DevicePath \"\"" Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.604770 4689 reconciler_common.go:293] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") on node \"crc\" DevicePath \"\"" Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.604781 4689 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") on node \"crc\" DevicePath \"\"" Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.604794 4689 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.604805 4689 reconciler_common.go:293] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") on node \"crc\" DevicePath \"\"" Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.604816 4689 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.604827 4689 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.604840 4689 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") on node \"crc\" DevicePath \"\"" Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.604853 4689 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") on node \"crc\" DevicePath \"\"" Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.604865 4689 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") on node \"crc\" DevicePath \"\"" Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.604877 4689 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.604890 4689 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") on node \"crc\" DevicePath \"\"" Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.604901 4689 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") on node \"crc\" DevicePath \"\"" Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.604913 4689 reconciler_common.go:293] "Volume detached for volume \"utilities\" 
(UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") on node \"crc\" DevicePath \"\"" Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.604924 4689 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") on node \"crc\" DevicePath \"\"" Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.604935 4689 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") on node \"crc\" DevicePath \"\"" Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.604946 4689 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.604958 4689 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") on node \"crc\" DevicePath \"\"" Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.604988 4689 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") on node \"crc\" DevicePath \"\"" Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.605001 4689 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") on node \"crc\" DevicePath \"\"" Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.605012 4689 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") on node \"crc\" DevicePath \"\"" Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.605023 4689 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") on node \"crc\" DevicePath \"\"" Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.605036 4689 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.605046 4689 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.605058 4689 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") on node \"crc\" DevicePath \"\"" Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.605070 4689 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") on node \"crc\" DevicePath \"\"" Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.605080 4689 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: 
\"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.605094 4689 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.605106 4689 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.605121 4689 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") on node \"crc\" DevicePath \"\"" Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.605133 4689 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") on node \"crc\" DevicePath \"\"" Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.605145 4689 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.605157 4689 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") on node \"crc\" DevicePath \"\"" Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.605170 4689 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.605182 4689 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") on node \"crc\" DevicePath \"\"" Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.605194 4689 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") on node \"crc\" DevicePath \"\"" Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.605207 4689 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") on node \"crc\" DevicePath \"\"" Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.605218 4689 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") on node \"crc\" DevicePath \"\"" Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.605230 4689 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") on node \"crc\" DevicePath \"\"" Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.605243 4689 
reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") on node \"crc\" DevicePath \"\"" Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.605255 4689 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.605267 4689 reconciler_common.go:293] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") on node \"crc\" DevicePath \"\"" Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.605281 4689 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.605292 4689 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") on node \"crc\" DevicePath \"\"" Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.605303 4689 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") on node \"crc\" DevicePath \"\"" Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.605315 4689 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.605326 4689 reconciler_common.go:293] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\"" Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.605338 4689 reconciler_common.go:293] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") on node \"crc\" DevicePath \"\"" Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.605350 4689 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") on node \"crc\" DevicePath \"\"" Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.605362 4689 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.605374 4689 reconciler_common.go:293] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") on node \"crc\" DevicePath \"\"" Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.605385 4689 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.605396 4689 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") on node \"crc\" DevicePath \"\"" Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.605409 4689 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") on node \"crc\" DevicePath \"\"" Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.605420 4689 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") on node \"crc\" DevicePath \"\"" Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.605431 4689 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.605443 4689 reconciler_common.go:293] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.605456 4689 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") on node \"crc\" DevicePath \"\"" Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.605556 4689 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") on node \"crc\" DevicePath \"\"" Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.605570 4689 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") on node \"crc\" DevicePath \"\"" Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.605582 4689 reconciler_common.go:293] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") on node \"crc\" DevicePath \"\"" Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.605593 4689 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") on node \"crc\" DevicePath \"\"" Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.605605 4689 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") on node \"crc\" DevicePath \"\"" Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.605615 4689 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") on node \"crc\" DevicePath \"\"" Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.605627 4689 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.605637 4689 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") on node \"crc\" DevicePath \"\"" Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.605648 4689 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.605661 4689 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.605673 4689 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") on node \"crc\" DevicePath \"\"" Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.605684 4689 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") on node \"crc\" DevicePath \"\"" Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.605695 4689 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.605708 4689 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") on node \"crc\" DevicePath \"\"" Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.605720 4689 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") on node \"crc\" DevicePath \"\"" Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.605732 4689 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") on node \"crc\" DevicePath \"\"" Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.605743 4689 reconciler_common.go:293] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.605754 4689 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") on node \"crc\" DevicePath \"\"" Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.605767 4689 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") on node \"crc\" DevicePath \"\"" Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.605778 4689 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.605789 4689 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.605801 4689 reconciler_common.go:293] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") on node \"crc\" DevicePath \"\"" Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.605813 4689 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") on node \"crc\" DevicePath \"\"" Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.605824 4689 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") on node \"crc\" DevicePath \"\"" Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.605836 4689 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") on node \"crc\" DevicePath \"\"" Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.605847 4689 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.605857 4689 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") on node \"crc\" DevicePath \"\"" Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.605868 4689 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") on node \"crc\" DevicePath \"\"" Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.605879 4689 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.605890 4689 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") on node \"crc\" DevicePath \"\"" Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.605900 4689 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") on node \"crc\" DevicePath \"\"" Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.605911 4689 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") on node \"crc\" DevicePath \"\"" Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.605921 4689 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: 
\"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") on node \"crc\" DevicePath \"\"" Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.605932 4689 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") on node \"crc\" DevicePath \"\"" Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.605943 4689 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.605954 4689 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") on node \"crc\" DevicePath \"\"" Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.605965 4689 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.605995 4689 reconciler_common.go:293] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") on node \"crc\" DevicePath \"\"" Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.606142 4689 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") on node \"crc\" DevicePath \"\"" Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.606160 4689 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") on node \"crc\" DevicePath \"\"" Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.606172 4689 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") on node \"crc\" DevicePath \"\"" Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.606183 4689 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.606193 4689 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") on node \"crc\" DevicePath \"\"" Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.606205 4689 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") on node \"crc\" DevicePath \"\"" Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.606216 4689 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") on node \"crc\" DevicePath \"\"" Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.606228 4689 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzf88\" (UniqueName: 
\"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") on node \"crc\" DevicePath \"\"" Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.606240 4689 reconciler_common.go:293] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") on node \"crc\" DevicePath \"\"" Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.606251 4689 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.606262 4689 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.606274 4689 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") on node \"crc\" DevicePath \"\"" Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.606287 4689 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.606299 4689 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") on node \"crc\" DevicePath \"\"" Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.606309 4689 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") on node \"crc\" DevicePath \"\"" Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.606321 4689 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") on node \"crc\" DevicePath \"\"" Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.606332 4689 reconciler_common.go:293] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") on node \"crc\" DevicePath \"\"" Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.606342 4689 reconciler_common.go:293] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") on node \"crc\" DevicePath \"\"" Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.606353 4689 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") on node \"crc\" DevicePath \"\"" Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.606367 4689 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.606377 4689 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") on node 
\"crc\" DevicePath \"\"" Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.606389 4689 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") on node \"crc\" DevicePath \"\"" Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.606400 4689 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") on node \"crc\" DevicePath \"\"" Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.606411 4689 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.606423 4689 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.606436 4689 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") on node \"crc\" DevicePath \"\"" Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.606448 4689 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") on node \"crc\" DevicePath \"\"" Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.606461 4689 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") on node \"crc\" DevicePath \"\"" Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.606472 4689 reconciler_common.go:293] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.606485 4689 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") on node \"crc\" DevicePath \"\"" Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.606495 4689 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.606506 4689 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") on node \"crc\" DevicePath \"\"" Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.606517 4689 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") on node \"crc\" DevicePath \"\"" Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.606528 4689 reconciler_common.go:293] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") on node \"crc\" DevicePath \"\"" 
Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.606539 4689 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") on node \"crc\" DevicePath \"\"" Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.606550 4689 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") on node \"crc\" DevicePath \"\"" Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.606561 4689 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.606571 4689 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.606582 4689 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") on node \"crc\" DevicePath \"\"" Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.606593 4689 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") on node \"crc\" DevicePath \"\"" Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.606604 4689 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") on node \"crc\" DevicePath \"\"" Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.606614 4689 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") on node \"crc\" DevicePath \"\"" Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.606626 4689 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") on node \"crc\" DevicePath \"\"" Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.606643 4689 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") on node \"crc\" DevicePath \"\"" Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.606655 4689 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") on node \"crc\" DevicePath \"\"" Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.606666 4689 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") on node \"crc\" DevicePath \"\"" Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.606679 4689 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") on node \"crc\" DevicePath \"\"" Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 
12:15:57.606693 4689 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") on node \"crc\" DevicePath \"\"" Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.606705 4689 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.606716 4689 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") on node \"crc\" DevicePath \"\"" Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.606727 4689 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") on node \"crc\" DevicePath \"\"" Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.606738 4689 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.606749 4689 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.606761 4689 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") on node \"crc\" DevicePath \"\"" Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.606774 4689 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") on node \"crc\" DevicePath \"\"" Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.606785 4689 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") on node \"crc\" DevicePath \"\"" Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.606796 4689 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") on node \"crc\" DevicePath \"\"" Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.606806 4689 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") on node \"crc\" DevicePath \"\"" Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.606817 4689 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.606829 4689 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") on node \"crc\" DevicePath \"\"" Dec 10 12:15:57 crc 
kubenswrapper[4689]: I1210 12:15:57.606840 4689 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.606852 4689 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") on node \"crc\" DevicePath \"\"" Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.606864 4689 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") on node \"crc\" DevicePath \"\"" Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.606879 4689 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") on node \"crc\" DevicePath \"\"" Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.606892 4689 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.606903 4689 reconciler_common.go:293] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.606915 4689 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") on node \"crc\" DevicePath \"\"" Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.606927 4689 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") on node \"crc\" DevicePath \"\"" Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.606939 4689 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.606950 4689 reconciler_common.go:293] "Volume detached for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") on node \"crc\" DevicePath \"\"" Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.606960 4689 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") on node \"crc\" DevicePath \"\"" Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.606988 4689 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") on node \"crc\" DevicePath \"\"" Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.607000 4689 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") on node \"crc\" DevicePath \"\"" Dec 10 12:15:57 crc kubenswrapper[4689]: 
I1210 12:15:57.607011 4689 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.607022 4689 reconciler_common.go:293] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") on node \"crc\" DevicePath \"\"" Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.607034 4689 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") on node \"crc\" DevicePath \"\"" Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.607045 4689 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") on node \"crc\" DevicePath \"\"" Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.607055 4689 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") on node \"crc\" DevicePath \"\"" Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.607066 4689 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.607077 4689 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") on node \"crc\" DevicePath \"\"" Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.607088 4689 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") on node \"crc\" DevicePath \"\"" Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.607099 4689 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") on node \"crc\" DevicePath \"\"" Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.607155 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.607209 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.618592 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "oauth-serving-cert". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.618892 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.618903 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 10 12:15:57 crc kubenswrapper[4689]: E1210 12:15:57.618916 4689 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 10 12:15:57 crc kubenswrapper[4689]: E1210 12:15:57.619068 4689 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 10 12:15:57 crc kubenswrapper[4689]: E1210 12:15:57.618997 4689 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 10 12:15:57 crc kubenswrapper[4689]: E1210 12:15:57.619081 4689 projected.go:194] Error preparing data for 
projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 10 12:15:57 crc kubenswrapper[4689]: E1210 12:15:57.619105 4689 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 10 12:15:57 crc kubenswrapper[4689]: E1210 12:15:57.619123 4689 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 10 12:15:57 crc kubenswrapper[4689]: E1210 12:15:57.619135 4689 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-12-10 12:15:58.119117103 +0000 UTC m=+25.907198241 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 10 12:15:57 crc kubenswrapper[4689]: E1210 12:15:57.619174 4689 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-12-10 12:15:58.119157784 +0000 UTC m=+25.907238922 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.619565 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.632311 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.632806 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.635187 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.636546 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.639286 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" (OuterVolumeSpecName: "config") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.641629 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.647403 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:57Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.652626 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.659470 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.661041 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.679835 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.689351 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"18bf8be2-3022-4c8b-a062-e12ddac113a8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://44524b486bd4c6b69eb1219605d0545eaa129605484dfbca87ca741ce71c2991\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://604acbb51bb890eb420955051c89aa5a8f2005ce63f1fe39c09c35dc3c8b4470\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d4addfb661bd4ba43817e6e5db29f30589839e5afe46a8f271e43c98920efdc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a68967b286ccc9c486e05b03201574bfa811ee68b0cbcbb2b0b1a379f5c1bab\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T12:15:32Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.698317 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:57Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.707516 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:57Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.707714 4689 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.707748 4689 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.707762 4689 reconciler_common.go:293] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") on node \"crc\" DevicePath \"\"" Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.707776 4689 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.707789 4689 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.707800 4689 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.707812 4689 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.707824 4689 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") on node \"crc\" DevicePath \"\"" Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.723634 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.738479 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:57Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.751100 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.765299 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:57Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.789624 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Dec 10 12:15:57 crc kubenswrapper[4689]: W1210 12:15:57.806165 4689 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod37a5e44f_9a88_4405_be8a_b645485e7312.slice/crio-f4a5aa7fff88cc07dc2f0bba2dcf7a0b6f2b087f95440734b64551096df807ec WatchSource:0}: Error finding container f4a5aa7fff88cc07dc2f0bba2dcf7a0b6f2b087f95440734b64551096df807ec: Status 404 returned error can't find the container with id f4a5aa7fff88cc07dc2f0bba2dcf7a0b6f2b087f95440734b64551096df807ec Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.815300 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.870717 4689 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.981560 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.984426 4689 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="cc1b41d65298f8986fa477bbe4462be91f61b46be4420fac49fe9b1cff91d4d3" exitCode=255 Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.984494 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"cc1b41d65298f8986fa477bbe4462be91f61b46be4420fac49fe9b1cff91d4d3"} Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.985707 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"f4a5aa7fff88cc07dc2f0bba2dcf7a0b6f2b087f95440734b64551096df807ec"} Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.997828 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Dec 10 12:15:57 crc kubenswrapper[4689]: I1210 12:15:57.998150 4689 scope.go:117] "RemoveContainer" containerID="cc1b41d65298f8986fa477bbe4462be91f61b46be4420fac49fe9b1cff91d4d3" Dec 10 12:15:58 crc kubenswrapper[4689]: I1210 12:15:58.001958 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:57Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 10 12:15:58 crc kubenswrapper[4689]: I1210 12:15:58.013480 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:57Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 10 12:15:58 crc kubenswrapper[4689]: I1210 12:15:58.033395 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"18bf8be2-3022-4c8b-a062-e12ddac113a8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://44524b486bd4c6b69eb1219605d0545eaa129605484dfbca87ca741ce71c2991\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://604acbb51bb890eb420955051c89aa5a8f2005ce63f1fe39c09c35dc3c8b4470\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cer
t-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d4addfb661bd4ba43817e6e5db29f30589839e5afe46a8f271e43c98920efdc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a68967b286ccc9c486e05b03201574bfa811ee68b0cbcbb2b0b1a379f5c1bab\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T12:15:32Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 10 12:15:58 crc kubenswrapper[4689]: I1210 12:15:58.055356 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:57Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 10 12:15:58 crc kubenswrapper[4689]: I1210 12:15:58.070214 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 10 12:15:58 crc kubenswrapper[4689]: I1210 12:15:58.090725 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:57Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 10 12:15:58 crc kubenswrapper[4689]: I1210 12:15:58.106831 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 10 12:15:58 crc kubenswrapper[4689]: I1210 12:15:58.111174 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 10 12:15:58 crc kubenswrapper[4689]: E1210 12:15:58.111295 4689 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-10 12:15:59.11127463 +0000 UTC m=+26.899355768 (durationBeforeRetry 1s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 12:15:58 crc kubenswrapper[4689]: I1210 12:15:58.111447 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 10 12:15:58 crc kubenswrapper[4689]: E1210 12:15:58.111586 4689 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 10 12:15:58 crc kubenswrapper[4689]: E1210 12:15:58.111630 4689 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-10 12:15:59.111619908 +0000 UTC m=+26.899701046 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 10 12:15:58 crc kubenswrapper[4689]: E1210 12:15:58.111653 4689 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 10 12:15:58 crc kubenswrapper[4689]: E1210 12:15:58.111716 4689 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-10 12:15:59.11169995 +0000 UTC m=+26.899781098 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 10 12:15:58 crc kubenswrapper[4689]: I1210 12:15:58.111654 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 10 12:15:58 crc kubenswrapper[4689]: I1210 12:15:58.212628 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 10 12:15:58 crc kubenswrapper[4689]: I1210 12:15:58.212691 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 10 12:15:58 crc kubenswrapper[4689]: E1210 12:15:58.212852 4689 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 10 12:15:58 crc kubenswrapper[4689]: E1210 12:15:58.212877 4689 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 10 12:15:58 crc kubenswrapper[4689]: E1210 12:15:58.212883 4689 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 10 12:15:58 crc kubenswrapper[4689]: E1210 12:15:58.212944 4689 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 10 12:15:58 crc 
kubenswrapper[4689]: E1210 12:15:58.212961 4689 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 10 12:15:58 crc kubenswrapper[4689]: E1210 12:15:58.212895 4689 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 10 12:15:58 crc kubenswrapper[4689]: E1210 12:15:58.213051 4689 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-12-10 12:15:59.213027644 +0000 UTC m=+27.001108872 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 10 12:15:58 crc kubenswrapper[4689]: E1210 12:15:58.213075 4689 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-12-10 12:15:59.213065625 +0000 UTC m=+27.001146903 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 10 12:15:58 crc kubenswrapper[4689]: I1210 12:15:58.497292 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 10 12:15:58 crc kubenswrapper[4689]: E1210 12:15:58.497423 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 10 12:15:58 crc kubenswrapper[4689]: I1210 12:15:58.500692 4689 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01ab3dd5-8196-46d0-ad33-122e2ca51def" path="/var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes" Dec 10 12:15:58 crc kubenswrapper[4689]: I1210 12:15:58.501484 4689 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" path="/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes" Dec 10 12:15:58 crc kubenswrapper[4689]: I1210 12:15:58.502792 4689 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09efc573-dbb6-4249-bd59-9b87aba8dd28" path="/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes" Dec 10 12:15:58 crc kubenswrapper[4689]: I1210 12:15:58.503518 4689 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b574797-001e-440a-8f4e-c0be86edad0f" path="/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes" Dec 10 12:15:58 crc kubenswrapper[4689]: I1210 12:15:58.504640 4689 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b78653f-4ff9-4508-8672-245ed9b561e3" path="/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes" Dec 10 12:15:58 crc kubenswrapper[4689]: I1210 12:15:58.505211 4689 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1386a44e-36a2-460c-96d0-0359d2b6f0f5" path="/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes" Dec 10 12:15:58 crc kubenswrapper[4689]: I1210 12:15:58.505896 4689 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bf7eb37-55a3-4c65-b768-a94c82151e69" path="/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes" Dec 10 12:15:58 crc kubenswrapper[4689]: I1210 12:15:58.507015 4689 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d611f23-29be-4491-8495-bee1670e935f" path="/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes" Dec 10 12:15:58 crc kubenswrapper[4689]: I1210 12:15:58.507747 4689 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20b0d48f-5fd6-431c-a545-e3c800c7b866" path="/var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/volumes" Dec 10 12:15:58 crc kubenswrapper[4689]: I1210 12:15:58.508991 4689 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" path="/var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes" Dec 10 12:15:58 crc kubenswrapper[4689]: I1210 12:15:58.509642 4689 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22c825df-677d-4ca6-82db-3454ed06e783" path="/var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes" Dec 10 12:15:58 crc kubenswrapper[4689]: I1210 12:15:58.510888 4689 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25e176fe-21b4-4974-b1ed-c8b94f112a7f" path="/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes" Dec 10 12:15:58 crc kubenswrapper[4689]: I1210 12:15:58.511471 4689 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" path="/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes" Dec 10 12:15:58 crc kubenswrapper[4689]: I1210 12:15:58.512135 4689 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31d8b7a1-420e-4252-a5b7-eebe8a111292" 
path="/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes" Dec 10 12:15:58 crc kubenswrapper[4689]: I1210 12:15:58.513416 4689 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ab1a177-2de0-46d9-b765-d0d0649bb42e" path="/var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/volumes" Dec 10 12:15:58 crc kubenswrapper[4689]: I1210 12:15:58.514127 4689 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" path="/var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes" Dec 10 12:15:58 crc kubenswrapper[4689]: I1210 12:15:58.515692 4689 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43509403-f426-496e-be36-56cef71462f5" path="/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes" Dec 10 12:15:58 crc kubenswrapper[4689]: I1210 12:15:58.516255 4689 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44663579-783b-4372-86d6-acf235a62d72" path="/var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/volumes" Dec 10 12:15:58 crc kubenswrapper[4689]: I1210 12:15:58.516857 4689 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="496e6271-fb68-4057-954e-a0d97a4afa3f" path="/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes" Dec 10 12:15:58 crc kubenswrapper[4689]: I1210 12:15:58.517947 4689 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" path="/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes" Dec 10 12:15:58 crc kubenswrapper[4689]: I1210 12:15:58.518505 4689 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49ef4625-1d3a-4a9f-b595-c2433d32326d" path="/var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/volumes" Dec 10 12:15:58 crc kubenswrapper[4689]: I1210 12:15:58.519645 4689 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bb40260-dbaa-4fb0-84df-5e680505d512" path="/var/lib/kubelet/pods/4bb40260-dbaa-4fb0-84df-5e680505d512/volumes" Dec 10 12:15:58 crc kubenswrapper[4689]: I1210 12:15:58.520185 4689 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5225d0e4-402f-4861-b410-819f433b1803" path="/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes" Dec 10 12:15:58 crc kubenswrapper[4689]: I1210 12:15:58.521529 4689 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5441d097-087c-4d9a-baa8-b210afa90fc9" path="/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes" Dec 10 12:15:58 crc kubenswrapper[4689]: I1210 12:15:58.522142 4689 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57a731c4-ef35-47a8-b875-bfb08a7f8011" path="/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes" Dec 10 12:15:58 crc kubenswrapper[4689]: I1210 12:15:58.522854 4689 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b88f790-22fa-440e-b583-365168c0b23d" path="/var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/volumes" Dec 10 12:15:58 crc kubenswrapper[4689]: I1210 12:15:58.524071 4689 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fe579f8-e8a6-4643-bce5-a661393c4dde" path="/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/volumes" Dec 10 12:15:58 crc kubenswrapper[4689]: I1210 12:15:58.524586 4689 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6402fda4-df10-493c-b4e5-d0569419652d" 
path="/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes" Dec 10 12:15:58 crc kubenswrapper[4689]: I1210 12:15:58.527137 4689 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6509e943-70c6-444c-bc41-48a544e36fbd" path="/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes" Dec 10 12:15:58 crc kubenswrapper[4689]: I1210 12:15:58.528171 4689 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6731426b-95fe-49ff-bb5f-40441049fde2" path="/var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/volumes" Dec 10 12:15:58 crc kubenswrapper[4689]: I1210 12:15:58.529087 4689 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volume-subpaths/run-systemd/ovnkube-controller/6" Dec 10 12:15:58 crc kubenswrapper[4689]: I1210 12:15:58.529276 4689 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volumes" Dec 10 12:15:58 crc kubenswrapper[4689]: I1210 12:15:58.532788 4689 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7539238d-5fe0-46ed-884e-1c3b566537ec" path="/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes" Dec 10 12:15:58 crc kubenswrapper[4689]: I1210 12:15:58.533841 4689 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7583ce53-e0fe-4a16-9e4d-50516596a136" path="/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes" Dec 10 12:15:58 crc kubenswrapper[4689]: I1210 12:15:58.534561 4689 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bb08738-c794-4ee8-9972-3a62ca171029" path="/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes" Dec 10 12:15:58 crc kubenswrapper[4689]: I1210 12:15:58.536506 4689 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87cf06ed-a83f-41a7-828d-70653580a8cb" path="/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes" Dec 10 12:15:58 crc kubenswrapper[4689]: I1210 12:15:58.539085 4689 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" path="/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes" Dec 10 12:15:58 crc kubenswrapper[4689]: I1210 12:15:58.539730 4689 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="925f1c65-6136-48ba-85aa-3a3b50560753" path="/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes" Dec 10 12:15:58 crc kubenswrapper[4689]: I1210 12:15:58.541499 4689 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" path="/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/volumes" Dec 10 12:15:58 crc kubenswrapper[4689]: I1210 12:15:58.542421 4689 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d4552c7-cd75-42dd-8880-30dd377c49a4" path="/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes" Dec 10 12:15:58 crc kubenswrapper[4689]: I1210 12:15:58.543583 4689 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" path="/var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/volumes" Dec 10 12:15:58 crc kubenswrapper[4689]: I1210 12:15:58.544392 4689 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a31745f5-9847-4afe-82a5-3161cc66ca93" 
path="/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes" Dec 10 12:15:58 crc kubenswrapper[4689]: I1210 12:15:58.545693 4689 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" path="/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes" Dec 10 12:15:58 crc kubenswrapper[4689]: I1210 12:15:58.546430 4689 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6312bbd-5731-4ea0-a20f-81d5a57df44a" path="/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/volumes" Dec 10 12:15:58 crc kubenswrapper[4689]: I1210 12:15:58.547413 4689 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" path="/var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes" Dec 10 12:15:58 crc kubenswrapper[4689]: I1210 12:15:58.547933 4689 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" path="/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes" Dec 10 12:15:58 crc kubenswrapper[4689]: I1210 12:15:58.549191 4689 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" path="/var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/volumes" Dec 10 12:15:58 crc kubenswrapper[4689]: I1210 12:15:58.549937 4689 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf126b07-da06-4140-9a57-dfd54fc6b486" path="/var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes" Dec 10 12:15:58 crc kubenswrapper[4689]: I1210 12:15:58.551004 4689 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c03ee662-fb2f-4fc4-a2c1-af487c19d254" path="/var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes" Dec 10 12:15:58 crc kubenswrapper[4689]: I1210 12:15:58.551762 4689 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" path="/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/volumes" Dec 10 12:15:58 crc kubenswrapper[4689]: I1210 12:15:58.552628 4689 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7e6199b-1264-4501-8953-767f51328d08" path="/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes" Dec 10 12:15:58 crc kubenswrapper[4689]: I1210 12:15:58.553308 4689 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efdd0498-1daa-4136-9a4a-3b948c2293fc" path="/var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/volumes" Dec 10 12:15:58 crc kubenswrapper[4689]: I1210 12:15:58.554172 4689 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" path="/var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/volumes" Dec 10 12:15:58 crc kubenswrapper[4689]: I1210 12:15:58.554964 4689 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fda69060-fa79-4696-b1a6-7980f124bf7c" path="/var/lib/kubelet/pods/fda69060-fa79-4696-b1a6-7980f124bf7c/volumes" Dec 10 12:15:58 crc kubenswrapper[4689]: I1210 12:15:58.989596 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"ef7248d1db79267842514a13623e9e08f8e412e4af33a70b4fceafe7a9e43392"} Dec 10 12:15:58 crc kubenswrapper[4689]: I1210 12:15:58.994101 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"06f8726511d5ea564b457eeb6de5b75275d78ee946595d8fbe37e575f3a1b4ef"} Dec 10 12:15:58 crc kubenswrapper[4689]: I1210 12:15:58.994190 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"879d2e122b3f677f755d79b93cd0abed60360a7b58bd685fbc229157c3358bbf"} Dec 10 12:15:58 crc kubenswrapper[4689]: I1210 12:15:58.994201 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"947417047f6e81afda956120e119fc70a001b9becdd3f9c007060d440e589a97"} Dec 10 12:15:58 crc kubenswrapper[4689]: I1210 12:15:58.995685 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"3b6434be7c40b1e47fbf0e94cd931883e288ac59727bc677f216f1c6829267cb"} Dec 10 12:15:58 crc kubenswrapper[4689]: I1210 12:15:58.999257 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Dec 10 12:15:59 crc kubenswrapper[4689]: I1210 12:15:59.005476 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"5d3818f3ff6d23a4b5f19b82c3ce7c6a9a0275f981966b78d0f1d5977acfc5da"} Dec 10 12:15:59 crc kubenswrapper[4689]: I1210 12:15:59.005631 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 10 12:15:59 crc kubenswrapper[4689]: I1210 12:15:59.011774 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"948e2421-6bdf-45d9-b484-3d7cfcbeff5f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:32Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73fb5dc9c58a2a2027866853aecf404bcc2babdb9e5bbd3cf0efcb16f38ba351\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e167a00c3a4e40154ca58094cba41a896104fd81044158589ed4d7a06dba9e7d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://394f62bb6c4fa6d9b63f8404087ff96594e2e38240c6c7886ad71fbb7863681f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc1b41d65298f8986fa477bbe4462be91f61b46be4420fac49fe9b1cff91d4d3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cc1b41d65298f8986fa477bbe4462be91f61b46be4420fac49fe9b1cff91d4d3\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-10
T12:15:57Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1210 12:15:46.775770 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1210 12:15:46.777515 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-772356655/tls.crt::/tmp/serving-cert-772356655/tls.key\\\\\\\"\\\\nI1210 12:15:55.300683 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1210 12:15:55.311834 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1210 12:15:55.311870 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1210 12:15:55.311907 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1210 12:15:55.311937 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1210 12:15:55.334823 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1210 12:15:55.334877 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1210 12:15:55.334889 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1210 12:15:55.334913 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1210 12:15:55.334923 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1210 12:15:55.334932 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1210 12:15:55.334948 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1210 12:15:55.335346 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1210 12:15:55.337241 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-10T12:15:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://63c6b92fa349dafbaaa4e2700676f8c1c53627fc703dba25853c7457bf7eb439\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:34Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9e2628add435f4ba7d9b92d1f4e9cf619c37e8dc954d6fce13ef72581ae775e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9e2628add435f4ba7d9b92d1f4e9cf619c37e8dc954d6fce13ef72581ae775e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T12:15:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T12:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T12:15:32Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:15:59Z is after 2025-08-24T17:21:41Z" Dec 10 12:15:59 crc kubenswrapper[4689]: I1210 12:15:59.029212 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"18bf8be2-3022-4c8b-a062-e12ddac113a8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://44524b486bd4c6b69eb1219605d0545eaa129605484dfbca87ca741ce71c2991\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://604acbb51bb890eb420955051c89aa5a8f2005ce63f1fe39c09c35dc3c8b4470\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d4addfb661bd4ba43817e6e5db29f30589839e5afe46a8f271e43c98920efdc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a68967b286ccc9c486e05b03201574bfa811ee68b0cbcbb2b0b1a379f5c1bab\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T12:15:32Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:15:59Z is after 2025-08-24T17:21:41Z" Dec 10 12:15:59 crc kubenswrapper[4689]: I1210 12:15:59.044740 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:57Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:15:59Z is after 2025-08-24T17:21:41Z" Dec 10 12:15:59 crc kubenswrapper[4689]: I1210 12:15:59.067561 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:57Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:15:59Z is after 2025-08-24T17:21:41Z" Dec 10 12:15:59 crc kubenswrapper[4689]: I1210 12:15:59.090015 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef7248d1db79267842514a13623e9e08f8e412e4af33a70b4fceafe7a9e43392\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:15:59Z is after 2025-08-24T17:21:41Z" Dec 10 12:15:59 crc kubenswrapper[4689]: I1210 12:15:59.110823 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:57Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:15:59Z is after 2025-08-24T17:21:41Z" Dec 10 12:15:59 crc kubenswrapper[4689]: I1210 12:15:59.119535 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 10 12:15:59 crc kubenswrapper[4689]: E1210 12:15:59.119697 4689 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-10 12:16:01.119674496 +0000 UTC m=+28.907755644 (durationBeforeRetry 2s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 12:15:59 crc kubenswrapper[4689]: I1210 12:15:59.119746 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 10 12:15:59 crc kubenswrapper[4689]: I1210 12:15:59.119832 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 10 12:15:59 crc kubenswrapper[4689]: E1210 12:15:59.119905 4689 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 10 12:15:59 crc kubenswrapper[4689]: E1210 12:15:59.119945 4689 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 10 12:15:59 crc kubenswrapper[4689]: E1210 12:15:59.120011 4689 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-10 12:16:01.119991213 +0000 UTC m=+28.908072391 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 10 12:15:59 crc kubenswrapper[4689]: E1210 12:15:59.120033 4689 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-10 12:16:01.120023484 +0000 UTC m=+28.908104722 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 10 12:15:59 crc kubenswrapper[4689]: I1210 12:15:59.125669 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:15:59Z is after 2025-08-24T17:21:41Z" Dec 10 12:15:59 crc kubenswrapper[4689]: I1210 12:15:59.141434 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:57Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:15:59Z is after 2025-08-24T17:21:41Z" Dec 10 12:15:59 crc kubenswrapper[4689]: I1210 12:15:59.156086 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"948e2421-6bdf-45d9-b484-3d7cfcbeff5f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:32Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73fb5dc9c58a2a2027866853aecf404bcc2babdb9e5bbd3cf0efcb16f38ba351\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e167a00c3a4e40154ca58094cba41a896104fd81044158589ed4d7a06dba9e7d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://394f62bb6c4fa6d9b63f8404087ff96594e2e38240c6c7886ad71fbb7863681f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d3818f3ff6d23a4b5f19b82c3ce7c6a9a0275f981966b78d0f1d5977acfc5da\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cc1b41d65298f8986fa477bbe4462be91f61b46be4420fac49fe9b1cff91d4d3\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-10T12:15:57Z\\\",\\\"message\\\":\\\"lling back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1210 12:15:46.775770 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1210 12:15:46.777515 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-772356655/tls.crt::/tmp/serving-cert-772356655/tls.key\\\\\\\"\\\\nI1210 12:15:55.300683 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1210 12:15:55.311834 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1210 12:15:55.311870 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1210 12:15:55.311907 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1210 12:15:55.311937 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1210 12:15:55.334823 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1210 12:15:55.334877 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1210 12:15:55.334889 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1210 12:15:55.334913 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1210 12:15:55.334923 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1210 12:15:55.334932 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1210 12:15:55.334948 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1210 12:15:55.335346 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1210 12:15:55.337241 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-10T12:15:36Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://63c6b92fa349dafbaaa4e2700676f8c1c53627fc703dba25853c7457bf7eb439\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:34Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9e2628add435f4ba7d9b92d1f4e9cf619c37e8dc954d6fce13ef72581ae775e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9e2628add435f4ba7d9b92d1f4e9cf619c37e8dc954d6fce13ef72581ae775e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T12:15:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T12:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T12:15:32Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:15:59Z is after 2025-08-24T17:21:41Z" Dec 10 12:15:59 crc kubenswrapper[4689]: I1210 12:15:59.174786 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"18bf8be2-3022-4c8b-a062-e12ddac113a8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://44524b486bd4c6b69eb1219605d0545eaa129605484dfbca87ca741ce71c2991\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://604acbb51bb890eb420955051c89aa5a8f2005ce63f1fe39c09c35dc3c8b4470\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d4addfb661bd4ba43817e6e5db29f30589839e5afe46a8f271e43c98920efdc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a68967b286ccc9c486e05b03201574bfa811ee68b0cbcbb2b0b1a379f5c1bab\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T12:15:32Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:15:59Z is after 2025-08-24T17:21:41Z" Dec 10 12:15:59 crc kubenswrapper[4689]: I1210 12:15:59.187557 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:57Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:15:59Z is after 2025-08-24T17:21:41Z" Dec 10 12:15:59 crc kubenswrapper[4689]: I1210 12:15:59.205215 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:57Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:15:59Z is after 2025-08-24T17:21:41Z" Dec 10 12:15:59 crc kubenswrapper[4689]: I1210 12:15:59.220417 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef7248d1db79267842514a13623e9e08f8e412e4af33a70b4fceafe7a9e43392\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:15:59Z is after 2025-08-24T17:21:41Z" Dec 10 12:15:59 crc kubenswrapper[4689]: I1210 12:15:59.220605 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod 
\"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 10 12:15:59 crc kubenswrapper[4689]: I1210 12:15:59.220645 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 10 12:15:59 crc kubenswrapper[4689]: E1210 12:15:59.220773 4689 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 10 12:15:59 crc kubenswrapper[4689]: E1210 12:15:59.220778 4689 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 10 12:15:59 crc kubenswrapper[4689]: E1210 12:15:59.220791 4689 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 10 12:15:59 crc kubenswrapper[4689]: E1210 12:15:59.220800 4689 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 10 12:15:59 crc kubenswrapper[4689]: E1210 12:15:59.220805 4689 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 10 12:15:59 crc kubenswrapper[4689]: E1210 12:15:59.220814 4689 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 10 12:15:59 crc kubenswrapper[4689]: E1210 12:15:59.220854 4689 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-12-10 12:16:01.220839566 +0000 UTC m=+29.008920704 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 10 12:15:59 crc kubenswrapper[4689]: E1210 12:15:59.220870 4689 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-12-10 12:16:01.220864397 +0000 UTC m=+29.008945535 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 10 12:15:59 crc kubenswrapper[4689]: I1210 12:15:59.231795 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06f8726511d5ea564b457eeb6de5b75275d78ee946595d8fbe37e575f3a1b4ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://879d2e122b3f677f755d79b93cd0abed60360a7b58bd685fbc229157c3358bbf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: 
failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:15:59Z is after 2025-08-24T17:21:41Z" Dec 10 12:15:59 crc kubenswrapper[4689]: I1210 12:15:59.241496 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:15:59Z is after 2025-08-24T17:21:41Z" Dec 10 12:15:59 crc kubenswrapper[4689]: I1210 12:15:59.252664 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:15:59Z is after 2025-08-24T17:21:41Z" Dec 10 12:15:59 crc kubenswrapper[4689]: I1210 12:15:59.497297 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 10 12:15:59 crc kubenswrapper[4689]: I1210 12:15:59.497343 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 10 12:15:59 crc kubenswrapper[4689]: E1210 12:15:59.497452 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 10 12:15:59 crc kubenswrapper[4689]: E1210 12:15:59.497536 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 10 12:16:00 crc kubenswrapper[4689]: I1210 12:16:00.497966 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 10 12:16:00 crc kubenswrapper[4689]: E1210 12:16:00.498189 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 10 12:16:00 crc kubenswrapper[4689]: I1210 12:16:00.885324 4689 csr.go:261] certificate signing request csr-sfsdb is approved, waiting to be issued Dec 10 12:16:00 crc kubenswrapper[4689]: I1210 12:16:00.896353 4689 csr.go:257] certificate signing request csr-sfsdb is issued Dec 10 12:16:01 crc kubenswrapper[4689]: I1210 12:16:01.009364 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"81ec49dd709fb88fdd4a831be61a82471be37ad10b0dca1cb0e34468ef5e0e3b"} Dec 10 12:16:01 crc kubenswrapper[4689]: I1210 12:16:01.021395 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"948e2421-6bdf-45d9-b484-3d7cfcbeff5f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:32Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:32Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73fb5dc9c58a2a2027866853aecf404bcc2babdb9e5bbd3cf0efcb16f38ba351\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e167a00c3a4e40154ca58094cba41a896104fd81044158589ed4d7a06dba9e7d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\
":{\\\"startedAt\\\":\\\"2025-12-10T12:15:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://394f62bb6c4fa6d9b63f8404087ff96594e2e38240c6c7886ad71fbb7863681f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d3818f3ff6d23a4b5f19b82c3ce7c6a9a0275f981966b78d0f1d5977acfc5da\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cc1b41d65298f8986fa477bbe4462be91f61b46be4420fac49fe9b1cff91d4d3\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-10T12:15:57Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1210 12:15:46.775770 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1210 12:15:46.777515 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-772356655/tls.crt::/tmp/serving-cert-772356655/tls.key\\\\\\\"\\\\nI1210 12:15:55.300683 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1210 12:15:55.311834 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1210 12:15:55.311870 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1210 12:15:55.311907 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1210 12:15:55.311937 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1210 12:15:55.334823 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1210 12:15:55.334877 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1210 12:15:55.334889 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1210 12:15:55.334913 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1210 12:15:55.334923 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1210 12:15:55.334932 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1210 12:15:55.334948 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' 
detected.\\\\nI1210 12:15:55.335346 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1210 12:15:55.337241 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-10T12:15:36Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://63c6b92fa349dafbaaa4e2700676f8c1c53627fc703dba25853c7457bf7eb439\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:34Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9e2628add435f4ba7d9b92d1f4e9cf619c37e8dc954d6fce13ef72581ae775e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9e2628add435f4ba7d9b92d1f4e9cf619c37e8dc954d6fce13ef72581ae775e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T12:15:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T12:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T12:15:32Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:01Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:01 crc kubenswrapper[4689]: I1210 12:16:01.041367 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"18bf8be2-3022-4c8b-a062-e12ddac113a8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://44524b486bd4c6b69eb1219605d0545eaa129605484dfbca87ca741ce71c2991\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://604acbb51bb890eb420955051c89aa5a8f2005ce63f1fe39c09c35dc3c8b4470\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d4addfb661bd4ba43817e6e5db29f30589839e5afe46a8f271e43c98920efdc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a68967b286ccc9c486e05b03201574bfa811ee68b0cbcbb2b0b1a379f5c1bab\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T12:15:32Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:01Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:01 crc kubenswrapper[4689]: I1210 12:16:01.056021 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81ec49dd709fb88fdd4a831be61a82471be37ad10b0dca1cb0e34468ef5e0e3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:16:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate 
has expired or is not yet valid: current time 2025-12-10T12:16:01Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:01 crc kubenswrapper[4689]: I1210 12:16:01.071538 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:57Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:01Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:01 crc kubenswrapper[4689]: I1210 12:16:01.082910 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef7248d1db79267842514a13623e9e08f8e412e4af33a70b4fceafe7a9e43392\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:01Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:01 crc kubenswrapper[4689]: I1210 12:16:01.092987 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06f8726511d5ea564b457eeb6de5b75275d78ee946595d8fbe37e575f3a1b4ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://879d2e122b3f677f755d79b93cd0abed60360a7b58bd685fbc229157c3358bbf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:01Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:01 crc kubenswrapper[4689]: I1210 12:16:01.104500 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:01Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:01 crc kubenswrapper[4689]: I1210 12:16:01.117015 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:01Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:01 crc kubenswrapper[4689]: I1210 12:16:01.135242 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 10 12:16:01 crc kubenswrapper[4689]: I1210 12:16:01.135361 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 10 12:16:01 crc kubenswrapper[4689]: I1210 12:16:01.135390 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 10 12:16:01 crc kubenswrapper[4689]: E1210 12:16:01.135417 4689 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-10 12:16:05.135386394 +0000 UTC m=+32.923467532 (durationBeforeRetry 4s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 12:16:01 crc kubenswrapper[4689]: E1210 12:16:01.135477 4689 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 10 12:16:01 crc kubenswrapper[4689]: E1210 12:16:01.135547 4689 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 10 12:16:01 crc kubenswrapper[4689]: E1210 12:16:01.135553 4689 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-10 12:16:05.135535867 +0000 UTC m=+32.923617015 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 10 12:16:01 crc kubenswrapper[4689]: E1210 12:16:01.135622 4689 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-10 12:16:05.135609009 +0000 UTC m=+32.923690247 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 10 12:16:01 crc kubenswrapper[4689]: I1210 12:16:01.236167 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 10 12:16:01 crc kubenswrapper[4689]: I1210 12:16:01.236215 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 10 12:16:01 crc kubenswrapper[4689]: E1210 12:16:01.236335 4689 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 10 12:16:01 crc kubenswrapper[4689]: E1210 12:16:01.236367 4689 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 10 12:16:01 crc kubenswrapper[4689]: E1210 12:16:01.236382 4689 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 10 12:16:01 crc kubenswrapper[4689]: E1210 12:16:01.236335 4689 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 10 12:16:01 crc kubenswrapper[4689]: E1210 12:16:01.236454 4689 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 10 12:16:01 crc kubenswrapper[4689]: E1210 12:16:01.236466 4689 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 10 12:16:01 crc kubenswrapper[4689]: E1210 12:16:01.236434 4689 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-12-10 12:16:05.236417391 +0000 UTC m=+33.024498529 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 10 12:16:01 crc kubenswrapper[4689]: E1210 12:16:01.236504 4689 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-12-10 12:16:05.236493712 +0000 UTC m=+33.024574850 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 10 12:16:01 crc kubenswrapper[4689]: I1210 12:16:01.374895 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/node-resolver-77s7q"] Dec 10 12:16:01 crc kubenswrapper[4689]: I1210 12:16:01.375226 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-daemon-db6zk"] Dec 10 12:16:01 crc kubenswrapper[4689]: I1210 12:16:01.375425 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-77s7q" Dec 10 12:16:01 crc kubenswrapper[4689]: I1210 12:16:01.375548 4689 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-db6zk" Dec 10 12:16:01 crc kubenswrapper[4689]: I1210 12:16:01.377096 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Dec 10 12:16:01 crc kubenswrapper[4689]: I1210 12:16:01.377278 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Dec 10 12:16:01 crc kubenswrapper[4689]: I1210 12:16:01.377284 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Dec 10 12:16:01 crc kubenswrapper[4689]: I1210 12:16:01.377422 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Dec 10 12:16:01 crc kubenswrapper[4689]: I1210 12:16:01.377423 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Dec 10 12:16:01 crc kubenswrapper[4689]: I1210 12:16:01.377509 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Dec 10 12:16:01 crc kubenswrapper[4689]: I1210 12:16:01.377946 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Dec 10 12:16:01 crc kubenswrapper[4689]: I1210 12:16:01.380619 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Dec 10 12:16:01 crc kubenswrapper[4689]: I1210 12:16:01.392700 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:57Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:01Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:01 crc kubenswrapper[4689]: I1210 12:16:01.403862 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef7248d1db79267842514a13623e9e08f8e412e4af33a70b4fceafe7a9e43392\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:01Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:01 crc kubenswrapper[4689]: I1210 12:16:01.418648 4689 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06f8726511d5ea564b457eeb6de5b75275d78ee946595d8fbe37e575f3a1b4ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://879d2e122b3f677f755d79b93cd0abed60360a7b58bd685fbc229157c3358bbf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:01Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:01 crc kubenswrapper[4689]: I1210 12:16:01.434305 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:01Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:01 crc kubenswrapper[4689]: I1210 12:16:01.437743 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/a41ebdcd-910f-4669-992d-296e1a92162b-proxy-tls\") pod \"machine-config-daemon-db6zk\" (UID: \"a41ebdcd-910f-4669-992d-296e1a92162b\") " pod="openshift-machine-config-operator/machine-config-daemon-db6zk" Dec 10 12:16:01 crc kubenswrapper[4689]: I1210 12:16:01.437789 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/a41ebdcd-910f-4669-992d-296e1a92162b-rootfs\") pod \"machine-config-daemon-db6zk\" (UID: \"a41ebdcd-910f-4669-992d-296e1a92162b\") " pod="openshift-machine-config-operator/machine-config-daemon-db6zk" Dec 10 12:16:01 crc kubenswrapper[4689]: I1210 12:16:01.437805 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/a41ebdcd-910f-4669-992d-296e1a92162b-mcd-auth-proxy-config\") pod \"machine-config-daemon-db6zk\" (UID: \"a41ebdcd-910f-4669-992d-296e1a92162b\") " pod="openshift-machine-config-operator/machine-config-daemon-db6zk" Dec 10 12:16:01 crc kubenswrapper[4689]: I1210 12:16:01.437828 4689 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kwnnt\" (UniqueName: \"kubernetes.io/projected/9f25cd73-d88f-4d52-93b6-483589dc4ac4-kube-api-access-kwnnt\") pod \"node-resolver-77s7q\" (UID: \"9f25cd73-d88f-4d52-93b6-483589dc4ac4\") " pod="openshift-dns/node-resolver-77s7q" Dec 10 12:16:01 crc kubenswrapper[4689]: I1210 12:16:01.437853 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-68btn\" (UniqueName: \"kubernetes.io/projected/a41ebdcd-910f-4669-992d-296e1a92162b-kube-api-access-68btn\") pod \"machine-config-daemon-db6zk\" (UID: \"a41ebdcd-910f-4669-992d-296e1a92162b\") " pod="openshift-machine-config-operator/machine-config-daemon-db6zk" Dec 10 12:16:01 crc kubenswrapper[4689]: I1210 12:16:01.437872 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/9f25cd73-d88f-4d52-93b6-483589dc4ac4-hosts-file\") pod \"node-resolver-77s7q\" (UID: \"9f25cd73-d88f-4d52-93b6-483589dc4ac4\") " pod="openshift-dns/node-resolver-77s7q" Dec 10 12:16:01 crc kubenswrapper[4689]: I1210 12:16:01.450730 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-77s7q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9f25cd73-d88f-4d52-93b6-483589dc4ac4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:01Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:01Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kwnnt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T12:16:01Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-77s7q\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:01Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:01 crc kubenswrapper[4689]: I1210 12:16:01.463084 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-db6zk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a41ebdcd-910f-4669-992d-296e1a92162b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:01Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:01Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68btn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68btn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T12:16:01Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-db6zk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:01Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:01 crc kubenswrapper[4689]: I1210 12:16:01.475071 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"948e2421-6bdf-45d9-b484-3d7cfcbeff5f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:32Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:32Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73fb5dc9c58a2a2027866853aecf404bcc2babdb9e5bbd3cf0efcb16f38ba351\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e167a00c3a4e40154ca58094cba41a896104fd81044158589ed4d7a06dba9e7d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://394f62bb6c4fa6d9b63f8404087ff96594e2e38240c6c7886ad71fbb7863681f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e2
7753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d3818f3ff6d23a4b5f19b82c3ce7c6a9a0275f981966b78d0f1d5977acfc5da\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cc1b41d65298f8986fa477bbe4462be91f61b46be4420fac49fe9b1cff91d4d3\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-10T12:15:57Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1210 12:15:46.775770 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1210 12:15:46.777515 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-772356655/tls.crt::/tmp/serving-cert-772356655/tls.key\\\\\\\"\\\\nI1210 12:15:55.300683 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1210 12:15:55.311834 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1210 12:15:55.311870 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1210 12:15:55.311907 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1210 12:15:55.311937 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1210 12:15:55.334823 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1210 12:15:55.334877 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1210 12:15:55.334889 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1210 12:15:55.334913 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1210 12:15:55.334923 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1210 12:15:55.334932 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1210 12:15:55.334948 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1210 12:15:55.335346 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1210 12:15:55.337241 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-10T12:15:36Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://63c6b92fa349dafbaaa4e2700676f8c1c53627fc703dba25853c7457bf7eb439\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:34Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9e2628add435f4ba7d9b92d1f4e9cf619c37e8dc954d6fce13ef72581ae775e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9e2628add435f4ba7d9b92d1f4e9cf619c37e8dc954d6fce13ef72581ae775e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T12:15:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T12:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T12:15:32Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:01Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:01 crc kubenswrapper[4689]: I1210 12:16:01.485378 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"18bf8be2-3022-4c8b-a062-e12ddac113a8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://44524b486bd4c6b69eb1219605d0545eaa129605484dfbca87ca741ce71c2991\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://604acbb51bb890eb420955051c89aa5a8f2005ce63f1fe39c09c35dc3c8b4470\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d4addfb661bd4ba43817e6e5db29f30589839e5afe46a8f271e43c98920efdc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a68967b286ccc9c486e05b03201574bfa811ee68b0cbcbb2b0b1a379f5c1bab\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T12:15:32Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:01Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:01 crc kubenswrapper[4689]: I1210 12:16:01.495259 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81ec49dd709fb88fdd4a831be61a82471be37ad10b0dca1cb0e34468ef5e0e3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:16:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate 
has expired or is not yet valid: current time 2025-12-10T12:16:01Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:01 crc kubenswrapper[4689]: I1210 12:16:01.497288 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 10 12:16:01 crc kubenswrapper[4689]: I1210 12:16:01.497324 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 10 12:16:01 crc kubenswrapper[4689]: E1210 12:16:01.497392 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 10 12:16:01 crc kubenswrapper[4689]: E1210 12:16:01.497554 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 10 12:16:01 crc kubenswrapper[4689]: I1210 12:16:01.508141 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:57Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:01Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:01 crc kubenswrapper[4689]: I1210 12:16:01.518102 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef7248d1db79267842514a13623e9e08f8e412e4af33a70b4fceafe7a9e43392\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:01Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:01 crc kubenswrapper[4689]: I1210 12:16:01.535445 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06f8726511d5ea564b457eeb6de5b75275d78ee946595d8fbe37e575f3a1b4ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://879d2e122b3f677f755d79b93cd0abed60360a7b58bd685fbc229157c3358bbf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:01Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:01 crc kubenswrapper[4689]: I1210 12:16:01.538691 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/a41ebdcd-910f-4669-992d-296e1a92162b-proxy-tls\") pod \"machine-config-daemon-db6zk\" (UID: \"a41ebdcd-910f-4669-992d-296e1a92162b\") " pod="openshift-machine-config-operator/machine-config-daemon-db6zk" Dec 10 12:16:01 crc kubenswrapper[4689]: I1210 12:16:01.538739 4689 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/a41ebdcd-910f-4669-992d-296e1a92162b-rootfs\") pod \"machine-config-daemon-db6zk\" (UID: \"a41ebdcd-910f-4669-992d-296e1a92162b\") " pod="openshift-machine-config-operator/machine-config-daemon-db6zk" Dec 10 12:16:01 crc kubenswrapper[4689]: I1210 12:16:01.538754 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/a41ebdcd-910f-4669-992d-296e1a92162b-mcd-auth-proxy-config\") pod \"machine-config-daemon-db6zk\" (UID: \"a41ebdcd-910f-4669-992d-296e1a92162b\") " pod="openshift-machine-config-operator/machine-config-daemon-db6zk" Dec 10 12:16:01 crc kubenswrapper[4689]: I1210 12:16:01.538780 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kwnnt\" (UniqueName: \"kubernetes.io/projected/9f25cd73-d88f-4d52-93b6-483589dc4ac4-kube-api-access-kwnnt\") pod \"node-resolver-77s7q\" (UID: \"9f25cd73-d88f-4d52-93b6-483589dc4ac4\") " pod="openshift-dns/node-resolver-77s7q" Dec 10 12:16:01 crc kubenswrapper[4689]: I1210 12:16:01.538802 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-68btn\" (UniqueName: \"kubernetes.io/projected/a41ebdcd-910f-4669-992d-296e1a92162b-kube-api-access-68btn\") pod \"machine-config-daemon-db6zk\" (UID: \"a41ebdcd-910f-4669-992d-296e1a92162b\") " pod="openshift-machine-config-operator/machine-config-daemon-db6zk" Dec 10 12:16:01 crc kubenswrapper[4689]: I1210 12:16:01.538817 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/9f25cd73-d88f-4d52-93b6-483589dc4ac4-hosts-file\") pod \"node-resolver-77s7q\" (UID: \"9f25cd73-d88f-4d52-93b6-483589dc4ac4\") " pod="openshift-dns/node-resolver-77s7q" Dec 10 12:16:01 crc kubenswrapper[4689]: I1210 12:16:01.538868 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/a41ebdcd-910f-4669-992d-296e1a92162b-rootfs\") pod \"machine-config-daemon-db6zk\" (UID: \"a41ebdcd-910f-4669-992d-296e1a92162b\") " pod="openshift-machine-config-operator/machine-config-daemon-db6zk" Dec 10 12:16:01 crc kubenswrapper[4689]: I1210 12:16:01.538882 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/9f25cd73-d88f-4d52-93b6-483589dc4ac4-hosts-file\") pod \"node-resolver-77s7q\" (UID: \"9f25cd73-d88f-4d52-93b6-483589dc4ac4\") " pod="openshift-dns/node-resolver-77s7q" Dec 10 12:16:01 crc kubenswrapper[4689]: I1210 12:16:01.539448 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/a41ebdcd-910f-4669-992d-296e1a92162b-mcd-auth-proxy-config\") pod \"machine-config-daemon-db6zk\" (UID: \"a41ebdcd-910f-4669-992d-296e1a92162b\") " pod="openshift-machine-config-operator/machine-config-daemon-db6zk" Dec 10 12:16:01 crc kubenswrapper[4689]: I1210 12:16:01.544491 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/a41ebdcd-910f-4669-992d-296e1a92162b-proxy-tls\") pod \"machine-config-daemon-db6zk\" (UID: \"a41ebdcd-910f-4669-992d-296e1a92162b\") " pod="openshift-machine-config-operator/machine-config-daemon-db6zk" Dec 10 12:16:01 crc kubenswrapper[4689]: I1210 12:16:01.552141 4689 
status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:01Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:01 crc kubenswrapper[4689]: I1210 12:16:01.563666 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-68btn\" (UniqueName: \"kubernetes.io/projected/a41ebdcd-910f-4669-992d-296e1a92162b-kube-api-access-68btn\") pod \"machine-config-daemon-db6zk\" (UID: \"a41ebdcd-910f-4669-992d-296e1a92162b\") " pod="openshift-machine-config-operator/machine-config-daemon-db6zk" Dec 10 12:16:01 crc kubenswrapper[4689]: I1210 12:16:01.565424 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kwnnt\" (UniqueName: \"kubernetes.io/projected/9f25cd73-d88f-4d52-93b6-483589dc4ac4-kube-api-access-kwnnt\") pod \"node-resolver-77s7q\" (UID: \"9f25cd73-d88f-4d52-93b6-483589dc4ac4\") " pod="openshift-dns/node-resolver-77s7q" Dec 10 12:16:01 crc kubenswrapper[4689]: I1210 12:16:01.572815 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:57Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:01Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:01 crc kubenswrapper[4689]: I1210 12:16:01.588843 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"18bf8be2-3022-4c8b-a062-e12ddac113a8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://44524b486bd4c6b69eb1219605d0545eaa129605484dfbca87ca741ce71c2991\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://604acbb51bb890eb420955051c89aa5a8f2005ce63f1fe39c09c35dc3c8b4470\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d4addfb661bd4ba43817e6e5db29f30589839e5afe46a8f271e43c98920efdc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a68967b286ccc9c486e05b03201574bfa811ee68b0cbcbb2b0b1a379f5c1bab\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T12:15:32Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:01Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:01 crc kubenswrapper[4689]: I1210 12:16:01.599877 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81ec49dd709fb88fdd4a831be61a82471be37ad10b0dca1cb0e34468ef5e0e3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:16:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate 
has expired or is not yet valid: current time 2025-12-10T12:16:01Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:01 crc kubenswrapper[4689]: I1210 12:16:01.614629 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:57Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:01Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:01 crc kubenswrapper[4689]: I1210 12:16:01.624379 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-77s7q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9f25cd73-d88f-4d52-93b6-483589dc4ac4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:01Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kwnnt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T12:16:01Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-77s7q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:01Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:01 crc kubenswrapper[4689]: I1210 12:16:01.638032 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-db6zk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a41ebdcd-910f-4669-992d-296e1a92162b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:01Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:01Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68btn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68btn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T12:16:01Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-db6zk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:01Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:01 crc kubenswrapper[4689]: I1210 12:16:01.666600 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"948e2421-6bdf-45d9-b484-3d7cfcbeff5f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:32Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73fb5dc9c58a2a2027866853aecf404bcc2babdb9e5bbd3cf0efcb16f38ba351\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e167a00c3a4e40154ca58094cba41a896104fd81044158589ed4d7a06dba9e7d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://394f62bb6c4fa6d9b63f8404087ff96594e2e38240c6c7886ad71fbb7863681f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d3818f3ff6d23a4b5f19b82c3ce7c6a9a0275f981966b78d0f1d5977acfc5da\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cc1b41d65298f8986fa477bbe4462be91f61b46be4420fac49fe9b1cff91d4d3\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-10T12:15:57Z\\\",\\\"message\\\":\\\"lling back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1210 12:15:46.775770 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1210 12:15:46.777515 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-772356655/tls.crt::/tmp/serving-cert-772356655/tls.key\\\\\\\"\\\\nI1210 12:15:55.300683 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1210 12:15:55.311834 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1210 12:15:55.311870 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1210 12:15:55.311907 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1210 12:15:55.311937 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1210 12:15:55.334823 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1210 12:15:55.334877 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1210 12:15:55.334889 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1210 12:15:55.334913 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1210 12:15:55.334923 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1210 12:15:55.334932 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1210 12:15:55.334948 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1210 12:15:55.335346 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1210 12:15:55.337241 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-10T12:15:36Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://63c6b92fa349dafbaaa4e2700676f8c1c53627fc703dba25853c7457bf7eb439\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:34Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9e2628add435f4ba7d9b92d1f4e9cf619c37e8dc954d6fce13ef72581ae775e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9e2628add435f4ba7d9b92d1f4e9cf619c37e8dc954d6fce13ef72581ae775e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T12:15:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T12:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T12:15:32Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:01Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:01 crc kubenswrapper[4689]: I1210 12:16:01.668597 4689 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 10 12:16:01 crc kubenswrapper[4689]: I1210 12:16:01.670073 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:16:01 crc kubenswrapper[4689]: I1210 12:16:01.670419 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:16:01 crc kubenswrapper[4689]: I1210 12:16:01.670430 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:16:01 crc kubenswrapper[4689]: I1210 12:16:01.670517 4689 kubelet_node_status.go:76] "Attempting to register node" node="crc" Dec 10 12:16:01 crc 
kubenswrapper[4689]: I1210 12:16:01.688446 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-77s7q" Dec 10 12:16:01 crc kubenswrapper[4689]: I1210 12:16:01.689220 4689 kubelet_node_status.go:115] "Node was previously registered" node="crc" Dec 10 12:16:01 crc kubenswrapper[4689]: I1210 12:16:01.689404 4689 kubelet_node_status.go:79] "Successfully registered node" node="crc" Dec 10 12:16:01 crc kubenswrapper[4689]: I1210 12:16:01.690233 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:16:01 crc kubenswrapper[4689]: I1210 12:16:01.690284 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:16:01 crc kubenswrapper[4689]: I1210 12:16:01.690304 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:16:01 crc kubenswrapper[4689]: I1210 12:16:01.690324 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:16:01 crc kubenswrapper[4689]: I1210 12:16:01.690337 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:01Z","lastTransitionTime":"2025-12-10T12:16:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 12:16:01 crc kubenswrapper[4689]: I1210 12:16:01.695526 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-db6zk" Dec 10 12:16:01 crc kubenswrapper[4689]: W1210 12:16:01.711130 4689 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda41ebdcd_910f_4669_992d_296e1a92162b.slice/crio-17bdf391e2db11d27e0af2facbcb042d89e5a8a822125e9b4fb5f9907410589e WatchSource:0}: Error finding container 17bdf391e2db11d27e0af2facbcb042d89e5a8a822125e9b4fb5f9907410589e: Status 404 returned error can't find the container with id 17bdf391e2db11d27e0af2facbcb042d89e5a8a822125e9b4fb5f9907410589e Dec 10 12:16:01 crc kubenswrapper[4689]: I1210 12:16:01.746693 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-r6wmt"] Dec 10 12:16:01 crc kubenswrapper[4689]: I1210 12:16:01.747063 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-r6wmt" Dec 10 12:16:01 crc kubenswrapper[4689]: I1210 12:16:01.750214 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Dec 10 12:16:01 crc kubenswrapper[4689]: I1210 12:16:01.750638 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Dec 10 12:16:01 crc kubenswrapper[4689]: I1210 12:16:01.756309 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Dec 10 12:16:01 crc kubenswrapper[4689]: I1210 12:16:01.756866 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-additional-cni-plugins-k7kbl"] Dec 10 12:16:01 crc kubenswrapper[4689]: I1210 12:16:01.757534 4689 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-k7kbl" Dec 10 12:16:01 crc kubenswrapper[4689]: I1210 12:16:01.757854 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Dec 10 12:16:01 crc kubenswrapper[4689]: I1210 12:16:01.758041 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Dec 10 12:16:01 crc kubenswrapper[4689]: I1210 12:16:01.765609 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-d5s24"] Dec 10 12:16:01 crc kubenswrapper[4689]: W1210 12:16:01.765993 4689 reflector.go:561] object-"openshift-multus"/"default-cni-sysctl-allowlist": failed to list *v1.ConfigMap: configmaps "default-cni-sysctl-allowlist" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-multus": no relationship found between node 'crc' and this object Dec 10 12:16:01 crc kubenswrapper[4689]: E1210 12:16:01.766037 4689 reflector.go:158] "Unhandled Error" err="object-\"openshift-multus\"/\"default-cni-sysctl-allowlist\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"default-cni-sysctl-allowlist\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-multus\": no relationship found between node 'crc' and this object" logger="UnhandledError" Dec 10 12:16:01 crc kubenswrapper[4689]: I1210 12:16:01.766686 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-d5s24" Dec 10 12:16:01 crc kubenswrapper[4689]: I1210 12:16:01.779248 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Dec 10 12:16:01 crc kubenswrapper[4689]: E1210 12:16:01.779315 4689 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T12:16:01Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T12:16:01Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:01Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T12:16:01Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T12:16:01Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:01Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed2
1\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"488f3058-98a0-4e39-b016-529d0c992401\\\",\\\"systemUUID\\\":\\\"41a6821f-a04f-4a5d-a8e5-790b0745d6fd\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:01Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:01 crc kubenswrapper[4689]: I1210 12:16:01.783311 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81ec49dd709fb88fdd4a831be61a82471be37ad10b0dca1cb0e34468ef5e0e3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:16:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:01Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:01 crc kubenswrapper[4689]: I1210 12:16:01.787528 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Dec 10 12:16:01 crc kubenswrapper[4689]: I1210 12:16:01.787620 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Dec 10 12:16:01 crc kubenswrapper[4689]: I1210 12:16:01.787825 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Dec 10 12:16:01 crc kubenswrapper[4689]: I1210 12:16:01.787884 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Dec 10 12:16:01 crc kubenswrapper[4689]: I1210 12:16:01.787917 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Dec 10 12:16:01 crc kubenswrapper[4689]: I1210 12:16:01.788277 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Dec 10 12:16:01 crc kubenswrapper[4689]: I1210 12:16:01.788430 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Dec 10 12:16:01 crc kubenswrapper[4689]: I1210 12:16:01.793101 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:16:01 
crc kubenswrapper[4689]: I1210 12:16:01.793134 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:16:01 crc kubenswrapper[4689]: I1210 12:16:01.793180 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:16:01 crc kubenswrapper[4689]: I1210 12:16:01.793206 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:16:01 crc kubenswrapper[4689]: I1210 12:16:01.793220 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:01Z","lastTransitionTime":"2025-12-10T12:16:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 12:16:01 crc kubenswrapper[4689]: E1210 12:16:01.817077 4689 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T12:16:01Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T12:16:01Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:01Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T12:16:01Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T12:16:01Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:01Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"488f3058-98a0-4e39-b016-529d0c992401\\\",\\\"systemUUID\\\":\\\"41a6821f-a04f-4a5d-a8e5-790b0745d6fd\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:01Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:01 crc kubenswrapper[4689]: I1210 12:16:01.818295 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-77s7q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9f25cd73-d88f-4d52-93b6-483589dc4ac4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:01Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:01Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kwnnt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T12:16:01Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-77s7q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:01Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:01 crc kubenswrapper[4689]: I1210 12:16:01.819718 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:16:01 crc kubenswrapper[4689]: I1210 12:16:01.819739 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:16:01 crc kubenswrapper[4689]: I1210 12:16:01.819747 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:16:01 crc kubenswrapper[4689]: I1210 12:16:01.819759 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:16:01 crc kubenswrapper[4689]: I1210 12:16:01.819768 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:01Z","lastTransitionTime":"2025-12-10T12:16:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI 
configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 12:16:01 crc kubenswrapper[4689]: I1210 12:16:01.829590 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef7248d1db79267842514a13623e9e08f8e412e4af33a70b4fceafe7a9e43392\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:01Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:01 crc kubenswrapper[4689]: E1210 12:16:01.833654 4689 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T12:16:01Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T12:16:01Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:01Z\\\",\\\"message\\\":\\\"kubelet has no disk 
pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T12:16:01Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T12:16:01Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:01Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeByt
es\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"488f3058-98a0-4e39-b016-529d0c992401\\\",\\\"systemUUID\\\":\\\"4
1a6821f-a04f-4a5d-a8e5-790b0745d6fd\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:01Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:01 crc kubenswrapper[4689]: I1210 12:16:01.836249 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:16:01 crc kubenswrapper[4689]: I1210 12:16:01.836274 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:16:01 crc kubenswrapper[4689]: I1210 12:16:01.836283 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:16:01 crc kubenswrapper[4689]: I1210 12:16:01.836295 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:16:01 crc kubenswrapper[4689]: I1210 12:16:01.836304 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:01Z","lastTransitionTime":"2025-12-10T12:16:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 12:16:01 crc kubenswrapper[4689]: I1210 12:16:01.840849 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:01Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:01 crc kubenswrapper[4689]: I1210 12:16:01.841156 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b434fe8e-c4c2-4979-a9b6-8561523c2d9d-host-cni-netd\") pod \"ovnkube-node-d5s24\" (UID: \"b434fe8e-c4c2-4979-a9b6-8561523c2d9d\") " pod="openshift-ovn-kubernetes/ovnkube-node-d5s24" Dec 10 12:16:01 crc kubenswrapper[4689]: I1210 12:16:01.841179 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/b434fe8e-c4c2-4979-a9b6-8561523c2d9d-host-cni-bin\") pod \"ovnkube-node-d5s24\" (UID: \"b434fe8e-c4c2-4979-a9b6-8561523c2d9d\") " pod="openshift-ovn-kubernetes/ovnkube-node-d5s24" Dec 10 12:16:01 crc kubenswrapper[4689]: I1210 12:16:01.841196 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6jq2j\" (UniqueName: \"kubernetes.io/projected/bf732b59-88ab-4673-9d0c-e479b9138b30-kube-api-access-6jq2j\") pod \"multus-additional-cni-plugins-k7kbl\" (UID: \"bf732b59-88ab-4673-9d0c-e479b9138b30\") " pod="openshift-multus/multus-additional-cni-plugins-k7kbl" Dec 10 12:16:01 crc kubenswrapper[4689]: I1210 12:16:01.841211 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/bf732b59-88ab-4673-9d0c-e479b9138b30-cni-binary-copy\") pod \"multus-additional-cni-plugins-k7kbl\" (UID: \"bf732b59-88ab-4673-9d0c-e479b9138b30\") " pod="openshift-multus/multus-additional-cni-plugins-k7kbl" Dec 10 12:16:01 crc kubenswrapper[4689]: I1210 12:16:01.841227 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/b434fe8e-c4c2-4979-a9b6-8561523c2d9d-host-slash\") pod \"ovnkube-node-d5s24\" (UID: \"b434fe8e-c4c2-4979-a9b6-8561523c2d9d\") " pod="openshift-ovn-kubernetes/ovnkube-node-d5s24" Dec 10 12:16:01 crc kubenswrapper[4689]: I1210 12:16:01.841240 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/b434fe8e-c4c2-4979-a9b6-8561523c2d9d-node-log\") pod \"ovnkube-node-d5s24\" (UID: \"b434fe8e-c4c2-4979-a9b6-8561523c2d9d\") " pod="openshift-ovn-kubernetes/ovnkube-node-d5s24" Dec 10 12:16:01 crc kubenswrapper[4689]: I1210 12:16:01.841263 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"kube-api-access-hgcnl\" (UniqueName: \"kubernetes.io/projected/b434fe8e-c4c2-4979-a9b6-8561523c2d9d-kube-api-access-hgcnl\") pod \"ovnkube-node-d5s24\" (UID: \"b434fe8e-c4c2-4979-a9b6-8561523c2d9d\") " pod="openshift-ovn-kubernetes/ovnkube-node-d5s24" Dec 10 12:16:01 crc kubenswrapper[4689]: I1210 12:16:01.841276 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/b434fe8e-c4c2-4979-a9b6-8561523c2d9d-log-socket\") pod \"ovnkube-node-d5s24\" (UID: \"b434fe8e-c4c2-4979-a9b6-8561523c2d9d\") " pod="openshift-ovn-kubernetes/ovnkube-node-d5s24" Dec 10 12:16:01 crc kubenswrapper[4689]: I1210 12:16:01.841291 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/b434fe8e-c4c2-4979-a9b6-8561523c2d9d-ovnkube-config\") pod \"ovnkube-node-d5s24\" (UID: \"b434fe8e-c4c2-4979-a9b6-8561523c2d9d\") " pod="openshift-ovn-kubernetes/ovnkube-node-d5s24" Dec 10 12:16:01 crc kubenswrapper[4689]: I1210 12:16:01.841314 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/b434fe8e-c4c2-4979-a9b6-8561523c2d9d-host-kubelet\") pod \"ovnkube-node-d5s24\" (UID: \"b434fe8e-c4c2-4979-a9b6-8561523c2d9d\") " pod="openshift-ovn-kubernetes/ovnkube-node-d5s24" Dec 10 12:16:01 crc kubenswrapper[4689]: I1210 12:16:01.841329 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b434fe8e-c4c2-4979-a9b6-8561523c2d9d-run-openvswitch\") pod \"ovnkube-node-d5s24\" (UID: \"b434fe8e-c4c2-4979-a9b6-8561523c2d9d\") " pod="openshift-ovn-kubernetes/ovnkube-node-d5s24" Dec 10 12:16:01 crc kubenswrapper[4689]: I1210 12:16:01.841343 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b434fe8e-c4c2-4979-a9b6-8561523c2d9d-host-run-ovn-kubernetes\") pod \"ovnkube-node-d5s24\" (UID: \"b434fe8e-c4c2-4979-a9b6-8561523c2d9d\") " pod="openshift-ovn-kubernetes/ovnkube-node-d5s24" Dec 10 12:16:01 crc kubenswrapper[4689]: I1210 12:16:01.841357 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/3713b4f8-2ee3-4078-859a-dca17076f9a6-host-var-lib-cni-multus\") pod \"multus-r6wmt\" (UID: \"3713b4f8-2ee3-4078-859a-dca17076f9a6\") " pod="openshift-multus/multus-r6wmt" Dec 10 12:16:01 crc kubenswrapper[4689]: I1210 12:16:01.841371 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/bf732b59-88ab-4673-9d0c-e479b9138b30-os-release\") pod \"multus-additional-cni-plugins-k7kbl\" (UID: \"bf732b59-88ab-4673-9d0c-e479b9138b30\") " pod="openshift-multus/multus-additional-cni-plugins-k7kbl" Dec 10 12:16:01 crc kubenswrapper[4689]: I1210 12:16:01.841385 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b434fe8e-c4c2-4979-a9b6-8561523c2d9d-etc-openvswitch\") pod \"ovnkube-node-d5s24\" (UID: \"b434fe8e-c4c2-4979-a9b6-8561523c2d9d\") " pod="openshift-ovn-kubernetes/ovnkube-node-d5s24" Dec 10 12:16:01 crc 
kubenswrapper[4689]: I1210 12:16:01.841398 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/b434fe8e-c4c2-4979-a9b6-8561523c2d9d-run-ovn\") pod \"ovnkube-node-d5s24\" (UID: \"b434fe8e-c4c2-4979-a9b6-8561523c2d9d\") " pod="openshift-ovn-kubernetes/ovnkube-node-d5s24" Dec 10 12:16:01 crc kubenswrapper[4689]: I1210 12:16:01.841412 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/3713b4f8-2ee3-4078-859a-dca17076f9a6-host-var-lib-kubelet\") pod \"multus-r6wmt\" (UID: \"3713b4f8-2ee3-4078-859a-dca17076f9a6\") " pod="openshift-multus/multus-r6wmt" Dec 10 12:16:01 crc kubenswrapper[4689]: I1210 12:16:01.841426 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/3713b4f8-2ee3-4078-859a-dca17076f9a6-multus-conf-dir\") pod \"multus-r6wmt\" (UID: \"3713b4f8-2ee3-4078-859a-dca17076f9a6\") " pod="openshift-multus/multus-r6wmt" Dec 10 12:16:01 crc kubenswrapper[4689]: I1210 12:16:01.841439 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lmrn4\" (UniqueName: \"kubernetes.io/projected/3713b4f8-2ee3-4078-859a-dca17076f9a6-kube-api-access-lmrn4\") pod \"multus-r6wmt\" (UID: \"3713b4f8-2ee3-4078-859a-dca17076f9a6\") " pod="openshift-multus/multus-r6wmt" Dec 10 12:16:01 crc kubenswrapper[4689]: I1210 12:16:01.841453 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/b434fe8e-c4c2-4979-a9b6-8561523c2d9d-run-systemd\") pod \"ovnkube-node-d5s24\" (UID: \"b434fe8e-c4c2-4979-a9b6-8561523c2d9d\") " pod="openshift-ovn-kubernetes/ovnkube-node-d5s24" Dec 10 12:16:01 crc kubenswrapper[4689]: I1210 12:16:01.841466 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/3713b4f8-2ee3-4078-859a-dca17076f9a6-cni-binary-copy\") pod \"multus-r6wmt\" (UID: \"3713b4f8-2ee3-4078-859a-dca17076f9a6\") " pod="openshift-multus/multus-r6wmt" Dec 10 12:16:01 crc kubenswrapper[4689]: I1210 12:16:01.841480 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/3713b4f8-2ee3-4078-859a-dca17076f9a6-multus-daemon-config\") pod \"multus-r6wmt\" (UID: \"3713b4f8-2ee3-4078-859a-dca17076f9a6\") " pod="openshift-multus/multus-r6wmt" Dec 10 12:16:01 crc kubenswrapper[4689]: I1210 12:16:01.841496 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/3713b4f8-2ee3-4078-859a-dca17076f9a6-etc-kubernetes\") pod \"multus-r6wmt\" (UID: \"3713b4f8-2ee3-4078-859a-dca17076f9a6\") " pod="openshift-multus/multus-r6wmt" Dec 10 12:16:01 crc kubenswrapper[4689]: I1210 12:16:01.841511 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/bf732b59-88ab-4673-9d0c-e479b9138b30-cnibin\") pod \"multus-additional-cni-plugins-k7kbl\" (UID: \"bf732b59-88ab-4673-9d0c-e479b9138b30\") " pod="openshift-multus/multus-additional-cni-plugins-k7kbl" Dec 10 12:16:01 crc 
kubenswrapper[4689]: I1210 12:16:01.841526 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/bf732b59-88ab-4673-9d0c-e479b9138b30-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-k7kbl\" (UID: \"bf732b59-88ab-4673-9d0c-e479b9138b30\") " pod="openshift-multus/multus-additional-cni-plugins-k7kbl" Dec 10 12:16:01 crc kubenswrapper[4689]: I1210 12:16:01.841539 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/3713b4f8-2ee3-4078-859a-dca17076f9a6-os-release\") pod \"multus-r6wmt\" (UID: \"3713b4f8-2ee3-4078-859a-dca17076f9a6\") " pod="openshift-multus/multus-r6wmt" Dec 10 12:16:01 crc kubenswrapper[4689]: I1210 12:16:01.841552 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/3713b4f8-2ee3-4078-859a-dca17076f9a6-host-run-netns\") pod \"multus-r6wmt\" (UID: \"3713b4f8-2ee3-4078-859a-dca17076f9a6\") " pod="openshift-multus/multus-r6wmt" Dec 10 12:16:01 crc kubenswrapper[4689]: I1210 12:16:01.841566 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/3713b4f8-2ee3-4078-859a-dca17076f9a6-hostroot\") pod \"multus-r6wmt\" (UID: \"3713b4f8-2ee3-4078-859a-dca17076f9a6\") " pod="openshift-multus/multus-r6wmt" Dec 10 12:16:01 crc kubenswrapper[4689]: I1210 12:16:01.841579 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/bf732b59-88ab-4673-9d0c-e479b9138b30-tuning-conf-dir\") pod \"multus-additional-cni-plugins-k7kbl\" (UID: \"bf732b59-88ab-4673-9d0c-e479b9138b30\") " pod="openshift-multus/multus-additional-cni-plugins-k7kbl" Dec 10 12:16:01 crc kubenswrapper[4689]: I1210 12:16:01.841591 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/b434fe8e-c4c2-4979-a9b6-8561523c2d9d-systemd-units\") pod \"ovnkube-node-d5s24\" (UID: \"b434fe8e-c4c2-4979-a9b6-8561523c2d9d\") " pod="openshift-ovn-kubernetes/ovnkube-node-d5s24" Dec 10 12:16:01 crc kubenswrapper[4689]: I1210 12:16:01.841605 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/b434fe8e-c4c2-4979-a9b6-8561523c2d9d-env-overrides\") pod \"ovnkube-node-d5s24\" (UID: \"b434fe8e-c4c2-4979-a9b6-8561523c2d9d\") " pod="openshift-ovn-kubernetes/ovnkube-node-d5s24" Dec 10 12:16:01 crc kubenswrapper[4689]: I1210 12:16:01.841626 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/bf732b59-88ab-4673-9d0c-e479b9138b30-system-cni-dir\") pod \"multus-additional-cni-plugins-k7kbl\" (UID: \"bf732b59-88ab-4673-9d0c-e479b9138b30\") " pod="openshift-multus/multus-additional-cni-plugins-k7kbl" Dec 10 12:16:01 crc kubenswrapper[4689]: I1210 12:16:01.841641 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b434fe8e-c4c2-4979-a9b6-8561523c2d9d-host-var-lib-cni-networks-ovn-kubernetes\") pod 
\"ovnkube-node-d5s24\" (UID: \"b434fe8e-c4c2-4979-a9b6-8561523c2d9d\") " pod="openshift-ovn-kubernetes/ovnkube-node-d5s24" Dec 10 12:16:01 crc kubenswrapper[4689]: I1210 12:16:01.841657 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/3713b4f8-2ee3-4078-859a-dca17076f9a6-system-cni-dir\") pod \"multus-r6wmt\" (UID: \"3713b4f8-2ee3-4078-859a-dca17076f9a6\") " pod="openshift-multus/multus-r6wmt" Dec 10 12:16:01 crc kubenswrapper[4689]: I1210 12:16:01.841671 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/3713b4f8-2ee3-4078-859a-dca17076f9a6-multus-cni-dir\") pod \"multus-r6wmt\" (UID: \"3713b4f8-2ee3-4078-859a-dca17076f9a6\") " pod="openshift-multus/multus-r6wmt" Dec 10 12:16:01 crc kubenswrapper[4689]: I1210 12:16:01.841684 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/3713b4f8-2ee3-4078-859a-dca17076f9a6-cnibin\") pod \"multus-r6wmt\" (UID: \"3713b4f8-2ee3-4078-859a-dca17076f9a6\") " pod="openshift-multus/multus-r6wmt" Dec 10 12:16:01 crc kubenswrapper[4689]: I1210 12:16:01.841697 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/3713b4f8-2ee3-4078-859a-dca17076f9a6-multus-socket-dir-parent\") pod \"multus-r6wmt\" (UID: \"3713b4f8-2ee3-4078-859a-dca17076f9a6\") " pod="openshift-multus/multus-r6wmt" Dec 10 12:16:01 crc kubenswrapper[4689]: I1210 12:16:01.841718 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/b434fe8e-c4c2-4979-a9b6-8561523c2d9d-host-run-netns\") pod \"ovnkube-node-d5s24\" (UID: \"b434fe8e-c4c2-4979-a9b6-8561523c2d9d\") " pod="openshift-ovn-kubernetes/ovnkube-node-d5s24" Dec 10 12:16:01 crc kubenswrapper[4689]: I1210 12:16:01.841731 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b434fe8e-c4c2-4979-a9b6-8561523c2d9d-var-lib-openvswitch\") pod \"ovnkube-node-d5s24\" (UID: \"b434fe8e-c4c2-4979-a9b6-8561523c2d9d\") " pod="openshift-ovn-kubernetes/ovnkube-node-d5s24" Dec 10 12:16:01 crc kubenswrapper[4689]: I1210 12:16:01.841765 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/b434fe8e-c4c2-4979-a9b6-8561523c2d9d-ovnkube-script-lib\") pod \"ovnkube-node-d5s24\" (UID: \"b434fe8e-c4c2-4979-a9b6-8561523c2d9d\") " pod="openshift-ovn-kubernetes/ovnkube-node-d5s24" Dec 10 12:16:01 crc kubenswrapper[4689]: I1210 12:16:01.841781 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/3713b4f8-2ee3-4078-859a-dca17076f9a6-host-run-multus-certs\") pod \"multus-r6wmt\" (UID: \"3713b4f8-2ee3-4078-859a-dca17076f9a6\") " pod="openshift-multus/multus-r6wmt" Dec 10 12:16:01 crc kubenswrapper[4689]: I1210 12:16:01.841795 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: 
\"kubernetes.io/host-path/3713b4f8-2ee3-4078-859a-dca17076f9a6-host-run-k8s-cni-cncf-io\") pod \"multus-r6wmt\" (UID: \"3713b4f8-2ee3-4078-859a-dca17076f9a6\") " pod="openshift-multus/multus-r6wmt" Dec 10 12:16:01 crc kubenswrapper[4689]: I1210 12:16:01.841809 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/3713b4f8-2ee3-4078-859a-dca17076f9a6-host-var-lib-cni-bin\") pod \"multus-r6wmt\" (UID: \"3713b4f8-2ee3-4078-859a-dca17076f9a6\") " pod="openshift-multus/multus-r6wmt" Dec 10 12:16:01 crc kubenswrapper[4689]: I1210 12:16:01.841824 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/b434fe8e-c4c2-4979-a9b6-8561523c2d9d-ovn-node-metrics-cert\") pod \"ovnkube-node-d5s24\" (UID: \"b434fe8e-c4c2-4979-a9b6-8561523c2d9d\") " pod="openshift-ovn-kubernetes/ovnkube-node-d5s24" Dec 10 12:16:01 crc kubenswrapper[4689]: E1210 12:16:01.849172 4689 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T12:16:01Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T12:16:01Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:01Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T12:16:01Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T12:16:01Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:01Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"488f3058-98a0-4e39-b016-529d0c992401\\\",\\\"systemUUID\\\":\\\"41a6821f-a04f-4a5d-a8e5-790b0745d6fd\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:01Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:01 crc kubenswrapper[4689]: I1210 12:16:01.855194 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:16:01 crc kubenswrapper[4689]: I1210 12:16:01.855230 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 10 12:16:01 crc kubenswrapper[4689]: I1210 12:16:01.855240 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:16:01 crc kubenswrapper[4689]: I1210 12:16:01.855258 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:16:01 crc kubenswrapper[4689]: I1210 12:16:01.855270 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:01Z","lastTransitionTime":"2025-12-10T12:16:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 12:16:01 crc kubenswrapper[4689]: I1210 12:16:01.859853 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-r6wmt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3713b4f8-2ee3-4078-859a-dca17076f9a6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lmrn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T12:16:01Z\\\"}}\" for pod \"openshift-multus\"/\"multus-r6wmt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:01Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:01 crc kubenswrapper[4689]: E1210 12:16:01.874271 4689 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T12:16:01Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T12:16:01Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:01Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T12:16:01Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T12:16:01Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:01Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\
"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":45063
7738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"488f3058-98a0-4e39-b016-529d0c992401\\\",\\\"systemUUID\\\":\\\"41a6821f-a04f-4a5d-a8e5-790b0745d6fd\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:01Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:01 crc kubenswrapper[4689]: E1210 12:16:01.874428 4689 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Dec 10 12:16:01 crc kubenswrapper[4689]: I1210 12:16:01.877461 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:16:01 crc kubenswrapper[4689]: I1210 12:16:01.877656 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:16:01 crc kubenswrapper[4689]: I1210 12:16:01.877745 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:16:01 crc kubenswrapper[4689]: I1210 12:16:01.877841 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:16:01 crc kubenswrapper[4689]: I1210 12:16:01.877942 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:01Z","lastTransitionTime":"2025-12-10T12:16:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 12:16:01 crc kubenswrapper[4689]: I1210 12:16:01.883617 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"948e2421-6bdf-45d9-b484-3d7cfcbeff5f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:32Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:32Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73fb5dc9c58a2a2027866853aecf404bcc2babdb9e5bbd3cf0efcb16f38ba351\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e167a00c3a4e40154ca58094cba41a896104fd81044158589ed4d7a06dba9e7d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://394f62bb6c4fa6d9b63f8404087ff96594e2e38240c6c7886ad71fbb7863681f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartC
ount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d3818f3ff6d23a4b5f19b82c3ce7c6a9a0275f981966b78d0f1d5977acfc5da\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cc1b41d65298f8986fa477bbe4462be91f61b46be4420fac49fe9b1cff91d4d3\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-10T12:15:57Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1210 12:15:46.775770 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1210 12:15:46.777515 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-772356655/tls.crt::/tmp/serving-cert-772356655/tls.key\\\\\\\"\\\\nI1210 12:15:55.300683 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1210 12:15:55.311834 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1210 12:15:55.311870 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1210 12:15:55.311907 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1210 12:15:55.311937 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1210 12:15:55.334823 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1210 12:15:55.334877 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1210 12:15:55.334889 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1210 12:15:55.334913 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1210 12:15:55.334923 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1210 12:15:55.334932 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1210 12:15:55.334948 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1210 12:15:55.335346 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1210 12:15:55.337241 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-10T12:15:36Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://63c6b92fa349dafbaaa4e2700676f8c1c53627fc703dba25853c7457bf7eb439\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:34Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9e2628add435f4ba7d9b92d1f4e9cf619c37e8dc954d6fce13ef72581ae775e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9e2628add435f4ba7d9b92d1f4e9cf619c37e8dc954d6fce13ef72581ae775e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T12:15:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T12:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T12:15:32Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:01Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:01 crc kubenswrapper[4689]: I1210 12:16:01.897280 4689 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-12-10 12:11:00 +0000 UTC, rotation deadline is 2026-10-01 02:25:02.206110673 +0000 UTC Dec 10 12:16:01 crc kubenswrapper[4689]: I1210 12:16:01.897371 4689 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 7070h9m0.308742264s for next certificate rotation Dec 10 12:16:01 crc kubenswrapper[4689]: I1210 12:16:01.900685 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"18bf8be2-3022-4c8b-a062-e12ddac113a8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://44524b486bd4c6b69eb1219605d0545eaa129605484dfbca87ca741ce71c2991\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://604acbb51bb890eb420955051c89aa5a8f2005ce63f1fe39c09c35dc3c8b4470\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d4addfb661bd4ba43817e6e5db29f30589839e5afe46a8f271e43c98920efdc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a68967b286ccc9c486e05b03201574bfa811ee68b0cbcbb2b0b1a379f5c1bab\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T12:15:32Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:01Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:01 crc kubenswrapper[4689]: I1210 12:16:01.915061 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:57Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:01Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:01 crc kubenswrapper[4689]: I1210 12:16:01.929452 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-db6zk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a41ebdcd-910f-4669-992d-296e1a92162b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:01Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:01Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68btn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68btn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T12:16:01Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-db6zk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:01Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:01 crc kubenswrapper[4689]: I1210 12:16:01.942495 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06f8726511d5ea564b457eeb6de5b75275d78ee946595d8fbe37e575f3a1b4ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://879d2e122b3f677f755d79b93cd0abed60360a7b58bd685fbc229157c3358bbf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:01Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:01 crc kubenswrapper[4689]: I1210 12:16:01.943339 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6jq2j\" (UniqueName: \"kubernetes.io/projected/bf732b59-88ab-4673-9d0c-e479b9138b30-kube-api-access-6jq2j\") pod \"multus-additional-cni-plugins-k7kbl\" (UID: \"bf732b59-88ab-4673-9d0c-e479b9138b30\") " pod="openshift-multus/multus-additional-cni-plugins-k7kbl" Dec 10 12:16:01 crc kubenswrapper[4689]: I1210 
12:16:01.943488 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/b434fe8e-c4c2-4979-a9b6-8561523c2d9d-host-cni-bin\") pod \"ovnkube-node-d5s24\" (UID: \"b434fe8e-c4c2-4979-a9b6-8561523c2d9d\") " pod="openshift-ovn-kubernetes/ovnkube-node-d5s24" Dec 10 12:16:01 crc kubenswrapper[4689]: I1210 12:16:01.943610 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/bf732b59-88ab-4673-9d0c-e479b9138b30-cni-binary-copy\") pod \"multus-additional-cni-plugins-k7kbl\" (UID: \"bf732b59-88ab-4673-9d0c-e479b9138b30\") " pod="openshift-multus/multus-additional-cni-plugins-k7kbl" Dec 10 12:16:01 crc kubenswrapper[4689]: I1210 12:16:01.943677 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/b434fe8e-c4c2-4979-a9b6-8561523c2d9d-host-cni-bin\") pod \"ovnkube-node-d5s24\" (UID: \"b434fe8e-c4c2-4979-a9b6-8561523c2d9d\") " pod="openshift-ovn-kubernetes/ovnkube-node-d5s24" Dec 10 12:16:01 crc kubenswrapper[4689]: I1210 12:16:01.943712 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/b434fe8e-c4c2-4979-a9b6-8561523c2d9d-host-slash\") pod \"ovnkube-node-d5s24\" (UID: \"b434fe8e-c4c2-4979-a9b6-8561523c2d9d\") " pod="openshift-ovn-kubernetes/ovnkube-node-d5s24" Dec 10 12:16:01 crc kubenswrapper[4689]: I1210 12:16:01.943874 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/b434fe8e-c4c2-4979-a9b6-8561523c2d9d-node-log\") pod \"ovnkube-node-d5s24\" (UID: \"b434fe8e-c4c2-4979-a9b6-8561523c2d9d\") " pod="openshift-ovn-kubernetes/ovnkube-node-d5s24" Dec 10 12:16:01 crc kubenswrapper[4689]: I1210 12:16:01.943954 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/b434fe8e-c4c2-4979-a9b6-8561523c2d9d-node-log\") pod \"ovnkube-node-d5s24\" (UID: \"b434fe8e-c4c2-4979-a9b6-8561523c2d9d\") " pod="openshift-ovn-kubernetes/ovnkube-node-d5s24" Dec 10 12:16:01 crc kubenswrapper[4689]: I1210 12:16:01.943818 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/b434fe8e-c4c2-4979-a9b6-8561523c2d9d-host-slash\") pod \"ovnkube-node-d5s24\" (UID: \"b434fe8e-c4c2-4979-a9b6-8561523c2d9d\") " pod="openshift-ovn-kubernetes/ovnkube-node-d5s24" Dec 10 12:16:01 crc kubenswrapper[4689]: I1210 12:16:01.944114 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hgcnl\" (UniqueName: \"kubernetes.io/projected/b434fe8e-c4c2-4979-a9b6-8561523c2d9d-kube-api-access-hgcnl\") pod \"ovnkube-node-d5s24\" (UID: \"b434fe8e-c4c2-4979-a9b6-8561523c2d9d\") " pod="openshift-ovn-kubernetes/ovnkube-node-d5s24" Dec 10 12:16:01 crc kubenswrapper[4689]: I1210 12:16:01.944237 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/b434fe8e-c4c2-4979-a9b6-8561523c2d9d-host-kubelet\") pod \"ovnkube-node-d5s24\" (UID: \"b434fe8e-c4c2-4979-a9b6-8561523c2d9d\") " pod="openshift-ovn-kubernetes/ovnkube-node-d5s24" Dec 10 12:16:01 crc kubenswrapper[4689]: I1210 12:16:01.944331 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: 
\"kubernetes.io/host-path/b434fe8e-c4c2-4979-a9b6-8561523c2d9d-host-kubelet\") pod \"ovnkube-node-d5s24\" (UID: \"b434fe8e-c4c2-4979-a9b6-8561523c2d9d\") " pod="openshift-ovn-kubernetes/ovnkube-node-d5s24" Dec 10 12:16:01 crc kubenswrapper[4689]: I1210 12:16:01.944349 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/bf732b59-88ab-4673-9d0c-e479b9138b30-cni-binary-copy\") pod \"multus-additional-cni-plugins-k7kbl\" (UID: \"bf732b59-88ab-4673-9d0c-e479b9138b30\") " pod="openshift-multus/multus-additional-cni-plugins-k7kbl" Dec 10 12:16:01 crc kubenswrapper[4689]: I1210 12:16:01.944456 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b434fe8e-c4c2-4979-a9b6-8561523c2d9d-run-openvswitch\") pod \"ovnkube-node-d5s24\" (UID: \"b434fe8e-c4c2-4979-a9b6-8561523c2d9d\") " pod="openshift-ovn-kubernetes/ovnkube-node-d5s24" Dec 10 12:16:01 crc kubenswrapper[4689]: I1210 12:16:01.944551 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b434fe8e-c4c2-4979-a9b6-8561523c2d9d-run-openvswitch\") pod \"ovnkube-node-d5s24\" (UID: \"b434fe8e-c4c2-4979-a9b6-8561523c2d9d\") " pod="openshift-ovn-kubernetes/ovnkube-node-d5s24" Dec 10 12:16:01 crc kubenswrapper[4689]: I1210 12:16:01.944668 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/b434fe8e-c4c2-4979-a9b6-8561523c2d9d-log-socket\") pod \"ovnkube-node-d5s24\" (UID: \"b434fe8e-c4c2-4979-a9b6-8561523c2d9d\") " pod="openshift-ovn-kubernetes/ovnkube-node-d5s24" Dec 10 12:16:01 crc kubenswrapper[4689]: I1210 12:16:01.944782 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/b434fe8e-c4c2-4979-a9b6-8561523c2d9d-log-socket\") pod \"ovnkube-node-d5s24\" (UID: \"b434fe8e-c4c2-4979-a9b6-8561523c2d9d\") " pod="openshift-ovn-kubernetes/ovnkube-node-d5s24" Dec 10 12:16:01 crc kubenswrapper[4689]: I1210 12:16:01.944795 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/b434fe8e-c4c2-4979-a9b6-8561523c2d9d-ovnkube-config\") pod \"ovnkube-node-d5s24\" (UID: \"b434fe8e-c4c2-4979-a9b6-8561523c2d9d\") " pod="openshift-ovn-kubernetes/ovnkube-node-d5s24" Dec 10 12:16:01 crc kubenswrapper[4689]: I1210 12:16:01.944949 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/bf732b59-88ab-4673-9d0c-e479b9138b30-os-release\") pod \"multus-additional-cni-plugins-k7kbl\" (UID: \"bf732b59-88ab-4673-9d0c-e479b9138b30\") " pod="openshift-multus/multus-additional-cni-plugins-k7kbl" Dec 10 12:16:01 crc kubenswrapper[4689]: I1210 12:16:01.945042 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/bf732b59-88ab-4673-9d0c-e479b9138b30-os-release\") pod \"multus-additional-cni-plugins-k7kbl\" (UID: \"bf732b59-88ab-4673-9d0c-e479b9138b30\") " pod="openshift-multus/multus-additional-cni-plugins-k7kbl" Dec 10 12:16:01 crc kubenswrapper[4689]: I1210 12:16:01.945126 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: 
\"kubernetes.io/host-path/b434fe8e-c4c2-4979-a9b6-8561523c2d9d-etc-openvswitch\") pod \"ovnkube-node-d5s24\" (UID: \"b434fe8e-c4c2-4979-a9b6-8561523c2d9d\") " pod="openshift-ovn-kubernetes/ovnkube-node-d5s24" Dec 10 12:16:01 crc kubenswrapper[4689]: I1210 12:16:01.945235 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b434fe8e-c4c2-4979-a9b6-8561523c2d9d-host-run-ovn-kubernetes\") pod \"ovnkube-node-d5s24\" (UID: \"b434fe8e-c4c2-4979-a9b6-8561523c2d9d\") " pod="openshift-ovn-kubernetes/ovnkube-node-d5s24" Dec 10 12:16:01 crc kubenswrapper[4689]: I1210 12:16:01.945334 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/3713b4f8-2ee3-4078-859a-dca17076f9a6-host-var-lib-cni-multus\") pod \"multus-r6wmt\" (UID: \"3713b4f8-2ee3-4078-859a-dca17076f9a6\") " pod="openshift-multus/multus-r6wmt" Dec 10 12:16:01 crc kubenswrapper[4689]: I1210 12:16:01.945430 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/b434fe8e-c4c2-4979-a9b6-8561523c2d9d-run-ovn\") pod \"ovnkube-node-d5s24\" (UID: \"b434fe8e-c4c2-4979-a9b6-8561523c2d9d\") " pod="openshift-ovn-kubernetes/ovnkube-node-d5s24" Dec 10 12:16:01 crc kubenswrapper[4689]: I1210 12:16:01.945514 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/b434fe8e-c4c2-4979-a9b6-8561523c2d9d-ovnkube-config\") pod \"ovnkube-node-d5s24\" (UID: \"b434fe8e-c4c2-4979-a9b6-8561523c2d9d\") " pod="openshift-ovn-kubernetes/ovnkube-node-d5s24" Dec 10 12:16:01 crc kubenswrapper[4689]: I1210 12:16:01.945387 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/3713b4f8-2ee3-4078-859a-dca17076f9a6-host-var-lib-cni-multus\") pod \"multus-r6wmt\" (UID: \"3713b4f8-2ee3-4078-859a-dca17076f9a6\") " pod="openshift-multus/multus-r6wmt" Dec 10 12:16:01 crc kubenswrapper[4689]: I1210 12:16:01.945544 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/b434fe8e-c4c2-4979-a9b6-8561523c2d9d-run-ovn\") pod \"ovnkube-node-d5s24\" (UID: \"b434fe8e-c4c2-4979-a9b6-8561523c2d9d\") " pod="openshift-ovn-kubernetes/ovnkube-node-d5s24" Dec 10 12:16:01 crc kubenswrapper[4689]: I1210 12:16:01.945352 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b434fe8e-c4c2-4979-a9b6-8561523c2d9d-host-run-ovn-kubernetes\") pod \"ovnkube-node-d5s24\" (UID: \"b434fe8e-c4c2-4979-a9b6-8561523c2d9d\") " pod="openshift-ovn-kubernetes/ovnkube-node-d5s24" Dec 10 12:16:01 crc kubenswrapper[4689]: I1210 12:16:01.945520 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/b434fe8e-c4c2-4979-a9b6-8561523c2d9d-run-systemd\") pod \"ovnkube-node-d5s24\" (UID: \"b434fe8e-c4c2-4979-a9b6-8561523c2d9d\") " pod="openshift-ovn-kubernetes/ovnkube-node-d5s24" Dec 10 12:16:01 crc kubenswrapper[4689]: I1210 12:16:01.945175 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b434fe8e-c4c2-4979-a9b6-8561523c2d9d-etc-openvswitch\") pod \"ovnkube-node-d5s24\" (UID: 
\"b434fe8e-c4c2-4979-a9b6-8561523c2d9d\") " pod="openshift-ovn-kubernetes/ovnkube-node-d5s24" Dec 10 12:16:01 crc kubenswrapper[4689]: I1210 12:16:01.945618 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/3713b4f8-2ee3-4078-859a-dca17076f9a6-cni-binary-copy\") pod \"multus-r6wmt\" (UID: \"3713b4f8-2ee3-4078-859a-dca17076f9a6\") " pod="openshift-multus/multus-r6wmt" Dec 10 12:16:01 crc kubenswrapper[4689]: I1210 12:16:01.945646 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/3713b4f8-2ee3-4078-859a-dca17076f9a6-host-var-lib-kubelet\") pod \"multus-r6wmt\" (UID: \"3713b4f8-2ee3-4078-859a-dca17076f9a6\") " pod="openshift-multus/multus-r6wmt" Dec 10 12:16:01 crc kubenswrapper[4689]: I1210 12:16:01.945668 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/3713b4f8-2ee3-4078-859a-dca17076f9a6-multus-conf-dir\") pod \"multus-r6wmt\" (UID: \"3713b4f8-2ee3-4078-859a-dca17076f9a6\") " pod="openshift-multus/multus-r6wmt" Dec 10 12:16:01 crc kubenswrapper[4689]: I1210 12:16:01.945689 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lmrn4\" (UniqueName: \"kubernetes.io/projected/3713b4f8-2ee3-4078-859a-dca17076f9a6-kube-api-access-lmrn4\") pod \"multus-r6wmt\" (UID: \"3713b4f8-2ee3-4078-859a-dca17076f9a6\") " pod="openshift-multus/multus-r6wmt" Dec 10 12:16:01 crc kubenswrapper[4689]: I1210 12:16:01.945719 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/3713b4f8-2ee3-4078-859a-dca17076f9a6-host-var-lib-kubelet\") pod \"multus-r6wmt\" (UID: \"3713b4f8-2ee3-4078-859a-dca17076f9a6\") " pod="openshift-multus/multus-r6wmt" Dec 10 12:16:01 crc kubenswrapper[4689]: I1210 12:16:01.945738 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/bf732b59-88ab-4673-9d0c-e479b9138b30-cnibin\") pod \"multus-additional-cni-plugins-k7kbl\" (UID: \"bf732b59-88ab-4673-9d0c-e479b9138b30\") " pod="openshift-multus/multus-additional-cni-plugins-k7kbl" Dec 10 12:16:01 crc kubenswrapper[4689]: I1210 12:16:01.945755 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/3713b4f8-2ee3-4078-859a-dca17076f9a6-multus-conf-dir\") pod \"multus-r6wmt\" (UID: \"3713b4f8-2ee3-4078-859a-dca17076f9a6\") " pod="openshift-multus/multus-r6wmt" Dec 10 12:16:01 crc kubenswrapper[4689]: I1210 12:16:01.945769 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/bf732b59-88ab-4673-9d0c-e479b9138b30-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-k7kbl\" (UID: \"bf732b59-88ab-4673-9d0c-e479b9138b30\") " pod="openshift-multus/multus-additional-cni-plugins-k7kbl" Dec 10 12:16:01 crc kubenswrapper[4689]: I1210 12:16:01.945868 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/bf732b59-88ab-4673-9d0c-e479b9138b30-cnibin\") pod \"multus-additional-cni-plugins-k7kbl\" (UID: \"bf732b59-88ab-4673-9d0c-e479b9138b30\") " pod="openshift-multus/multus-additional-cni-plugins-k7kbl" Dec 10 12:16:01 crc kubenswrapper[4689]: I1210 
12:16:01.945890 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/3713b4f8-2ee3-4078-859a-dca17076f9a6-os-release\") pod \"multus-r6wmt\" (UID: \"3713b4f8-2ee3-4078-859a-dca17076f9a6\") " pod="openshift-multus/multus-r6wmt" Dec 10 12:16:01 crc kubenswrapper[4689]: I1210 12:16:01.945935 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/3713b4f8-2ee3-4078-859a-dca17076f9a6-host-run-netns\") pod \"multus-r6wmt\" (UID: \"3713b4f8-2ee3-4078-859a-dca17076f9a6\") " pod="openshift-multus/multus-r6wmt" Dec 10 12:16:01 crc kubenswrapper[4689]: I1210 12:16:01.945959 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/3713b4f8-2ee3-4078-859a-dca17076f9a6-multus-daemon-config\") pod \"multus-r6wmt\" (UID: \"3713b4f8-2ee3-4078-859a-dca17076f9a6\") " pod="openshift-multus/multus-r6wmt" Dec 10 12:16:01 crc kubenswrapper[4689]: I1210 12:16:01.945993 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/3713b4f8-2ee3-4078-859a-dca17076f9a6-etc-kubernetes\") pod \"multus-r6wmt\" (UID: \"3713b4f8-2ee3-4078-859a-dca17076f9a6\") " pod="openshift-multus/multus-r6wmt" Dec 10 12:16:01 crc kubenswrapper[4689]: I1210 12:16:01.946012 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/3713b4f8-2ee3-4078-859a-dca17076f9a6-hostroot\") pod \"multus-r6wmt\" (UID: \"3713b4f8-2ee3-4078-859a-dca17076f9a6\") " pod="openshift-multus/multus-r6wmt" Dec 10 12:16:01 crc kubenswrapper[4689]: I1210 12:16:01.946030 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/3713b4f8-2ee3-4078-859a-dca17076f9a6-host-run-netns\") pod \"multus-r6wmt\" (UID: \"3713b4f8-2ee3-4078-859a-dca17076f9a6\") " pod="openshift-multus/multus-r6wmt" Dec 10 12:16:01 crc kubenswrapper[4689]: I1210 12:16:01.946080 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/bf732b59-88ab-4673-9d0c-e479b9138b30-system-cni-dir\") pod \"multus-additional-cni-plugins-k7kbl\" (UID: \"bf732b59-88ab-4673-9d0c-e479b9138b30\") " pod="openshift-multus/multus-additional-cni-plugins-k7kbl" Dec 10 12:16:01 crc kubenswrapper[4689]: I1210 12:16:01.946092 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/3713b4f8-2ee3-4078-859a-dca17076f9a6-hostroot\") pod \"multus-r6wmt\" (UID: \"3713b4f8-2ee3-4078-859a-dca17076f9a6\") " pod="openshift-multus/multus-r6wmt" Dec 10 12:16:01 crc kubenswrapper[4689]: I1210 12:16:01.946104 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/bf732b59-88ab-4673-9d0c-e479b9138b30-tuning-conf-dir\") pod \"multus-additional-cni-plugins-k7kbl\" (UID: \"bf732b59-88ab-4673-9d0c-e479b9138b30\") " pod="openshift-multus/multus-additional-cni-plugins-k7kbl" Dec 10 12:16:01 crc kubenswrapper[4689]: I1210 12:16:01.946121 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/b434fe8e-c4c2-4979-a9b6-8561523c2d9d-systemd-units\") pod 
\"ovnkube-node-d5s24\" (UID: \"b434fe8e-c4c2-4979-a9b6-8561523c2d9d\") " pod="openshift-ovn-kubernetes/ovnkube-node-d5s24" Dec 10 12:16:01 crc kubenswrapper[4689]: I1210 12:16:01.946126 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/3713b4f8-2ee3-4078-859a-dca17076f9a6-etc-kubernetes\") pod \"multus-r6wmt\" (UID: \"3713b4f8-2ee3-4078-859a-dca17076f9a6\") " pod="openshift-multus/multus-r6wmt" Dec 10 12:16:01 crc kubenswrapper[4689]: I1210 12:16:01.946138 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/b434fe8e-c4c2-4979-a9b6-8561523c2d9d-env-overrides\") pod \"ovnkube-node-d5s24\" (UID: \"b434fe8e-c4c2-4979-a9b6-8561523c2d9d\") " pod="openshift-ovn-kubernetes/ovnkube-node-d5s24" Dec 10 12:16:01 crc kubenswrapper[4689]: I1210 12:16:01.946162 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/bf732b59-88ab-4673-9d0c-e479b9138b30-system-cni-dir\") pod \"multus-additional-cni-plugins-k7kbl\" (UID: \"bf732b59-88ab-4673-9d0c-e479b9138b30\") " pod="openshift-multus/multus-additional-cni-plugins-k7kbl" Dec 10 12:16:01 crc kubenswrapper[4689]: I1210 12:16:01.946168 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b434fe8e-c4c2-4979-a9b6-8561523c2d9d-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-d5s24\" (UID: \"b434fe8e-c4c2-4979-a9b6-8561523c2d9d\") " pod="openshift-ovn-kubernetes/ovnkube-node-d5s24" Dec 10 12:16:01 crc kubenswrapper[4689]: I1210 12:16:01.946192 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b434fe8e-c4c2-4979-a9b6-8561523c2d9d-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-d5s24\" (UID: \"b434fe8e-c4c2-4979-a9b6-8561523c2d9d\") " pod="openshift-ovn-kubernetes/ovnkube-node-d5s24" Dec 10 12:16:01 crc kubenswrapper[4689]: I1210 12:16:01.946217 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/3713b4f8-2ee3-4078-859a-dca17076f9a6-system-cni-dir\") pod \"multus-r6wmt\" (UID: \"3713b4f8-2ee3-4078-859a-dca17076f9a6\") " pod="openshift-multus/multus-r6wmt" Dec 10 12:16:01 crc kubenswrapper[4689]: I1210 12:16:01.946249 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/b434fe8e-c4c2-4979-a9b6-8561523c2d9d-host-run-netns\") pod \"ovnkube-node-d5s24\" (UID: \"b434fe8e-c4c2-4979-a9b6-8561523c2d9d\") " pod="openshift-ovn-kubernetes/ovnkube-node-d5s24" Dec 10 12:16:01 crc kubenswrapper[4689]: I1210 12:16:01.946273 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b434fe8e-c4c2-4979-a9b6-8561523c2d9d-var-lib-openvswitch\") pod \"ovnkube-node-d5s24\" (UID: \"b434fe8e-c4c2-4979-a9b6-8561523c2d9d\") " pod="openshift-ovn-kubernetes/ovnkube-node-d5s24" Dec 10 12:16:01 crc kubenswrapper[4689]: I1210 12:16:01.946294 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/3713b4f8-2ee3-4078-859a-dca17076f9a6-multus-cni-dir\") pod \"multus-r6wmt\" (UID: 
\"3713b4f8-2ee3-4078-859a-dca17076f9a6\") " pod="openshift-multus/multus-r6wmt" Dec 10 12:16:01 crc kubenswrapper[4689]: I1210 12:16:01.946304 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/3713b4f8-2ee3-4078-859a-dca17076f9a6-cni-binary-copy\") pod \"multus-r6wmt\" (UID: \"3713b4f8-2ee3-4078-859a-dca17076f9a6\") " pod="openshift-multus/multus-r6wmt" Dec 10 12:16:01 crc kubenswrapper[4689]: I1210 12:16:01.946316 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/3713b4f8-2ee3-4078-859a-dca17076f9a6-cnibin\") pod \"multus-r6wmt\" (UID: \"3713b4f8-2ee3-4078-859a-dca17076f9a6\") " pod="openshift-multus/multus-r6wmt" Dec 10 12:16:01 crc kubenswrapper[4689]: I1210 12:16:01.946348 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/3713b4f8-2ee3-4078-859a-dca17076f9a6-multus-socket-dir-parent\") pod \"multus-r6wmt\" (UID: \"3713b4f8-2ee3-4078-859a-dca17076f9a6\") " pod="openshift-multus/multus-r6wmt" Dec 10 12:16:01 crc kubenswrapper[4689]: I1210 12:16:01.946355 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/3713b4f8-2ee3-4078-859a-dca17076f9a6-cnibin\") pod \"multus-r6wmt\" (UID: \"3713b4f8-2ee3-4078-859a-dca17076f9a6\") " pod="openshift-multus/multus-r6wmt" Dec 10 12:16:01 crc kubenswrapper[4689]: I1210 12:16:01.946365 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/bf732b59-88ab-4673-9d0c-e479b9138b30-tuning-conf-dir\") pod \"multus-additional-cni-plugins-k7kbl\" (UID: \"bf732b59-88ab-4673-9d0c-e479b9138b30\") " pod="openshift-multus/multus-additional-cni-plugins-k7kbl" Dec 10 12:16:01 crc kubenswrapper[4689]: I1210 12:16:01.946012 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/3713b4f8-2ee3-4078-859a-dca17076f9a6-os-release\") pod \"multus-r6wmt\" (UID: \"3713b4f8-2ee3-4078-859a-dca17076f9a6\") " pod="openshift-multus/multus-r6wmt" Dec 10 12:16:01 crc kubenswrapper[4689]: I1210 12:16:01.946386 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/b434fe8e-c4c2-4979-a9b6-8561523c2d9d-ovnkube-script-lib\") pod \"ovnkube-node-d5s24\" (UID: \"b434fe8e-c4c2-4979-a9b6-8561523c2d9d\") " pod="openshift-ovn-kubernetes/ovnkube-node-d5s24" Dec 10 12:16:01 crc kubenswrapper[4689]: I1210 12:16:01.946413 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/3713b4f8-2ee3-4078-859a-dca17076f9a6-host-run-multus-certs\") pod \"multus-r6wmt\" (UID: \"3713b4f8-2ee3-4078-859a-dca17076f9a6\") " pod="openshift-multus/multus-r6wmt" Dec 10 12:16:01 crc kubenswrapper[4689]: I1210 12:16:01.946432 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/3713b4f8-2ee3-4078-859a-dca17076f9a6-host-run-k8s-cni-cncf-io\") pod \"multus-r6wmt\" (UID: \"3713b4f8-2ee3-4078-859a-dca17076f9a6\") " pod="openshift-multus/multus-r6wmt" Dec 10 12:16:01 crc kubenswrapper[4689]: I1210 12:16:01.946449 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/3713b4f8-2ee3-4078-859a-dca17076f9a6-host-var-lib-cni-bin\") pod \"multus-r6wmt\" (UID: \"3713b4f8-2ee3-4078-859a-dca17076f9a6\") " pod="openshift-multus/multus-r6wmt" Dec 10 12:16:01 crc kubenswrapper[4689]: I1210 12:16:01.946483 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/b434fe8e-c4c2-4979-a9b6-8561523c2d9d-ovn-node-metrics-cert\") pod \"ovnkube-node-d5s24\" (UID: \"b434fe8e-c4c2-4979-a9b6-8561523c2d9d\") " pod="openshift-ovn-kubernetes/ovnkube-node-d5s24" Dec 10 12:16:01 crc kubenswrapper[4689]: I1210 12:16:01.946501 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b434fe8e-c4c2-4979-a9b6-8561523c2d9d-host-cni-netd\") pod \"ovnkube-node-d5s24\" (UID: \"b434fe8e-c4c2-4979-a9b6-8561523c2d9d\") " pod="openshift-ovn-kubernetes/ovnkube-node-d5s24" Dec 10 12:16:01 crc kubenswrapper[4689]: I1210 12:16:01.946511 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/3713b4f8-2ee3-4078-859a-dca17076f9a6-system-cni-dir\") pod \"multus-r6wmt\" (UID: \"3713b4f8-2ee3-4078-859a-dca17076f9a6\") " pod="openshift-multus/multus-r6wmt" Dec 10 12:16:01 crc kubenswrapper[4689]: I1210 12:16:01.946223 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/b434fe8e-c4c2-4979-a9b6-8561523c2d9d-systemd-units\") pod \"ovnkube-node-d5s24\" (UID: \"b434fe8e-c4c2-4979-a9b6-8561523c2d9d\") " pod="openshift-ovn-kubernetes/ovnkube-node-d5s24" Dec 10 12:16:01 crc kubenswrapper[4689]: I1210 12:16:01.946568 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/3713b4f8-2ee3-4078-859a-dca17076f9a6-host-var-lib-cni-bin\") pod \"multus-r6wmt\" (UID: \"3713b4f8-2ee3-4078-859a-dca17076f9a6\") " pod="openshift-multus/multus-r6wmt" Dec 10 12:16:01 crc kubenswrapper[4689]: I1210 12:16:01.946582 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/b434fe8e-c4c2-4979-a9b6-8561523c2d9d-env-overrides\") pod \"ovnkube-node-d5s24\" (UID: \"b434fe8e-c4c2-4979-a9b6-8561523c2d9d\") " pod="openshift-ovn-kubernetes/ovnkube-node-d5s24" Dec 10 12:16:01 crc kubenswrapper[4689]: I1210 12:16:01.946599 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/3713b4f8-2ee3-4078-859a-dca17076f9a6-host-run-multus-certs\") pod \"multus-r6wmt\" (UID: \"3713b4f8-2ee3-4078-859a-dca17076f9a6\") " pod="openshift-multus/multus-r6wmt" Dec 10 12:16:01 crc kubenswrapper[4689]: I1210 12:16:01.946608 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/3713b4f8-2ee3-4078-859a-dca17076f9a6-multus-daemon-config\") pod \"multus-r6wmt\" (UID: \"3713b4f8-2ee3-4078-859a-dca17076f9a6\") " pod="openshift-multus/multus-r6wmt" Dec 10 12:16:01 crc kubenswrapper[4689]: I1210 12:16:01.946629 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b434fe8e-c4c2-4979-a9b6-8561523c2d9d-host-cni-netd\") pod \"ovnkube-node-d5s24\" (UID: \"b434fe8e-c4c2-4979-a9b6-8561523c2d9d\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-d5s24" Dec 10 12:16:01 crc kubenswrapper[4689]: I1210 12:16:01.946633 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/3713b4f8-2ee3-4078-859a-dca17076f9a6-host-run-k8s-cni-cncf-io\") pod \"multus-r6wmt\" (UID: \"3713b4f8-2ee3-4078-859a-dca17076f9a6\") " pod="openshift-multus/multus-r6wmt" Dec 10 12:16:01 crc kubenswrapper[4689]: I1210 12:16:01.946642 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/b434fe8e-c4c2-4979-a9b6-8561523c2d9d-host-run-netns\") pod \"ovnkube-node-d5s24\" (UID: \"b434fe8e-c4c2-4979-a9b6-8561523c2d9d\") " pod="openshift-ovn-kubernetes/ovnkube-node-d5s24" Dec 10 12:16:01 crc kubenswrapper[4689]: I1210 12:16:01.946656 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b434fe8e-c4c2-4979-a9b6-8561523c2d9d-var-lib-openvswitch\") pod \"ovnkube-node-d5s24\" (UID: \"b434fe8e-c4c2-4979-a9b6-8561523c2d9d\") " pod="openshift-ovn-kubernetes/ovnkube-node-d5s24" Dec 10 12:16:01 crc kubenswrapper[4689]: I1210 12:16:01.946793 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/3713b4f8-2ee3-4078-859a-dca17076f9a6-multus-socket-dir-parent\") pod \"multus-r6wmt\" (UID: \"3713b4f8-2ee3-4078-859a-dca17076f9a6\") " pod="openshift-multus/multus-r6wmt" Dec 10 12:16:01 crc kubenswrapper[4689]: I1210 12:16:01.946848 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/3713b4f8-2ee3-4078-859a-dca17076f9a6-multus-cni-dir\") pod \"multus-r6wmt\" (UID: \"3713b4f8-2ee3-4078-859a-dca17076f9a6\") " pod="openshift-multus/multus-r6wmt" Dec 10 12:16:01 crc kubenswrapper[4689]: I1210 12:16:01.946853 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/b434fe8e-c4c2-4979-a9b6-8561523c2d9d-ovnkube-script-lib\") pod \"ovnkube-node-d5s24\" (UID: \"b434fe8e-c4c2-4979-a9b6-8561523c2d9d\") " pod="openshift-ovn-kubernetes/ovnkube-node-d5s24" Dec 10 12:16:01 crc kubenswrapper[4689]: I1210 12:16:01.947541 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/b434fe8e-c4c2-4979-a9b6-8561523c2d9d-run-systemd\") pod \"ovnkube-node-d5s24\" (UID: \"b434fe8e-c4c2-4979-a9b6-8561523c2d9d\") " pod="openshift-ovn-kubernetes/ovnkube-node-d5s24" Dec 10 12:16:01 crc kubenswrapper[4689]: I1210 12:16:01.953519 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/b434fe8e-c4c2-4979-a9b6-8561523c2d9d-ovn-node-metrics-cert\") pod \"ovnkube-node-d5s24\" (UID: \"b434fe8e-c4c2-4979-a9b6-8561523c2d9d\") " pod="openshift-ovn-kubernetes/ovnkube-node-d5s24" Dec 10 12:16:01 crc kubenswrapper[4689]: I1210 12:16:01.959837 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:57Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:01Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:01 crc kubenswrapper[4689]: I1210 12:16:01.963122 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lmrn4\" (UniqueName: \"kubernetes.io/projected/3713b4f8-2ee3-4078-859a-dca17076f9a6-kube-api-access-lmrn4\") pod \"multus-r6wmt\" (UID: \"3713b4f8-2ee3-4078-859a-dca17076f9a6\") " pod="openshift-multus/multus-r6wmt" Dec 10 12:16:01 crc kubenswrapper[4689]: I1210 12:16:01.966595 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6jq2j\" (UniqueName: \"kubernetes.io/projected/bf732b59-88ab-4673-9d0c-e479b9138b30-kube-api-access-6jq2j\") pod \"multus-additional-cni-plugins-k7kbl\" (UID: \"bf732b59-88ab-4673-9d0c-e479b9138b30\") " pod="openshift-multus/multus-additional-cni-plugins-k7kbl" Dec 10 12:16:01 crc kubenswrapper[4689]: I1210 12:16:01.969837 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hgcnl\" (UniqueName: \"kubernetes.io/projected/b434fe8e-c4c2-4979-a9b6-8561523c2d9d-kube-api-access-hgcnl\") pod \"ovnkube-node-d5s24\" (UID: \"b434fe8e-c4c2-4979-a9b6-8561523c2d9d\") " pod="openshift-ovn-kubernetes/ovnkube-node-d5s24" Dec 10 12:16:01 crc kubenswrapper[4689]: I1210 12:16:01.978221 4689 
status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef7248d1db79267842514a13623e9e08f8e412e4af33a70b4fceafe7a9e43392\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:01Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:01 crc kubenswrapper[4689]: I1210 12:16:01.980024 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:16:01 crc kubenswrapper[4689]: I1210 12:16:01.980651 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:16:01 crc kubenswrapper[4689]: I1210 12:16:01.980926 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:16:01 crc kubenswrapper[4689]: I1210 12:16:01.981125 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:16:01 crc kubenswrapper[4689]: I1210 12:16:01.981287 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:01Z","lastTransitionTime":"2025-12-10T12:16:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 12:16:02 crc kubenswrapper[4689]: I1210 12:16:02.005748 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:02Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:02 crc kubenswrapper[4689]: I1210 12:16:02.013081 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-db6zk" event={"ID":"a41ebdcd-910f-4669-992d-296e1a92162b","Type":"ContainerStarted","Data":"21eff13bf08e5631078b18a6e63eea07f7ae0881f4d340b92273d47e443fc71c"} Dec 10 12:16:02 crc kubenswrapper[4689]: I1210 12:16:02.013450 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-db6zk" event={"ID":"a41ebdcd-910f-4669-992d-296e1a92162b","Type":"ContainerStarted","Data":"5ffc5e2e1cb81aa530273fc5c44b4eb25b760dfffa203e312280e7a1c0150505"} Dec 10 12:16:02 crc kubenswrapper[4689]: I1210 12:16:02.014110 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-db6zk" event={"ID":"a41ebdcd-910f-4669-992d-296e1a92162b","Type":"ContainerStarted","Data":"17bdf391e2db11d27e0af2facbcb042d89e5a8a822125e9b4fb5f9907410589e"} Dec 10 12:16:02 crc kubenswrapper[4689]: I1210 12:16:02.014646 4689 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-dns/node-resolver-77s7q" event={"ID":"9f25cd73-d88f-4d52-93b6-483589dc4ac4","Type":"ContainerStarted","Data":"c8ffa3e3c20704e59130d99f82bf7c73982dadfcd06b8b666265324aeb9115ad"} Dec 10 12:16:02 crc kubenswrapper[4689]: I1210 12:16:02.014756 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-77s7q" event={"ID":"9f25cd73-d88f-4d52-93b6-483589dc4ac4","Type":"ContainerStarted","Data":"23f30c1ddedfa30fafabcfaced4b5e8ec3f92cd045345c43a4e4197ab523e8b4"} Dec 10 12:16:02 crc kubenswrapper[4689]: I1210 12:16:02.023404 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-r6wmt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3713b4f8-2ee3-4078-859a-dca17076f9a6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\
":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lmrn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T12:16:01Z\\\"}}\" for pod \"openshift-multus\"/\"multus-r6wmt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:02Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:02 crc kubenswrapper[4689]: I1210 12:16:02.040616 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"948e2421-6bdf-45d9-b484-3d7cfcbeff5f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:32Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73fb5dc9c58a2a2027866853aecf404bcc2babdb9e5bbd3cf0efcb16f38ba351\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e167a00c3a4e40154ca58094cba41a896104fd81044158589ed4d7a06dba9e7d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://394f62bb6c4fa6d9b63f8404087ff96594e2e38240c6c7886ad71fbb7863681f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d3818f3ff6d23a4b5f19b82c3ce7c6a9a0275f981966b78d0f1d5977acfc5da\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cc1b41d65298f8986fa477bbe4462be91f61b46be4420fac49fe9b1cff91d4d3\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-10T12:15:57Z\\\",\\\"message\\\":\\\"lling back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1210 12:15:46.775770 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1210 12:15:46.777515 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-772356655/tls.crt::/tmp/serving-cert-772356655/tls.key\\\\\\\"\\\\nI1210 12:15:55.300683 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1210 12:15:55.311834 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1210 12:15:55.311870 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1210 12:15:55.311907 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1210 12:15:55.311937 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1210 12:15:55.334823 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1210 12:15:55.334877 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1210 12:15:55.334889 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1210 12:15:55.334913 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1210 12:15:55.334923 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1210 12:15:55.334932 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1210 12:15:55.334948 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1210 12:15:55.335346 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1210 12:15:55.337241 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-10T12:15:36Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://63c6b92fa349dafbaaa4e2700676f8c1c53627fc703dba25853c7457bf7eb439\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:34Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9e2628add435f4ba7d9b92d1f4e9cf619c37e8dc954d6fce13ef72581ae775e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9e2628add435f4ba7d9b92d1f4e9cf619c37e8dc954d6fce13ef72581ae775e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T12:15:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T12:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T12:15:32Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:02Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:02 crc kubenswrapper[4689]: I1210 12:16:02.058612 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"18bf8be2-3022-4c8b-a062-e12ddac113a8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://44524b486bd4c6b69eb1219605d0545eaa129605484dfbca87ca741ce71c2991\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://604acbb51bb890eb420955051c89aa5a8f2005ce63f1fe39c09c35dc3c8b4470\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d4addfb661bd4ba43817e6e5db29f30589839e5afe46a8f271e43c98920efdc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a68967b286ccc9c486e05b03201574bfa811ee68b0cbcbb2b0b1a379f5c1bab\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T12:15:32Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:02Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:02 crc kubenswrapper[4689]: I1210 12:16:02.066780 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-r6wmt" Dec 10 12:16:02 crc kubenswrapper[4689]: I1210 12:16:02.072305 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:57Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:02Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:02 crc kubenswrapper[4689]: W1210 12:16:02.076460 4689 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3713b4f8_2ee3_4078_859a_dca17076f9a6.slice/crio-e39ece10c0ad20c04164b0ee16f8563925da7f1ce06454841b9100e241ac0f5a WatchSource:0}: Error finding container e39ece10c0ad20c04164b0ee16f8563925da7f1ce06454841b9100e241ac0f5a: Status 404 returned error can't find the container with id e39ece10c0ad20c04164b0ee16f8563925da7f1ce06454841b9100e241ac0f5a Dec 10 12:16:02 crc kubenswrapper[4689]: I1210 12:16:02.085629 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:16:02 crc kubenswrapper[4689]: I1210 12:16:02.085668 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:16:02 crc kubenswrapper[4689]: I1210 12:16:02.085678 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:16:02 crc kubenswrapper[4689]: I1210 12:16:02.085694 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:16:02 crc kubenswrapper[4689]: I1210 12:16:02.085708 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:02Z","lastTransitionTime":"2025-12-10T12:16:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 12:16:02 crc kubenswrapper[4689]: I1210 12:16:02.087276 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-db6zk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a41ebdcd-910f-4669-992d-296e1a92162b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:01Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:01Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68btn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68btn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T12:16:01Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-db6zk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:02Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:02 crc kubenswrapper[4689]: 
I1210 12:16:02.106586 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-k7kbl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bf732b59-88ab-4673-9d0c-e479b9138b30\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:01Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6jq2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6jq2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\
"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6jq2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6jq2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6jq2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6jq2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restar
tCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6jq2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T12:16:01Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-k7kbl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:02Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:02 crc kubenswrapper[4689]: I1210 12:16:02.115141 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-d5s24" Dec 10 12:16:02 crc kubenswrapper[4689]: W1210 12:16:02.127749 4689 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb434fe8e_c4c2_4979_a9b6_8561523c2d9d.slice/crio-7af4fd28dd28c02132a7dad257583be4949813113eee241d6cd3e57007ab8ee7 WatchSource:0}: Error finding container 7af4fd28dd28c02132a7dad257583be4949813113eee241d6cd3e57007ab8ee7: Status 404 returned error can't find the container with id 7af4fd28dd28c02132a7dad257583be4949813113eee241d6cd3e57007ab8ee7 Dec 10 12:16:02 crc kubenswrapper[4689]: I1210 12:16:02.136790 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-d5s24" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b434fe8e-c4c2-4979-a9b6-8561523c2d9d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:01Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgcnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgcnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgcnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-hgcnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgcnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgcnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgcnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgcnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgcnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T12:16:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-d5s24\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:02Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:02 crc kubenswrapper[4689]: I1210 12:16:02.151461 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:57Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:02Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:02 crc kubenswrapper[4689]: I1210 12:16:02.165620 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06f8726511d5ea564b457eeb6de5b75275d78ee946595d8fbe37e575f3a1b4ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://879d2e122b3f677f755d79b93cd0abed60360a7b58bd685fbc229157c3358bbf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:02Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:02 crc kubenswrapper[4689]: I1210 12:16:02.176871 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-77s7q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9f25cd73-d88f-4d52-93b6-483589dc4ac4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:01Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:01Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kwnnt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T12:16:01Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-77s7q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:02Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:02 crc kubenswrapper[4689]: I1210 12:16:02.188202 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81ec49dd709fb88fdd4a831be61a82471be37ad10b0dca1cb0e34468ef5e0e3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:16:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:02Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:02 crc kubenswrapper[4689]: I1210 12:16:02.188528 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:16:02 crc kubenswrapper[4689]: I1210 12:16:02.188571 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:16:02 crc kubenswrapper[4689]: I1210 12:16:02.188584 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:16:02 crc kubenswrapper[4689]: I1210 12:16:02.188604 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:16:02 crc kubenswrapper[4689]: I1210 12:16:02.188619 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:02Z","lastTransitionTime":"2025-12-10T12:16:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 12:16:02 crc kubenswrapper[4689]: I1210 12:16:02.201360 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef7248d1db79267842514a13623e9e08f8e412e4af33a70b4fceafe7a9e43392\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:02Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:02 crc kubenswrapper[4689]: I1210 12:16:02.212516 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:02Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:02 crc kubenswrapper[4689]: I1210 12:16:02.224022 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-r6wmt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3713b4f8-2ee3-4078-859a-dca17076f9a6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lmrn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T12:16:01Z\\\"}}\" for pod \"openshift-multus\"/\"multus-r6wmt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:02Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:02 crc kubenswrapper[4689]: I1210 12:16:02.243160 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-d5s24" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b434fe8e-c4c2-4979-a9b6-8561523c2d9d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:01Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgcnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgcnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\
\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgcnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgcnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgcnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgcnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgcnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgcnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgcnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T12:16:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-d5s24\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:02Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:02 crc kubenswrapper[4689]: I1210 12:16:02.257124 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"948e2421-6bdf-45d9-b484-3d7cfcbeff5f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:32Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73fb5dc9c58a2a2027866853aecf404bcc2babdb9e5bbd3cf0efcb16f38ba351\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e167a00c3a4e40154ca58094cba41a896104fd81044158589ed4d7a06dba9e7d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://394f62bb6c4fa6d9b63f8404087ff96594e2e38240c6c7886ad71fbb7863681f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d3818f3ff6d23a4b5f19b82c3ce7c6a9a0275f981966b78d0f1d5977acfc5da\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cc1b41d65298f8986fa477bbe4462be91f61b46be4420fac49fe9b1cff91d4d3\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-10T12:15:57Z\\\",\\\"message\\\":\\\"lling back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1210 12:15:46.775770 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1210 12:15:46.777515 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-772356655/tls.crt::/tmp/serving-cert-772356655/tls.key\\\\\\\"\\\\nI1210 12:15:55.300683 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1210 12:15:55.311834 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1210 12:15:55.311870 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1210 12:15:55.311907 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1210 12:15:55.311937 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1210 12:15:55.334823 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1210 12:15:55.334877 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1210 12:15:55.334889 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1210 12:15:55.334913 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1210 12:15:55.334923 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1210 12:15:55.334932 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1210 12:15:55.334948 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1210 12:15:55.335346 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1210 12:15:55.337241 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-10T12:15:36Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://63c6b92fa349dafbaaa4e2700676f8c1c53627fc703dba25853c7457bf7eb439\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:34Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9e2628add435f4ba7d9b92d1f4e9cf619c37e8dc954d6fce13ef72581ae775e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9e2628add435f4ba7d9b92d1f4e9cf619c37e8dc954d6fce13ef72581ae775e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T12:15:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T12:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T12:15:32Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:02Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:02 crc kubenswrapper[4689]: I1210 12:16:02.269546 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"18bf8be2-3022-4c8b-a062-e12ddac113a8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://44524b486bd4c6b69eb1219605d0545eaa129605484dfbca87ca741ce71c2991\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://604acbb51bb890eb420955051c89aa5a8f2005ce63f1fe39c09c35dc3c8b4470\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d4addfb661bd4ba43817e6e5db29f30589839e5afe46a8f271e43c98920efdc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a68967b286ccc9c486e05b03201574bfa811ee68b0cbcbb2b0b1a379f5c1bab\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T12:15:32Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:02Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:02 crc kubenswrapper[4689]: I1210 12:16:02.281441 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:57Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:02Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:02 crc kubenswrapper[4689]: I1210 12:16:02.290468 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:16:02 crc kubenswrapper[4689]: I1210 12:16:02.290510 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:16:02 crc kubenswrapper[4689]: I1210 12:16:02.290526 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:16:02 crc kubenswrapper[4689]: I1210 12:16:02.290541 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:16:02 crc kubenswrapper[4689]: I1210 12:16:02.290553 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:02Z","lastTransitionTime":"2025-12-10T12:16:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 12:16:02 crc kubenswrapper[4689]: I1210 12:16:02.291928 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-db6zk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a41ebdcd-910f-4669-992d-296e1a92162b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://21eff13bf08e5631078b18a6e63eea07f7ae0881f4d340b92273d47e443fc71c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:16:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68btn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ffc5e2e1cb81aa530273fc5c44b4eb25b760dfffa203e312280e7a1c0150505\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:16:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68btn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T12:16:01Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-db6zk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:02Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:02 crc kubenswrapper[4689]: I1210 12:16:02.306748 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-k7kbl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bf732b59-88ab-4673-9d0c-e479b9138b30\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:01Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6jq2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6jq2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec
8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6jq2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6jq2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6jq2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6jq2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"
quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6jq2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T12:16:01Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-k7kbl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:02Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:02 crc kubenswrapper[4689]: I1210 12:16:02.323638 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06f8726511d5ea564b457eeb6de5b75275d78ee946595d8fbe37e575f3a1b4ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://879d2e122b3f677f755d79b93cd0abed60360a7b58bd685fbc229157c3358bbf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"re
startCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:02Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:02 crc kubenswrapper[4689]: I1210 12:16:02.337415 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:57Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:02Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:02 crc kubenswrapper[4689]: I1210 12:16:02.349812 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81ec49dd709fb88fdd4a831be61a82471be37ad10b0dca1cb0e34468ef5e0e3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:16:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:02Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:02 crc kubenswrapper[4689]: I1210 12:16:02.362271 4689 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Dec 10 
12:16:02 crc kubenswrapper[4689]: I1210 12:16:02.362564 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-77s7q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9f25cd73-d88f-4d52-93b6-483589dc4ac4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8ffa3e3c20704e59130d99f82bf7c73982dadfcd06b8b666265324aeb9115ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:16:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kwnnt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T12:16:01Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-77s7q\": Patch \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-dns/pods/node-resolver-77s7q/status\": read tcp 38.102.83.163:49418->38.102.83.163:6443: use of closed network connection" Dec 10 12:16:02 crc kubenswrapper[4689]: W1210 12:16:02.362607 4689 reflector.go:484] object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl": watch of *v1.Secret ended with: very short watch: object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl": Unexpected watch close - watch lasted less than a second and no items received Dec 10 12:16:02 crc kubenswrapper[4689]: W1210 12:16:02.362649 4689 reflector.go:484] object-"openshift-dns"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-dns"/"kube-root-ca.crt": Unexpected watch close - watch lasted less than a second and no items received Dec 10 12:16:02 crc kubenswrapper[4689]: W1210 12:16:02.362666 4689 reflector.go:484] object-"openshift-machine-config-operator"/"proxy-tls": watch of *v1.Secret ended with: very short watch: object-"openshift-machine-config-operator"/"proxy-tls": Unexpected watch close - watch lasted less than a second and no items received Dec 10 12:16:02 crc 
kubenswrapper[4689]: W1210 12:16:02.362691 4689 reflector.go:484] object-"openshift-multus"/"cni-copy-resources": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-multus"/"cni-copy-resources": Unexpected watch close - watch lasted less than a second and no items received Dec 10 12:16:02 crc kubenswrapper[4689]: W1210 12:16:02.362657 4689 reflector.go:484] object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert": watch of *v1.Secret ended with: very short watch: object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert": Unexpected watch close - watch lasted less than a second and no items received Dec 10 12:16:02 crc kubenswrapper[4689]: W1210 12:16:02.362727 4689 reflector.go:484] object-"openshift-multus"/"openshift-service-ca.crt": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-multus"/"openshift-service-ca.crt": Unexpected watch close - watch lasted less than a second and no items received Dec 10 12:16:02 crc kubenswrapper[4689]: W1210 12:16:02.362751 4689 reflector.go:484] object-"openshift-machine-config-operator"/"openshift-service-ca.crt": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-machine-config-operator"/"openshift-service-ca.crt": Unexpected watch close - watch lasted less than a second and no items received Dec 10 12:16:02 crc kubenswrapper[4689]: W1210 12:16:02.362779 4689 reflector.go:484] object-"openshift-multus"/"default-dockercfg-2q5b6": watch of *v1.Secret ended with: very short watch: object-"openshift-multus"/"default-dockercfg-2q5b6": Unexpected watch close - watch lasted less than a second and no items received Dec 10 12:16:02 crc kubenswrapper[4689]: W1210 12:16:02.362799 4689 reflector.go:484] object-"openshift-ovn-kubernetes"/"ovnkube-script-lib": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-ovn-kubernetes"/"ovnkube-script-lib": Unexpected watch close - watch lasted less than a second and no items received Dec 10 12:16:02 crc kubenswrapper[4689]: W1210 12:16:02.362812 4689 reflector.go:484] object-"openshift-ovn-kubernetes"/"env-overrides": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-ovn-kubernetes"/"env-overrides": Unexpected watch close - watch lasted less than a second and no items received Dec 10 12:16:02 crc kubenswrapper[4689]: W1210 12:16:02.362821 4689 reflector.go:484] object-"openshift-multus"/"multus-daemon-config": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-multus"/"multus-daemon-config": Unexpected watch close - watch lasted less than a second and no items received Dec 10 12:16:02 crc kubenswrapper[4689]: W1210 12:16:02.362846 4689 reflector.go:484] object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq": watch of *v1.Secret ended with: very short watch: object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq": Unexpected watch close - watch lasted less than a second and no items received Dec 10 12:16:02 crc kubenswrapper[4689]: W1210 12:16:02.363119 4689 reflector.go:484] object-"openshift-machine-config-operator"/"kube-rbac-proxy": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-machine-config-operator"/"kube-rbac-proxy": Unexpected watch close - watch lasted less than a second and no items received Dec 10 12:16:02 crc kubenswrapper[4689]: W1210 12:16:02.363133 4689 reflector.go:484] object-"openshift-machine-config-operator"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: very short watch: 
object-"openshift-machine-config-operator"/"kube-root-ca.crt": Unexpected watch close - watch lasted less than a second and no items received Dec 10 12:16:02 crc kubenswrapper[4689]: W1210 12:16:02.363155 4689 reflector.go:484] object-"openshift-multus"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-multus"/"kube-root-ca.crt": Unexpected watch close - watch lasted less than a second and no items received Dec 10 12:16:02 crc kubenswrapper[4689]: W1210 12:16:02.363165 4689 reflector.go:484] object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt": Unexpected watch close - watch lasted less than a second and no items received Dec 10 12:16:02 crc kubenswrapper[4689]: W1210 12:16:02.363182 4689 reflector.go:484] object-"openshift-dns"/"openshift-service-ca.crt": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-dns"/"openshift-service-ca.crt": Unexpected watch close - watch lasted less than a second and no items received Dec 10 12:16:02 crc kubenswrapper[4689]: W1210 12:16:02.363268 4689 reflector.go:484] object-"openshift-ovn-kubernetes"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-ovn-kubernetes"/"kube-root-ca.crt": Unexpected watch close - watch lasted less than a second and no items received Dec 10 12:16:02 crc kubenswrapper[4689]: W1210 12:16:02.363161 4689 reflector.go:484] object-"openshift-dns"/"node-resolver-dockercfg-kz9s7": watch of *v1.Secret ended with: very short watch: object-"openshift-dns"/"node-resolver-dockercfg-kz9s7": Unexpected watch close - watch lasted less than a second and no items received Dec 10 12:16:02 crc kubenswrapper[4689]: W1210 12:16:02.363289 4689 reflector.go:484] object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz": watch of *v1.Secret ended with: very short watch: object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz": Unexpected watch close - watch lasted less than a second and no items received Dec 10 12:16:02 crc kubenswrapper[4689]: W1210 12:16:02.363304 4689 reflector.go:484] object-"openshift-ovn-kubernetes"/"ovnkube-config": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-ovn-kubernetes"/"ovnkube-config": Unexpected watch close - watch lasted less than a second and no items received Dec 10 12:16:02 crc kubenswrapper[4689]: I1210 12:16:02.393936 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:16:02 crc kubenswrapper[4689]: I1210 12:16:02.394006 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:16:02 crc kubenswrapper[4689]: I1210 12:16:02.394017 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:16:02 crc kubenswrapper[4689]: I1210 12:16:02.394055 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:16:02 crc kubenswrapper[4689]: I1210 12:16:02.394065 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:02Z","lastTransitionTime":"2025-12-10T12:16:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration 
file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 12:16:02 crc kubenswrapper[4689]: I1210 12:16:02.495760 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:16:02 crc kubenswrapper[4689]: I1210 12:16:02.495796 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:16:02 crc kubenswrapper[4689]: I1210 12:16:02.495806 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:16:02 crc kubenswrapper[4689]: I1210 12:16:02.495822 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:16:02 crc kubenswrapper[4689]: I1210 12:16:02.495835 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:02Z","lastTransitionTime":"2025-12-10T12:16:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 12:16:02 crc kubenswrapper[4689]: I1210 12:16:02.498005 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 10 12:16:02 crc kubenswrapper[4689]: E1210 12:16:02.498144 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 10 12:16:02 crc kubenswrapper[4689]: I1210 12:16:02.512506 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef7248d1db79267842514a13623e9e08f8e412e4af33a70b4fceafe7a9e43392\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:02Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:02 crc kubenswrapper[4689]: I1210 12:16:02.527459 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:02Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:02 crc kubenswrapper[4689]: I1210 12:16:02.542795 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-r6wmt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3713b4f8-2ee3-4078-859a-dca17076f9a6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lmrn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T12:16:01Z\\\"}}\" for pod \"openshift-multus\"/\"multus-r6wmt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:02Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:02 crc kubenswrapper[4689]: I1210 12:16:02.564827 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-d5s24" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b434fe8e-c4c2-4979-a9b6-8561523c2d9d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:01Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgcnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgcnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\
\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgcnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgcnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgcnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgcnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgcnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgcnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgcnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T12:16:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-d5s24\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:02Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:02 crc kubenswrapper[4689]: I1210 12:16:02.578440 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"948e2421-6bdf-45d9-b484-3d7cfcbeff5f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:32Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73fb5dc9c58a2a2027866853aecf404bcc2babdb9e5bbd3cf0efcb16f38ba351\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e167a00c3a4e40154ca58094cba41a896104fd81044158589ed4d7a06dba9e7d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://394f62bb6c4fa6d9b63f8404087ff96594e2e38240c6c7886ad71fbb7863681f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d3818f3ff6d23a4b5f19b82c3ce7c6a9a0275f981966b78d0f1d5977acfc5da\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cc1b41d65298f8986fa477bbe4462be91f61b46be4420fac49fe9b1cff91d4d3\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-10T12:15:57Z\\\",\\\"message\\\":\\\"lling back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1210 12:15:46.775770 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1210 12:15:46.777515 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-772356655/tls.crt::/tmp/serving-cert-772356655/tls.key\\\\\\\"\\\\nI1210 12:15:55.300683 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1210 12:15:55.311834 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1210 12:15:55.311870 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1210 12:15:55.311907 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1210 12:15:55.311937 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1210 12:15:55.334823 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1210 12:15:55.334877 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1210 12:15:55.334889 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1210 12:15:55.334913 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1210 12:15:55.334923 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1210 12:15:55.334932 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1210 12:15:55.334948 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1210 12:15:55.335346 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1210 12:15:55.337241 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-10T12:15:36Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://63c6b92fa349dafbaaa4e2700676f8c1c53627fc703dba25853c7457bf7eb439\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:34Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9e2628add435f4ba7d9b92d1f4e9cf619c37e8dc954d6fce13ef72581ae775e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9e2628add435f4ba7d9b92d1f4e9cf619c37e8dc954d6fce13ef72581ae775e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T12:15:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T12:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T12:15:32Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:02Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:02 crc kubenswrapper[4689]: I1210 12:16:02.592563 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"18bf8be2-3022-4c8b-a062-e12ddac113a8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://44524b486bd4c6b69eb1219605d0545eaa129605484dfbca87ca741ce71c2991\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://604acbb51bb890eb420955051c89aa5a8f2005ce63f1fe39c09c35dc3c8b4470\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d4addfb661bd4ba43817e6e5db29f30589839e5afe46a8f271e43c98920efdc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a68967b286ccc9c486e05b03201574bfa811ee68b0cbcbb2b0b1a379f5c1bab\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T12:15:32Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:02Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:02 crc kubenswrapper[4689]: I1210 12:16:02.598256 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:16:02 crc kubenswrapper[4689]: I1210 12:16:02.598342 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:16:02 crc kubenswrapper[4689]: I1210 12:16:02.598355 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:16:02 crc kubenswrapper[4689]: I1210 12:16:02.598370 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:16:02 crc kubenswrapper[4689]: I1210 12:16:02.598380 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:02Z","lastTransitionTime":"2025-12-10T12:16:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 12:16:02 crc kubenswrapper[4689]: I1210 12:16:02.605123 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:57Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:02Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:02 crc kubenswrapper[4689]: I1210 12:16:02.618652 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-db6zk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a41ebdcd-910f-4669-992d-296e1a92162b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://21eff13bf08e5631078b18a6e63eea07f7ae0881f4d340b92273d47e443fc71c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:16:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68btn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ffc5e2e1cb81aa530273fc5c44b4eb25b760dfffa203e312280e7a1c0150505\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:16:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68btn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T12:16:01Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-db6zk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:02Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:02 crc kubenswrapper[4689]: I1210 12:16:02.631672 4689 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-additional-cni-plugins-k7kbl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bf732b59-88ab-4673-9d0c-e479b9138b30\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:01Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6jq2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6jq2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\
\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6jq2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6jq2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6jq2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6jq2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting
\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6jq2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T12:16:01Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-k7kbl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:02Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:02 crc kubenswrapper[4689]: I1210 12:16:02.645650 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06f8726511d5ea564b457eeb6de5b75275d78ee946595d8fbe37e575f3a1b4ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://879d2e122b3f677f755d79b93cd0abed60360a7b58bd685fbc229157c3358bbf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mo
untPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:02Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:02 crc kubenswrapper[4689]: I1210 12:16:02.686909 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:57Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:02Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:02 crc kubenswrapper[4689]: I1210 12:16:02.700275 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:16:02 crc kubenswrapper[4689]: I1210 12:16:02.700326 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:16:02 crc kubenswrapper[4689]: I1210 12:16:02.700344 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:16:02 crc kubenswrapper[4689]: I1210 12:16:02.700365 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:16:02 crc kubenswrapper[4689]: I1210 12:16:02.700384 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:02Z","lastTransitionTime":"2025-12-10T12:16:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 12:16:02 crc kubenswrapper[4689]: I1210 12:16:02.723810 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81ec49dd709fb88fdd4a831be61a82471be37ad10b0dca1cb0e34468ef5e0e3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:16:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:02Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:02 crc kubenswrapper[4689]: I1210 12:16:02.769732 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-77s7q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9f25cd73-d88f-4d52-93b6-483589dc4ac4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8ffa3e3c20704e59130d99f82bf7c73982dadfcd06b8b666265324aeb9115ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:16:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kwnnt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T12:16:01Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-77s7q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:02Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:02 crc kubenswrapper[4689]: I1210 12:16:02.802923 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:16:02 crc kubenswrapper[4689]: I1210 12:16:02.803018 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:16:02 crc kubenswrapper[4689]: I1210 12:16:02.803030 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:16:02 crc kubenswrapper[4689]: I1210 12:16:02.803046 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:16:02 crc kubenswrapper[4689]: I1210 12:16:02.803057 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:02Z","lastTransitionTime":"2025-12-10T12:16:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 12:16:02 crc kubenswrapper[4689]: I1210 12:16:02.905674 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:16:02 crc kubenswrapper[4689]: I1210 12:16:02.905722 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:16:02 crc kubenswrapper[4689]: I1210 12:16:02.905738 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:16:02 crc kubenswrapper[4689]: I1210 12:16:02.905762 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:16:02 crc kubenswrapper[4689]: I1210 12:16:02.905778 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:02Z","lastTransitionTime":"2025-12-10T12:16:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 12:16:02 crc kubenswrapper[4689]: E1210 12:16:02.946432 4689 configmap.go:193] Couldn't get configMap openshift-multus/default-cni-sysctl-allowlist: failed to sync configmap cache: timed out waiting for the condition Dec 10 12:16:02 crc kubenswrapper[4689]: E1210 12:16:02.946513 4689 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/bf732b59-88ab-4673-9d0c-e479b9138b30-cni-sysctl-allowlist podName:bf732b59-88ab-4673-9d0c-e479b9138b30 nodeName:}" failed. No retries permitted until 2025-12-10 12:16:03.446494208 +0000 UTC m=+31.234575346 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cni-sysctl-allowlist" (UniqueName: "kubernetes.io/configmap/bf732b59-88ab-4673-9d0c-e479b9138b30-cni-sysctl-allowlist") pod "multus-additional-cni-plugins-k7kbl" (UID: "bf732b59-88ab-4673-9d0c-e479b9138b30") : failed to sync configmap cache: timed out waiting for the condition Dec 10 12:16:02 crc kubenswrapper[4689]: I1210 12:16:02.961933 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Dec 10 12:16:03 crc kubenswrapper[4689]: I1210 12:16:03.008157 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:16:03 crc kubenswrapper[4689]: I1210 12:16:03.008206 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:16:03 crc kubenswrapper[4689]: I1210 12:16:03.008216 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:16:03 crc kubenswrapper[4689]: I1210 12:16:03.008237 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:16:03 crc kubenswrapper[4689]: I1210 12:16:03.008246 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:03Z","lastTransitionTime":"2025-12-10T12:16:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 12:16:03 crc kubenswrapper[4689]: I1210 12:16:03.017777 4689 generic.go:334] "Generic (PLEG): container finished" podID="b434fe8e-c4c2-4979-a9b6-8561523c2d9d" containerID="b5b9a4f871b33ab8d7dbb8148e8a118858ae0d529510a8aebda0a3d9db97c137" exitCode=0 Dec 10 12:16:03 crc kubenswrapper[4689]: I1210 12:16:03.017844 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-d5s24" event={"ID":"b434fe8e-c4c2-4979-a9b6-8561523c2d9d","Type":"ContainerDied","Data":"b5b9a4f871b33ab8d7dbb8148e8a118858ae0d529510a8aebda0a3d9db97c137"} Dec 10 12:16:03 crc kubenswrapper[4689]: I1210 12:16:03.018048 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-d5s24" event={"ID":"b434fe8e-c4c2-4979-a9b6-8561523c2d9d","Type":"ContainerStarted","Data":"7af4fd28dd28c02132a7dad257583be4949813113eee241d6cd3e57007ab8ee7"} Dec 10 12:16:03 crc kubenswrapper[4689]: I1210 12:16:03.020127 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-r6wmt" event={"ID":"3713b4f8-2ee3-4078-859a-dca17076f9a6","Type":"ContainerStarted","Data":"41b7cefca05f1180d687337ba2a408d0db12aad394a7e9619e004758e5d79ca0"} Dec 10 12:16:03 crc kubenswrapper[4689]: I1210 12:16:03.020166 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-r6wmt" event={"ID":"3713b4f8-2ee3-4078-859a-dca17076f9a6","Type":"ContainerStarted","Data":"e39ece10c0ad20c04164b0ee16f8563925da7f1ce06454841b9100e241ac0f5a"} Dec 10 12:16:03 crc kubenswrapper[4689]: I1210 12:16:03.029510 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81ec49dd709fb88fdd4a831be61a82471be37ad10b0dca1cb0e34468ef5e0e3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:16:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:03Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:03 crc kubenswrapper[4689]: I1210 12:16:03.041248 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-77s7q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9f25cd73-d88f-4d52-93b6-483589dc4ac4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8ffa3e3c20704e59130d99f82bf7c73982dadfcd06b8b666265324aeb9115ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:16:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kwnnt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T12:16:01Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-77s7q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:03Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:03 crc kubenswrapper[4689]: I1210 12:16:03.056294 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-r6wmt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3713b4f8-2ee3-4078-859a-dca17076f9a6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lmrn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T12:16:01Z\\\"}}\" for pod \"openshift-multus\"/\"multus-r6wmt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": 
failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:03Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:03 crc kubenswrapper[4689]: I1210 12:16:03.068460 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef7248d1db79267842514a13623e9e08f8e412e4af33a70b4fceafe7a9e43392\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:03Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:03 crc kubenswrapper[4689]: I1210 12:16:03.079782 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:03Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:03 crc kubenswrapper[4689]: I1210 12:16:03.088721 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-db6zk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a41ebdcd-910f-4669-992d-296e1a92162b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://21eff13bf08e5631078b18a6e63eea07f7ae0881f4d340b92273d47e443fc71c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:16:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68btn\\\",\\\"readOnly\\\":true,\\\"rec
ursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ffc5e2e1cb81aa530273fc5c44b4eb25b760dfffa203e312280e7a1c0150505\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:16:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68btn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T12:16:01Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-db6zk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:03Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:03 crc kubenswrapper[4689]: I1210 12:16:03.100897 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-k7kbl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bf732b59-88ab-4673-9d0c-e479b9138b30\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:01Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6jq2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6jq2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6jq2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"na
me\\\":\\\"kube-api-access-6jq2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6jq2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6jq2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6jq2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T12:16:01Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-k7kbl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:03Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:03 crc kubenswrapper[4689]: I1210 12:16:03.110197 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:16:03 crc kubenswrapper[4689]: I1210 12:16:03.110227 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:16:03 crc kubenswrapper[4689]: I1210 12:16:03.110235 4689 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:16:03 crc kubenswrapper[4689]: I1210 12:16:03.110248 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:16:03 crc kubenswrapper[4689]: I1210 12:16:03.110257 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:03Z","lastTransitionTime":"2025-12-10T12:16:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 12:16:03 crc kubenswrapper[4689]: I1210 12:16:03.118280 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-d5s24" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b434fe8e-c4c2-4979-a9b6-8561523c2d9d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgcnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgcnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgcnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-hgcnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgcnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgcnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgcnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgcnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5b9a4f871b33ab8d7dbb8148e8a118858ae0d529510a8aebda0a3d9db97c137\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b5b9a4f871b33ab8d7dbb8148e8a118858ae0d529510a8aebda0a3d9db97c137\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T12:16:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T12:16:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgcnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T12:16:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-d5s24\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:03Z 
is after 2025-08-24T17:21:41Z" Dec 10 12:16:03 crc kubenswrapper[4689]: I1210 12:16:03.145574 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"948e2421-6bdf-45d9-b484-3d7cfcbeff5f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:32Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:32Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73fb5dc9c58a2a2027866853aecf404bcc2babdb9e5bbd3cf0efcb16f38ba351\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e167a00c3a4e40154ca58094cba41a896104fd81044158589ed4d7a06dba9e7d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://394f62bb6c4fa6d9b63f8404087ff96594e2e38240c6c7886ad71fbb7863681f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\
\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d3818f3ff6d23a4b5f19b82c3ce7c6a9a0275f981966b78d0f1d5977acfc5da\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cc1b41d65298f8986fa477bbe4462be91f61b46be4420fac49fe9b1cff91d4d3\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-10T12:15:57Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1210 12:15:46.775770 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1210 12:15:46.777515 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-772356655/tls.crt::/tmp/serving-cert-772356655/tls.key\\\\\\\"\\\\nI1210 12:15:55.300683 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1210 12:15:55.311834 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1210 12:15:55.311870 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1210 12:15:55.311907 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1210 12:15:55.311937 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1210 12:15:55.334823 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1210 12:15:55.334877 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1210 12:15:55.334889 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1210 12:15:55.334913 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1210 12:15:55.334923 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1210 12:15:55.334932 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1210 12:15:55.334948 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1210 12:15:55.335346 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1210 12:15:55.337241 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-10T12:15:36Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://63c6b92fa349dafbaaa4e2700676f8c1c53627fc703dba25853c7457bf7eb439\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:34Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9e2628add435f4ba7d9b92d1f4e9cf619c37e8dc954d6fce13ef72581ae775e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9e2628add435f4ba7d9b92d1f4e9cf619c37e8dc954d6fce13ef72581ae775e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T12:15:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T12:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T12:15:32Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:03Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:03 crc kubenswrapper[4689]: I1210 12:16:03.185710 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"18bf8be2-3022-4c8b-a062-e12ddac113a8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://44524b486bd4c6b69eb1219605d0545eaa129605484dfbca87ca741ce71c2991\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://604acbb51bb890eb420955051c89aa5a8f2005ce63f1fe39c09c35dc3c8b4470\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d4addfb661bd4ba43817e6e5db29f30589839e5afe46a8f271e43c98920efdc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a68967b286ccc9c486e05b03201574bfa811ee68b0cbcbb2b0b1a379f5c1bab\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T12:15:32Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:03Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:03 crc kubenswrapper[4689]: I1210 12:16:03.197925 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Dec 10 12:16:03 crc kubenswrapper[4689]: I1210 12:16:03.212633 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:16:03 crc kubenswrapper[4689]: I1210 12:16:03.212676 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:16:03 crc kubenswrapper[4689]: I1210 12:16:03.212691 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:16:03 crc kubenswrapper[4689]: I1210 12:16:03.212710 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:16:03 crc kubenswrapper[4689]: I1210 12:16:03.212729 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:03Z","lastTransitionTime":"2025-12-10T12:16:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 12:16:03 crc kubenswrapper[4689]: I1210 12:16:03.237836 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Dec 10 12:16:03 crc kubenswrapper[4689]: I1210 12:16:03.267574 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:57Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:03Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:03 crc kubenswrapper[4689]: I1210 12:16:03.278335 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Dec 10 12:16:03 crc kubenswrapper[4689]: I1210 12:16:03.314776 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:16:03 crc kubenswrapper[4689]: I1210 12:16:03.315082 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:16:03 crc kubenswrapper[4689]: I1210 12:16:03.315092 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:16:03 crc kubenswrapper[4689]: I1210 12:16:03.315106 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:16:03 crc kubenswrapper[4689]: I1210 12:16:03.315115 4689 setters.go:603] "Node 
became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:03Z","lastTransitionTime":"2025-12-10T12:16:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 12:16:03 crc kubenswrapper[4689]: I1210 12:16:03.318320 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Dec 10 12:16:03 crc kubenswrapper[4689]: I1210 12:16:03.357663 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Dec 10 12:16:03 crc kubenswrapper[4689]: I1210 12:16:03.380411 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06f8726511d5ea564b457eeb6de5b75275d78ee946595d8fbe37e575f3a1b4ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://879d2e122b3f677f755d79b93cd0abed60360a7b58bd685fbc229157c3358bbf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"n
ame\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:03Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:03 crc kubenswrapper[4689]: I1210 12:16:03.397916 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Dec 10 12:16:03 crc kubenswrapper[4689]: I1210 12:16:03.399244 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Dec 10 12:16:03 crc kubenswrapper[4689]: I1210 12:16:03.417605 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Dec 10 12:16:03 crc kubenswrapper[4689]: I1210 12:16:03.417849 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:16:03 crc kubenswrapper[4689]: I1210 12:16:03.417881 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:16:03 crc kubenswrapper[4689]: I1210 12:16:03.417889 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:16:03 crc kubenswrapper[4689]: I1210 12:16:03.417903 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:16:03 crc kubenswrapper[4689]: I1210 12:16:03.417912 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:03Z","lastTransitionTime":"2025-12-10T12:16:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 12:16:03 crc kubenswrapper[4689]: I1210 12:16:03.457528 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Dec 10 12:16:03 crc kubenswrapper[4689]: I1210 12:16:03.458604 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/bf732b59-88ab-4673-9d0c-e479b9138b30-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-k7kbl\" (UID: \"bf732b59-88ab-4673-9d0c-e479b9138b30\") " pod="openshift-multus/multus-additional-cni-plugins-k7kbl" Dec 10 12:16:03 crc kubenswrapper[4689]: I1210 12:16:03.459111 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/bf732b59-88ab-4673-9d0c-e479b9138b30-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-k7kbl\" (UID: \"bf732b59-88ab-4673-9d0c-e479b9138b30\") " pod="openshift-multus/multus-additional-cni-plugins-k7kbl" Dec 10 12:16:03 crc kubenswrapper[4689]: I1210 12:16:03.477431 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Dec 10 12:16:03 crc kubenswrapper[4689]: I1210 12:16:03.497720 4689 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 10 12:16:03 crc kubenswrapper[4689]: E1210 12:16:03.497826 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 10 12:16:03 crc kubenswrapper[4689]: I1210 12:16:03.497720 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 10 12:16:03 crc kubenswrapper[4689]: E1210 12:16:03.497882 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 10 12:16:03 crc kubenswrapper[4689]: I1210 12:16:03.498436 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Dec 10 12:16:03 crc kubenswrapper[4689]: I1210 12:16:03.520403 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:16:03 crc kubenswrapper[4689]: I1210 12:16:03.520466 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:16:03 crc kubenswrapper[4689]: I1210 12:16:03.520486 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:16:03 crc kubenswrapper[4689]: I1210 12:16:03.520512 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:16:03 crc kubenswrapper[4689]: I1210 12:16:03.520530 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:03Z","lastTransitionTime":"2025-12-10T12:16:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 12:16:03 crc kubenswrapper[4689]: I1210 12:16:03.529561 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:57Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:03Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:03 crc kubenswrapper[4689]: I1210 12:16:03.566916 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-k7kbl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bf732b59-88ab-4673-9d0c-e479b9138b30\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:01Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6jq2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6jq2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6jq2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plu
gin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6jq2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6jq2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6jq2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6jq2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T12:16:01Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-k7kbl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-10T12:16:03Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:03 crc kubenswrapper[4689]: I1210 12:16:03.597421 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Dec 10 12:16:03 crc kubenswrapper[4689]: I1210 12:16:03.601862 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-k7kbl" Dec 10 12:16:03 crc kubenswrapper[4689]: W1210 12:16:03.611561 4689 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbf732b59_88ab_4673_9d0c_e479b9138b30.slice/crio-76623736ad4e46baca7414eb807868b3a1049571c55da297d185ea3870e21e8a WatchSource:0}: Error finding container 76623736ad4e46baca7414eb807868b3a1049571c55da297d185ea3870e21e8a: Status 404 returned error can't find the container with id 76623736ad4e46baca7414eb807868b3a1049571c55da297d185ea3870e21e8a Dec 10 12:16:03 crc kubenswrapper[4689]: I1210 12:16:03.622129 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:16:03 crc kubenswrapper[4689]: I1210 12:16:03.622182 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:16:03 crc kubenswrapper[4689]: I1210 12:16:03.622198 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:16:03 crc kubenswrapper[4689]: I1210 12:16:03.622216 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:16:03 crc kubenswrapper[4689]: I1210 12:16:03.622230 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:03Z","lastTransitionTime":"2025-12-10T12:16:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 12:16:03 crc kubenswrapper[4689]: I1210 12:16:03.637941 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-d5s24" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b434fe8e-c4c2-4979-a9b6-8561523c2d9d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgcnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgcnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":fa
lse,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgcnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgcnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgcnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgcnl\\\",\\\"readOnly\\
\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgcnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgcnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5b9a4f871b33ab8d7dbb8148e8a118858ae0d529510a8aebd
a0a3d9db97c137\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b5b9a4f871b33ab8d7dbb8148e8a118858ae0d529510a8aebda0a3d9db97c137\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T12:16:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T12:16:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgcnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T12:16:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-d5s24\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:03Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:03 crc kubenswrapper[4689]: I1210 12:16:03.638533 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Dec 10 12:16:03 crc kubenswrapper[4689]: I1210 12:16:03.657375 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Dec 10 12:16:03 crc kubenswrapper[4689]: I1210 12:16:03.676945 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Dec 10 12:16:03 crc kubenswrapper[4689]: I1210 12:16:03.716905 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Dec 10 12:16:03 crc kubenswrapper[4689]: I1210 12:16:03.724759 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:16:03 crc kubenswrapper[4689]: I1210 12:16:03.724803 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:16:03 crc kubenswrapper[4689]: I1210 12:16:03.724814 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:16:03 crc kubenswrapper[4689]: I1210 12:16:03.724834 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:16:03 crc kubenswrapper[4689]: I1210 12:16:03.724848 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:03Z","lastTransitionTime":"2025-12-10T12:16:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 12:16:03 crc kubenswrapper[4689]: I1210 12:16:03.737711 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Dec 10 12:16:03 crc kubenswrapper[4689]: I1210 12:16:03.768003 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"948e2421-6bdf-45d9-b484-3d7cfcbeff5f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:32Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:32Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73fb5dc9c58a2a2027866853aecf404bcc2babdb9e5bbd3cf0efcb16f38ba351\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e167a00c3a4e40154ca58094cba41a896104fd81044158589ed4d7a06dba9e7d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://394f62bb6c4fa6d9b63f8404087ff96594e2e38240c6c7886ad71fbb7863681f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9
f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d3818f3ff6d23a4b5f19b82c3ce7c6a9a0275f981966b78d0f1d5977acfc5da\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cc1b41d65298f8986fa477bbe4462be91f61b46be4420fac49fe9b1cff91d4d3\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-10T12:15:57Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1210 12:15:46.775770 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1210 12:15:46.777515 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-772356655/tls.crt::/tmp/serving-cert-772356655/tls.key\\\\\\\"\\\\nI1210 12:15:55.300683 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1210 12:15:55.311834 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1210 12:15:55.311870 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1210 12:15:55.311907 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1210 12:15:55.311937 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1210 12:15:55.334823 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1210 12:15:55.334877 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1210 12:15:55.334889 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1210 12:15:55.334913 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1210 12:15:55.334923 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1210 12:15:55.334932 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1210 12:15:55.334948 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1210 12:15:55.335346 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1210 12:15:55.337241 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-10T12:15:36Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://63c6b92fa349dafbaaa4e2700676f8c1c53627fc703dba25853c7457bf7eb439\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:34Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9e2628add435f4ba7d9b92d1f4e9cf619c37e8dc954d6fce13ef72581ae775e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9e2628add435f4ba7d9b92d1f4e9cf619c37e8dc954d6fce13ef72581ae775e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T12:15:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T12:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T12:15:32Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:03Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:03 crc kubenswrapper[4689]: I1210 12:16:03.777730 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Dec 10 12:16:03 crc kubenswrapper[4689]: I1210 12:16:03.797758 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Dec 10 12:16:03 crc kubenswrapper[4689]: I1210 12:16:03.827534 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:16:03 crc kubenswrapper[4689]: I1210 12:16:03.827579 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:16:03 crc kubenswrapper[4689]: I1210 12:16:03.827591 4689 kubelet_node_status.go:724] "Recording event message for 
node" node="crc" event="NodeHasSufficientPID" Dec 10 12:16:03 crc kubenswrapper[4689]: I1210 12:16:03.827608 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:16:03 crc kubenswrapper[4689]: I1210 12:16:03.827624 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:03Z","lastTransitionTime":"2025-12-10T12:16:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 12:16:03 crc kubenswrapper[4689]: I1210 12:16:03.850747 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"18bf8be2-3022-4c8b-a062-e12ddac113a8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://44524b486bd4c6b69eb1219605d0545eaa129605484dfbca87ca741ce71c2991\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://604acbb51bb890eb420955051c89aa5a8f2005ce63f1fe39c09c35dc3c8b4470\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d4addfb661bd4ba43817e6e5db29f30589839e5afe46a8f271e43c98920efdc\\\",\\\"image\\
\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a68967b286ccc9c486e05b03201574bfa811ee68b0cbcbb2b0b1a379f5c1bab\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T12:15:32Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:03Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:03 crc kubenswrapper[4689]: I1210 12:16:03.887936 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:03Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:03 crc kubenswrapper[4689]: I1210 12:16:03.925852 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-db6zk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a41ebdcd-910f-4669-992d-296e1a92162b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://21eff13bf08e5631078b18a6e63eea07f7ae0881f4d340b92273d47e443fc71c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:16:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68btn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":
\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ffc5e2e1cb81aa530273fc5c44b4eb25b760dfffa203e312280e7a1c0150505\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:16:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68btn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T12:16:01Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-db6zk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:03Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:03 crc kubenswrapper[4689]: I1210 12:16:03.929623 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:16:03 crc kubenswrapper[4689]: I1210 12:16:03.929673 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:16:03 crc kubenswrapper[4689]: I1210 12:16:03.929684 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:16:03 crc kubenswrapper[4689]: I1210 12:16:03.929699 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:16:03 crc kubenswrapper[4689]: I1210 12:16:03.929710 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:03Z","lastTransitionTime":"2025-12-10T12:16:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 12:16:03 crc kubenswrapper[4689]: I1210 12:16:03.937518 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Dec 10 12:16:03 crc kubenswrapper[4689]: I1210 12:16:03.977671 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Dec 10 12:16:04 crc kubenswrapper[4689]: I1210 12:16:04.009129 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06f8726511d5ea564b457eeb6de5b75275d78ee946595d8fbe37e575f3a1b4ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://879d2e122b3f677f755d79b93cd0abed60360a7b58bd685fbc229157c3358bbf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed 
to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:04Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:04 crc kubenswrapper[4689]: I1210 12:16:04.027612 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-d5s24" event={"ID":"b434fe8e-c4c2-4979-a9b6-8561523c2d9d","Type":"ContainerStarted","Data":"6523d85b2b13f0025f328e66fc54d988a262d66575656c50d792bebc6a422301"} Dec 10 12:16:04 crc kubenswrapper[4689]: I1210 12:16:04.027652 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-d5s24" event={"ID":"b434fe8e-c4c2-4979-a9b6-8561523c2d9d","Type":"ContainerStarted","Data":"4faf578e1c2dcab798cd856befbd2dfc061f7e003ead2304d437041d5ee1dd07"} Dec 10 12:16:04 crc kubenswrapper[4689]: I1210 12:16:04.027663 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-d5s24" event={"ID":"b434fe8e-c4c2-4979-a9b6-8561523c2d9d","Type":"ContainerStarted","Data":"69f1e8960af0a51f61c14bd54dc3d3be7189911242699061b6a5e265bf23da21"} Dec 10 12:16:04 crc kubenswrapper[4689]: I1210 12:16:04.027672 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-d5s24" event={"ID":"b434fe8e-c4c2-4979-a9b6-8561523c2d9d","Type":"ContainerStarted","Data":"f4dd21d5fefe31117dd9f13896400256a11840488886f8455dc177c0389e942a"} Dec 10 12:16:04 crc kubenswrapper[4689]: I1210 12:16:04.027682 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-d5s24" event={"ID":"b434fe8e-c4c2-4979-a9b6-8561523c2d9d","Type":"ContainerStarted","Data":"eebe28cf16767ec011f1e571cdb5f6babf931f4444e92fccf2f1bb5aa5deeec9"} Dec 10 12:16:04 crc kubenswrapper[4689]: I1210 12:16:04.027698 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-d5s24" event={"ID":"b434fe8e-c4c2-4979-a9b6-8561523c2d9d","Type":"ContainerStarted","Data":"17d173f3388ef7471022163cbdf3e0a2454e42b1ed621b15512414504b2d9723"} Dec 10 12:16:04 crc kubenswrapper[4689]: I1210 12:16:04.031104 4689 generic.go:334] "Generic (PLEG): container finished" podID="bf732b59-88ab-4673-9d0c-e479b9138b30" containerID="b32c1925876e067185b141b3e107d582b5ae3b422965cded935fc67935907244" exitCode=0 Dec 10 12:16:04 crc kubenswrapper[4689]: I1210 12:16:04.031723 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-k7kbl" event={"ID":"bf732b59-88ab-4673-9d0c-e479b9138b30","Type":"ContainerDied","Data":"b32c1925876e067185b141b3e107d582b5ae3b422965cded935fc67935907244"} Dec 10 12:16:04 crc kubenswrapper[4689]: I1210 12:16:04.031755 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-k7kbl" event={"ID":"bf732b59-88ab-4673-9d0c-e479b9138b30","Type":"ContainerStarted","Data":"76623736ad4e46baca7414eb807868b3a1049571c55da297d185ea3870e21e8a"} Dec 10 12:16:04 crc kubenswrapper[4689]: I1210 12:16:04.033002 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:16:04 crc kubenswrapper[4689]: I1210 12:16:04.033032 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:16:04 crc kubenswrapper[4689]: I1210 12:16:04.033041 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:16:04 crc kubenswrapper[4689]: I1210 12:16:04.033052 4689 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:16:04 crc kubenswrapper[4689]: I1210 12:16:04.033061 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:04Z","lastTransitionTime":"2025-12-10T12:16:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 12:16:04 crc kubenswrapper[4689]: I1210 12:16:04.045744 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:57Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:04Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:04 crc kubenswrapper[4689]: I1210 12:16:04.085278 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81ec49dd709fb88fdd4a831be61a82471be37ad10b0dca1cb0e34468ef5e0e3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:16:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:04Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:04 crc kubenswrapper[4689]: I1210 12:16:04.124889 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-77s7q" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9f25cd73-d88f-4d52-93b6-483589dc4ac4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8ffa3e3c20704e59130d99f82bf7c73982dadfcd06b8b666265324aeb9115ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:16:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kwnnt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T12:16:01Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-77s7q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:04Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:04 crc kubenswrapper[4689]: I1210 12:16:04.136311 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:16:04 crc kubenswrapper[4689]: I1210 12:16:04.136356 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:16:04 crc kubenswrapper[4689]: I1210 12:16:04.136370 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:16:04 crc kubenswrapper[4689]: I1210 12:16:04.136388 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:16:04 crc kubenswrapper[4689]: I1210 12:16:04.136402 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:04Z","lastTransitionTime":"2025-12-10T12:16:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 12:16:04 crc kubenswrapper[4689]: I1210 12:16:04.165245 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef7248d1db79267842514a13623e9e08f8e412e4af33a70b4fceafe7a9e43392\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:04Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:04 crc kubenswrapper[4689]: I1210 12:16:04.206990 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:04Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:04 crc kubenswrapper[4689]: I1210 12:16:04.239474 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:16:04 crc kubenswrapper[4689]: I1210 12:16:04.239518 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:16:04 crc kubenswrapper[4689]: I1210 12:16:04.239529 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:16:04 crc kubenswrapper[4689]: I1210 12:16:04.239546 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:16:04 crc kubenswrapper[4689]: I1210 12:16:04.239558 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:04Z","lastTransitionTime":"2025-12-10T12:16:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 12:16:04 crc kubenswrapper[4689]: I1210 12:16:04.244766 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-r6wmt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3713b4f8-2ee3-4078-859a-dca17076f9a6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://41b7cefca05f1180d687337ba2a408d0db12aad394a7e9619e004758e5d79ca0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:16:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lmrn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126
.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T12:16:01Z\\\"}}\" for pod \"openshift-multus\"/\"multus-r6wmt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:04Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:04 crc kubenswrapper[4689]: I1210 12:16:04.284713 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-77s7q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9f25cd73-d88f-4d52-93b6-483589dc4ac4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8ffa3e3c20704e59130d99f82bf7c73982dadfcd06b8b666265324aeb9115ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:16:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kwnnt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T12:16:01Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-77s7q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:04Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:04 crc kubenswrapper[4689]: I1210 12:16:04.325617 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81ec49dd709fb88fdd4a831be61a82471be37ad10b0dca1cb0e34468ef5e0e3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:16:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:04Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:04 crc kubenswrapper[4689]: I1210 12:16:04.342764 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:16:04 crc kubenswrapper[4689]: I1210 12:16:04.342828 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:16:04 crc kubenswrapper[4689]: I1210 12:16:04.342840 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:16:04 crc kubenswrapper[4689]: I1210 12:16:04.342856 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:16:04 crc kubenswrapper[4689]: I1210 12:16:04.342866 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:04Z","lastTransitionTime":"2025-12-10T12:16:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 12:16:04 crc kubenswrapper[4689]: I1210 12:16:04.368390 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef7248d1db79267842514a13623e9e08f8e412e4af33a70b4fceafe7a9e43392\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:04Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:04 crc kubenswrapper[4689]: I1210 12:16:04.383643 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/node-ca-dk9hq"] Dec 10 12:16:04 crc kubenswrapper[4689]: I1210 12:16:04.384057 4689 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-dk9hq" Dec 10 12:16:04 crc kubenswrapper[4689]: I1210 12:16:04.404167 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:04Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:04 crc kubenswrapper[4689]: I1210 12:16:04.417699 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Dec 10 12:16:04 crc kubenswrapper[4689]: I1210 12:16:04.436799 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Dec 10 12:16:04 crc kubenswrapper[4689]: I1210 12:16:04.445293 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:16:04 crc kubenswrapper[4689]: I1210 12:16:04.445315 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:16:04 crc kubenswrapper[4689]: I1210 12:16:04.445322 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:16:04 crc kubenswrapper[4689]: I1210 12:16:04.445335 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:16:04 crc 
kubenswrapper[4689]: I1210 12:16:04.445345 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:04Z","lastTransitionTime":"2025-12-10T12:16:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 12:16:04 crc kubenswrapper[4689]: I1210 12:16:04.457272 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Dec 10 12:16:04 crc kubenswrapper[4689]: I1210 12:16:04.468939 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/4aff4b91-a4af-46bd-93ba-a7c1356fc498-serviceca\") pod \"node-ca-dk9hq\" (UID: \"4aff4b91-a4af-46bd-93ba-a7c1356fc498\") " pod="openshift-image-registry/node-ca-dk9hq" Dec 10 12:16:04 crc kubenswrapper[4689]: I1210 12:16:04.469035 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/4aff4b91-a4af-46bd-93ba-a7c1356fc498-host\") pod \"node-ca-dk9hq\" (UID: \"4aff4b91-a4af-46bd-93ba-a7c1356fc498\") " pod="openshift-image-registry/node-ca-dk9hq" Dec 10 12:16:04 crc kubenswrapper[4689]: I1210 12:16:04.469069 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5c2rw\" (UniqueName: \"kubernetes.io/projected/4aff4b91-a4af-46bd-93ba-a7c1356fc498-kube-api-access-5c2rw\") pod \"node-ca-dk9hq\" (UID: \"4aff4b91-a4af-46bd-93ba-a7c1356fc498\") " pod="openshift-image-registry/node-ca-dk9hq" Dec 10 12:16:04 crc kubenswrapper[4689]: I1210 12:16:04.476959 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Dec 10 12:16:04 crc kubenswrapper[4689]: I1210 12:16:04.499884 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 10 12:16:04 crc kubenswrapper[4689]: E1210 12:16:04.500067 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 10 12:16:04 crc kubenswrapper[4689]: I1210 12:16:04.527361 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-r6wmt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3713b4f8-2ee3-4078-859a-dca17076f9a6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://41b7cefca05f1180d687337ba2a408d0db12aad394a7e9619e004758e5d79ca0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:16:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lmrn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\
\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T12:16:01Z\\\"}}\" for pod \"openshift-multus\"/\"multus-r6wmt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:04Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:04 crc kubenswrapper[4689]: I1210 12:16:04.548082 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:16:04 crc kubenswrapper[4689]: I1210 12:16:04.548111 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:16:04 crc kubenswrapper[4689]: I1210 12:16:04.548120 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:16:04 crc kubenswrapper[4689]: I1210 12:16:04.548132 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:16:04 crc kubenswrapper[4689]: I1210 12:16:04.548142 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:04Z","lastTransitionTime":"2025-12-10T12:16:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 12:16:04 crc kubenswrapper[4689]: I1210 12:16:04.570255 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/4aff4b91-a4af-46bd-93ba-a7c1356fc498-host\") pod \"node-ca-dk9hq\" (UID: \"4aff4b91-a4af-46bd-93ba-a7c1356fc498\") " pod="openshift-image-registry/node-ca-dk9hq" Dec 10 12:16:04 crc kubenswrapper[4689]: I1210 12:16:04.570295 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5c2rw\" (UniqueName: \"kubernetes.io/projected/4aff4b91-a4af-46bd-93ba-a7c1356fc498-kube-api-access-5c2rw\") pod \"node-ca-dk9hq\" (UID: \"4aff4b91-a4af-46bd-93ba-a7c1356fc498\") " pod="openshift-image-registry/node-ca-dk9hq" Dec 10 12:16:04 crc kubenswrapper[4689]: I1210 12:16:04.570312 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/4aff4b91-a4af-46bd-93ba-a7c1356fc498-serviceca\") pod \"node-ca-dk9hq\" (UID: \"4aff4b91-a4af-46bd-93ba-a7c1356fc498\") " pod="openshift-image-registry/node-ca-dk9hq" Dec 10 12:16:04 crc kubenswrapper[4689]: I1210 12:16:04.571075 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/4aff4b91-a4af-46bd-93ba-a7c1356fc498-serviceca\") pod \"node-ca-dk9hq\" (UID: \"4aff4b91-a4af-46bd-93ba-a7c1356fc498\") " pod="openshift-image-registry/node-ca-dk9hq" Dec 10 12:16:04 crc kubenswrapper[4689]: I1210 12:16:04.571121 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/4aff4b91-a4af-46bd-93ba-a7c1356fc498-host\") pod \"node-ca-dk9hq\" (UID: \"4aff4b91-a4af-46bd-93ba-a7c1356fc498\") " pod="openshift-image-registry/node-ca-dk9hq" Dec 
10 12:16:04 crc kubenswrapper[4689]: I1210 12:16:04.571430 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"948e2421-6bdf-45d9-b484-3d7cfcbeff5f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:32Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:32Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73fb5dc9c58a2a2027866853aecf404bcc2babdb9e5bbd3cf0efcb16f38ba351\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e167a00c3a4e40154ca58094cba41a896104fd81044158589ed4d7a06dba9e7d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://394f62bb6c4fa6d9b63f8404087ff96594e2e38240c6c7886ad71fbb7863681f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state
\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d3818f3ff6d23a4b5f19b82c3ce7c6a9a0275f981966b78d0f1d5977acfc5da\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cc1b41d65298f8986fa477bbe4462be91f61b46be4420fac49fe9b1cff91d4d3\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-10T12:15:57Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1210 12:15:46.775770 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1210 12:15:46.777515 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-772356655/tls.crt::/tmp/serving-cert-772356655/tls.key\\\\\\\"\\\\nI1210 12:15:55.300683 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1210 12:15:55.311834 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1210 12:15:55.311870 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1210 12:15:55.311907 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1210 12:15:55.311937 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1210 12:15:55.334823 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1210 12:15:55.334877 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1210 12:15:55.334889 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1210 12:15:55.334913 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1210 12:15:55.334923 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1210 12:15:55.334932 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1210 12:15:55.334948 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1210 12:15:55.335346 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1210 12:15:55.337241 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-10T12:15:36Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://63c6b92fa349dafbaaa4e2700676f8c1c53627fc703dba25853c7457bf7eb439\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:34Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9e2628add435f4ba7d9b92d1f4e9cf619c37e8dc954d6fce13ef72581ae775e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9e2628add435f4ba7d9b92d1f4e9cf619c37e8dc954d6fce13ef72581ae775e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T12:15:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T12:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T12:15:32Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:04Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:04 crc kubenswrapper[4689]: I1210 12:16:04.597194 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5c2rw\" (UniqueName: \"kubernetes.io/projected/4aff4b91-a4af-46bd-93ba-a7c1356fc498-kube-api-access-5c2rw\") pod \"node-ca-dk9hq\" (UID: \"4aff4b91-a4af-46bd-93ba-a7c1356fc498\") " pod="openshift-image-registry/node-ca-dk9hq" Dec 10 12:16:04 crc kubenswrapper[4689]: I1210 12:16:04.626630 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"18bf8be2-3022-4c8b-a062-e12ddac113a8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://44524b486bd4c6b69eb1219605d0545eaa129605484dfbca87ca741ce71c2991\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://604acbb51bb890eb420955051c89aa5a8f2005ce63f1fe39c09c35dc3c8b4470\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d4addfb661bd4ba43817e6e5db29f30589839e5afe46a8f271e43c98920efdc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a68967b286ccc9c486e05b03201574bfa811ee68b0cbcbb2b0b1a379f5c1bab\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T12:15:32Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:04Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:04 crc kubenswrapper[4689]: I1210 12:16:04.649820 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:16:04 crc kubenswrapper[4689]: I1210 12:16:04.649858 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:16:04 crc kubenswrapper[4689]: I1210 12:16:04.649869 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:16:04 crc kubenswrapper[4689]: I1210 12:16:04.649886 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:16:04 crc kubenswrapper[4689]: I1210 12:16:04.649899 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:04Z","lastTransitionTime":"2025-12-10T12:16:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 12:16:04 crc kubenswrapper[4689]: I1210 12:16:04.667771 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:57Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:04Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:04 crc kubenswrapper[4689]: I1210 12:16:04.696751 4689 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-dk9hq" Dec 10 12:16:04 crc kubenswrapper[4689]: I1210 12:16:04.707429 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-db6zk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a41ebdcd-910f-4669-992d-296e1a92162b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://21eff13bf08e5631078b18a6e63eea07f7ae0881f4d340b92273d47e443fc71c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:16:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68btn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ffc5e2e1cb81aa530273fc5c44b4eb25b760dfffa203e312280e7a1c0150505\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:16:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68btn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T12:16:01Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-db6zk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:04Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:04 crc kubenswrapper[4689]: W1210 12:16:04.715004 4689 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4aff4b91_a4af_46bd_93ba_a7c1356fc498.slice/crio-85f98a8d3bec2c6f39ee882b7cbcc82c7e22f7c358ee4cbee90b0df4255af9d4 WatchSource:0}: Error finding container 85f98a8d3bec2c6f39ee882b7cbcc82c7e22f7c358ee4cbee90b0df4255af9d4: Status 404 returned error can't find the container with id 85f98a8d3bec2c6f39ee882b7cbcc82c7e22f7c358ee4cbee90b0df4255af9d4 Dec 10 12:16:04 crc kubenswrapper[4689]: I1210 12:16:04.750663 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-k7kbl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bf732b59-88ab-4673-9d0c-e479b9138b30\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:01Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6jq2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b32c1925876e067185b141b3e107d582b5ae3b422965cded935fc67935907244\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b32c1925876e067185b141b3e107d582b5ae3b422965cded935fc67935907244\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T12:16:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T12:16:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6jq2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6jq2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reaso
n\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6jq2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6jq2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6jq2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6jq2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T12:16:01Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-k7kbl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:04Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:04 crc 
kubenswrapper[4689]: I1210 12:16:04.752371 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:16:04 crc kubenswrapper[4689]: I1210 12:16:04.752413 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:16:04 crc kubenswrapper[4689]: I1210 12:16:04.752427 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:16:04 crc kubenswrapper[4689]: I1210 12:16:04.752443 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:16:04 crc kubenswrapper[4689]: I1210 12:16:04.752455 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:04Z","lastTransitionTime":"2025-12-10T12:16:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 12:16:04 crc kubenswrapper[4689]: I1210 12:16:04.794219 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-d5s24" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b434fe8e-c4c2-4979-a9b6-8561523c2d9d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgcnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgcnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgcnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-hgcnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgcnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgcnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgcnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgcnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5b9a4f871b33ab8d7dbb8148e8a118858ae0d529510a8aebda0a3d9db97c137\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b5b9a4f871b33ab8d7dbb8148e8a118858ae0d529510a8aebda0a3d9db97c137\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T12:16:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T12:16:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgcnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T12:16:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-d5s24\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:04Z 
is after 2025-08-24T17:21:41Z" Dec 10 12:16:04 crc kubenswrapper[4689]: I1210 12:16:04.825519 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:57Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:04Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:04 crc kubenswrapper[4689]: I1210 12:16:04.854117 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:16:04 crc kubenswrapper[4689]: I1210 12:16:04.854153 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:16:04 crc kubenswrapper[4689]: I1210 12:16:04.854165 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:16:04 crc kubenswrapper[4689]: I1210 12:16:04.854181 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:16:04 crc kubenswrapper[4689]: I1210 12:16:04.854192 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:04Z","lastTransitionTime":"2025-12-10T12:16:04Z","reason":"KubeletNotReady","message":"container runtime 
network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 12:16:04 crc kubenswrapper[4689]: I1210 12:16:04.866049 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06f8726511d5ea564b457eeb6de5b75275d78ee946595d8fbe37e575f3a1b4ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://879d2e122b3f677f755d79b93cd0abed60360a7b58bd685fbc229157c3358bbf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:04Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:04 crc kubenswrapper[4689]: I1210 12:16:04.906992 
4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef7248d1db79267842514a13623e9e08f8e412e4af33a70b4fceafe7a9e43392\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:04Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:04 crc kubenswrapper[4689]: I1210 12:16:04.946423 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:04Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:04 crc kubenswrapper[4689]: I1210 12:16:04.956106 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:16:04 crc kubenswrapper[4689]: I1210 12:16:04.956128 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:16:04 crc kubenswrapper[4689]: I1210 12:16:04.956137 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:16:04 crc kubenswrapper[4689]: I1210 12:16:04.956149 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:16:04 crc kubenswrapper[4689]: I1210 12:16:04.956159 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:04Z","lastTransitionTime":"2025-12-10T12:16:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 12:16:04 crc kubenswrapper[4689]: I1210 12:16:04.987699 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-r6wmt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3713b4f8-2ee3-4078-859a-dca17076f9a6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://41b7cefca05f1180d687337ba2a408d0db12aad394a7e9619e004758e5d79ca0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:16:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lmrn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126
.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T12:16:01Z\\\"}}\" for pod \"openshift-multus\"/\"multus-r6wmt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:04Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:05 crc kubenswrapper[4689]: I1210 12:16:05.022625 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-dk9hq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4aff4b91-a4af-46bd-93ba-a7c1356fc498\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:04Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:04Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c2rw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T12:16:04Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-dk9hq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:05Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:05 crc kubenswrapper[4689]: I1210 12:16:05.036773 4689 generic.go:334] "Generic (PLEG): container finished" podID="bf732b59-88ab-4673-9d0c-e479b9138b30" containerID="56edfde6c25c67ed13c859224563601ca60ff86122c0ccfd0eafd009f56c2c6a" exitCode=0 Dec 10 12:16:05 crc kubenswrapper[4689]: I1210 12:16:05.036821 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-multus/multus-additional-cni-plugins-k7kbl" event={"ID":"bf732b59-88ab-4673-9d0c-e479b9138b30","Type":"ContainerDied","Data":"56edfde6c25c67ed13c859224563601ca60ff86122c0ccfd0eafd009f56c2c6a"} Dec 10 12:16:05 crc kubenswrapper[4689]: I1210 12:16:05.039365 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-dk9hq" event={"ID":"4aff4b91-a4af-46bd-93ba-a7c1356fc498","Type":"ContainerStarted","Data":"85f98a8d3bec2c6f39ee882b7cbcc82c7e22f7c358ee4cbee90b0df4255af9d4"} Dec 10 12:16:05 crc kubenswrapper[4689]: I1210 12:16:05.058339 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:16:05 crc kubenswrapper[4689]: I1210 12:16:05.058370 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:16:05 crc kubenswrapper[4689]: I1210 12:16:05.058380 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:16:05 crc kubenswrapper[4689]: I1210 12:16:05.058393 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:16:05 crc kubenswrapper[4689]: I1210 12:16:05.058401 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:05Z","lastTransitionTime":"2025-12-10T12:16:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 12:16:05 crc kubenswrapper[4689]: I1210 12:16:05.066683 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"948e2421-6bdf-45d9-b484-3d7cfcbeff5f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:32Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73fb5dc9c58a2a2027866853aecf404bcc2babdb9e5bbd3cf0efcb16f38ba351\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e167a00c3a4e40154ca58094cba41a896104fd81044158589ed4d7a06dba9e7d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://394f62bb6c4fa6d9b63f8404087ff96594e2e38240c6c7886ad71fbb7863681f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d3818f3ff6d23a4b5f19b82c3ce7c6a9a0275f981966b78d0f1d5977acfc5da\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cc1b41d65298f8986fa477bbe4462be91f61b46be4420fac49fe9b1cff91d4d3\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-10T12:15:57Z\\\",\\\"message\\\":\\\"lling back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1210 12:15:46.775770 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1210 12:15:46.777515 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-772356655/tls.crt::/tmp/serving-cert-772356655/tls.key\\\\\\\"\\\\nI1210 12:15:55.300683 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1210 12:15:55.311834 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1210 12:15:55.311870 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1210 12:15:55.311907 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1210 12:15:55.311937 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1210 12:15:55.334823 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1210 12:15:55.334877 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1210 12:15:55.334889 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1210 12:15:55.334913 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1210 12:15:55.334923 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1210 12:15:55.334932 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1210 12:15:55.334948 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1210 12:15:55.335346 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1210 12:15:55.337241 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-10T12:15:36Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://63c6b92fa349dafbaaa4e2700676f8c1c53627fc703dba25853c7457bf7eb439\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:34Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9e2628add435f4ba7d9b92d1f4e9cf619c37e8dc954d6fce13ef72581ae775e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9e2628add435f4ba7d9b92d1f4e9cf619c37e8dc954d6fce13ef72581ae775e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T12:15:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T12:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T12:15:32Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:05Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:05 crc kubenswrapper[4689]: I1210 12:16:05.111519 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"18bf8be2-3022-4c8b-a062-e12ddac113a8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://44524b486bd4c6b69eb1219605d0545eaa129605484dfbca87ca741ce71c2991\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://604acbb51bb890eb420955051c89aa5a8f2005ce63f1fe39c09c35dc3c8b4470\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d4addfb661bd4ba43817e6e5db29f30589839e5afe46a8f271e43c98920efdc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a68967b286ccc9c486e05b03201574bfa811ee68b0cbcbb2b0b1a379f5c1bab\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T12:15:32Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:05Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:05 crc kubenswrapper[4689]: I1210 12:16:05.145004 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:57Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:05Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:05 crc kubenswrapper[4689]: I1210 12:16:05.160678 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:16:05 crc kubenswrapper[4689]: I1210 12:16:05.160723 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:16:05 crc kubenswrapper[4689]: I1210 12:16:05.160733 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:16:05 crc kubenswrapper[4689]: I1210 12:16:05.160754 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:16:05 crc kubenswrapper[4689]: I1210 12:16:05.160765 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:05Z","lastTransitionTime":"2025-12-10T12:16:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 12:16:05 crc kubenswrapper[4689]: I1210 12:16:05.176334 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 10 12:16:05 crc kubenswrapper[4689]: E1210 12:16:05.176509 4689 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-10 12:16:13.17647976 +0000 UTC m=+40.964560938 (durationBeforeRetry 8s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 12:16:05 crc kubenswrapper[4689]: I1210 12:16:05.176671 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 10 12:16:05 crc kubenswrapper[4689]: I1210 12:16:05.176719 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 10 12:16:05 crc kubenswrapper[4689]: E1210 12:16:05.176804 4689 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 10 12:16:05 crc kubenswrapper[4689]: E1210 12:16:05.176845 4689 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-10 12:16:13.176835228 +0000 UTC m=+40.964916486 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 10 12:16:05 crc kubenswrapper[4689]: E1210 12:16:05.177372 4689 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 10 12:16:05 crc kubenswrapper[4689]: E1210 12:16:05.177479 4689 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-10 12:16:13.177449443 +0000 UTC m=+40.965530641 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 10 12:16:05 crc kubenswrapper[4689]: I1210 12:16:05.187586 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-db6zk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a41ebdcd-910f-4669-992d-296e1a92162b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://21eff13bf08e5631078b18a6e63eea07f7ae0881f4d340b92273d47e443fc71c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:16:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68btn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ffc5e2e1cb81aa530273fc5c44b4eb25b760dfffa203e312280e7a1c0150505\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:16:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68btn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"i
p\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T12:16:01Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-db6zk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:05Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:05 crc kubenswrapper[4689]: I1210 12:16:05.226718 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-k7kbl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bf732b59-88ab-4673-9d0c-e479b9138b30\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:01Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6jq2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b32c1925876e067185b141b3e107d582b5ae3b422965cded935fc67935907244\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b32c1925876e067185b141b3e107d582b5ae3b422965cded935fc67935907244\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T12:16:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T12:16:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6jq2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6jq2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reaso
n\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6jq2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6jq2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6jq2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6jq2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T12:16:01Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-k7kbl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:05Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:05 crc 
kubenswrapper[4689]: I1210 12:16:05.264081 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:16:05 crc kubenswrapper[4689]: I1210 12:16:05.264138 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:16:05 crc kubenswrapper[4689]: I1210 12:16:05.264154 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:16:05 crc kubenswrapper[4689]: I1210 12:16:05.264176 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:16:05 crc kubenswrapper[4689]: I1210 12:16:05.264192 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:05Z","lastTransitionTime":"2025-12-10T12:16:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 12:16:05 crc kubenswrapper[4689]: I1210 12:16:05.277636 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 10 12:16:05 crc kubenswrapper[4689]: I1210 12:16:05.277711 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 10 12:16:05 crc kubenswrapper[4689]: E1210 12:16:05.277867 4689 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 10 12:16:05 crc kubenswrapper[4689]: E1210 12:16:05.277893 4689 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 10 12:16:05 crc kubenswrapper[4689]: E1210 12:16:05.277909 4689 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 10 12:16:05 crc kubenswrapper[4689]: E1210 12:16:05.277997 4689 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-12-10 12:16:13.277949148 +0000 UTC m=+41.066030306 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 10 12:16:05 crc kubenswrapper[4689]: E1210 12:16:05.278405 4689 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 10 12:16:05 crc kubenswrapper[4689]: E1210 12:16:05.278432 4689 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 10 12:16:05 crc kubenswrapper[4689]: E1210 12:16:05.278447 4689 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 10 12:16:05 crc kubenswrapper[4689]: E1210 12:16:05.278488 4689 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-12-10 12:16:13.278474459 +0000 UTC m=+41.066555617 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 10 12:16:05 crc kubenswrapper[4689]: I1210 12:16:05.288019 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-d5s24" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b434fe8e-c4c2-4979-a9b6-8561523c2d9d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgcnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgcnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgcnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-hgcnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgcnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgcnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgcnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgcnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5b9a4f871b33ab8d7dbb8148e8a118858ae0d529510a8aebda0a3d9db97c137\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b5b9a4f871b33ab8d7dbb8148e8a118858ae0d529510a8aebda0a3d9db97c137\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T12:16:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T12:16:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgcnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T12:16:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-d5s24\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:05Z 
is after 2025-08-24T17:21:41Z" Dec 10 12:16:05 crc kubenswrapper[4689]: I1210 12:16:05.312301 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06f8726511d5ea564b457eeb6de5b75275d78ee946595d8fbe37e575f3a1b4ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://879d2e122b3f677f755d79b93cd0abed60360a7b58bd685fbc229157c3358bbf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:05Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:05 crc kubenswrapper[4689]: I1210 12:16:05.360187 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:57Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:05Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:05 crc kubenswrapper[4689]: I1210 12:16:05.365842 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:16:05 crc kubenswrapper[4689]: I1210 12:16:05.365869 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:16:05 crc kubenswrapper[4689]: I1210 12:16:05.365877 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:16:05 crc kubenswrapper[4689]: I1210 12:16:05.365890 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:16:05 crc kubenswrapper[4689]: I1210 12:16:05.365900 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:05Z","lastTransitionTime":"2025-12-10T12:16:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 12:16:05 crc kubenswrapper[4689]: I1210 12:16:05.385861 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81ec49dd709fb88fdd4a831be61a82471be37ad10b0dca1cb0e34468ef5e0e3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:16:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:05Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:05 crc kubenswrapper[4689]: I1210 12:16:05.424463 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-77s7q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9f25cd73-d88f-4d52-93b6-483589dc4ac4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8ffa3e3c20704e59130d99f82bf7c73982dadfcd06b8b666265324aeb9115ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:16:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kwnnt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T12:16:01Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-77s7q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:05Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:05 crc kubenswrapper[4689]: I1210 12:16:05.471726 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:16:05 crc kubenswrapper[4689]: I1210 12:16:05.471770 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:16:05 crc kubenswrapper[4689]: I1210 12:16:05.471788 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:16:05 crc kubenswrapper[4689]: I1210 12:16:05.471831 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:16:05 crc kubenswrapper[4689]: I1210 12:16:05.471594 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06f8726511d5ea564b457eeb6de5b75275d78ee946595d8fbe37e575f3a1b4ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://879d2e122b3f677f755d79b93cd0abed60360a7b58bd685fbc229157c3358bbf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:05Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:05 crc kubenswrapper[4689]: I1210 12:16:05.471868 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:05Z","lastTransitionTime":"2025-12-10T12:16:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 12:16:05 crc kubenswrapper[4689]: I1210 12:16:05.497721 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 10 12:16:05 crc kubenswrapper[4689]: I1210 12:16:05.497764 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 10 12:16:05 crc kubenswrapper[4689]: E1210 12:16:05.497852 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 10 12:16:05 crc kubenswrapper[4689]: E1210 12:16:05.497950 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 10 12:16:05 crc kubenswrapper[4689]: I1210 12:16:05.507004 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:57Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:05Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:05 crc kubenswrapper[4689]: I1210 12:16:05.545036 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81ec49dd709fb88fdd4a831be61a82471be37ad10b0dca1cb0e34468ef5e0e3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:16:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:05Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:05 crc kubenswrapper[4689]: I1210 12:16:05.574961 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:16:05 
crc kubenswrapper[4689]: I1210 12:16:05.575026 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:16:05 crc kubenswrapper[4689]: I1210 12:16:05.575038 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:16:05 crc kubenswrapper[4689]: I1210 12:16:05.575056 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:16:05 crc kubenswrapper[4689]: I1210 12:16:05.575067 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:05Z","lastTransitionTime":"2025-12-10T12:16:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 12:16:05 crc kubenswrapper[4689]: I1210 12:16:05.585080 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-77s7q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9f25cd73-d88f-4d52-93b6-483589dc4ac4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8ffa3e3c20704e59130d99f82bf7c73982dadfcd06b8b666265324aeb9115ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:16:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kwnnt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T12:16:01Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-77s7q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate 
has expired or is not yet valid: current time 2025-12-10T12:16:05Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:05 crc kubenswrapper[4689]: I1210 12:16:05.630951 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:05Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:05 crc kubenswrapper[4689]: I1210 12:16:05.666405 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-r6wmt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3713b4f8-2ee3-4078-859a-dca17076f9a6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://41b7cefca05f1180d687337ba2a408d0db12aad394a7e9619e004758e5d79ca0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:16:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lmrn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T12:16:01Z\\\"}}\" for pod \"openshift-multus\"/\"multus-r6wmt\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:05Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:05 crc kubenswrapper[4689]: I1210 12:16:05.677528 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:16:05 crc kubenswrapper[4689]: I1210 12:16:05.677573 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:16:05 crc kubenswrapper[4689]: I1210 12:16:05.677587 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:16:05 crc kubenswrapper[4689]: I1210 12:16:05.677605 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:16:05 crc kubenswrapper[4689]: I1210 12:16:05.677619 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:05Z","lastTransitionTime":"2025-12-10T12:16:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 12:16:05 crc kubenswrapper[4689]: I1210 12:16:05.704888 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-dk9hq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4aff4b91-a4af-46bd-93ba-a7c1356fc498\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:04Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c2rw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T12:16:04Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-dk9hq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:05Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:05 crc kubenswrapper[4689]: I1210 12:16:05.746203 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef7248d1db79267842514a13623e9e08f8e412e4af33a70b4fceafe7a9e43392\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:05Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:05 crc kubenswrapper[4689]: I1210 12:16:05.780031 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:16:05 crc kubenswrapper[4689]: I1210 12:16:05.780073 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:16:05 crc kubenswrapper[4689]: I1210 12:16:05.780083 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:16:05 crc kubenswrapper[4689]: I1210 12:16:05.780102 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:16:05 crc kubenswrapper[4689]: I1210 12:16:05.780115 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:05Z","lastTransitionTime":"2025-12-10T12:16:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 12:16:05 crc kubenswrapper[4689]: I1210 12:16:05.788114 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:57Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:05Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:05 crc kubenswrapper[4689]: I1210 12:16:05.825214 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-db6zk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a41ebdcd-910f-4669-992d-296e1a92162b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://21eff13bf08e5631078b18a6e63eea07f7ae0881f4d340b92273d47e443fc71c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:16:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68btn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ffc5e2e1cb81aa530273fc5c44b4eb25b760dfffa203e312280e7a1c0150505\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\
\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:16:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68btn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T12:16:01Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-db6zk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:05Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:05 crc kubenswrapper[4689]: I1210 12:16:05.866517 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-k7kbl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bf732b59-88ab-4673-9d0c-e479b9138b30\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:01Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6jq2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b32c1925876e067185b141b3e107d582b5ae3b422965cded935fc67935907244\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b32c1925876e067185b141b3e107d582b5ae3b422965cded935fc67935907244\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T12:16:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T12:16:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6jq2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://56edfde6c25c67ed13c859224563601ca60ff86122c0ccfd0eafd009f56c2c6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://56edfde6c25c67ed13c859224563601ca60ff86122c0ccfd0eafd009f56c2c6a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T12:16:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T12:16:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-6jq2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6jq2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6jq2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6jq2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6jq2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-
10T12:16:01Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-k7kbl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:05Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:05 crc kubenswrapper[4689]: I1210 12:16:05.881918 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:16:05 crc kubenswrapper[4689]: I1210 12:16:05.881956 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:16:05 crc kubenswrapper[4689]: I1210 12:16:05.881992 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:16:05 crc kubenswrapper[4689]: I1210 12:16:05.882011 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:16:05 crc kubenswrapper[4689]: I1210 12:16:05.882021 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:05Z","lastTransitionTime":"2025-12-10T12:16:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 12:16:05 crc kubenswrapper[4689]: I1210 12:16:05.919739 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-d5s24" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b434fe8e-c4c2-4979-a9b6-8561523c2d9d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgcnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgcnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgcnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-hgcnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgcnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgcnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgcnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgcnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5b9a4f871b33ab8d7dbb8148e8a118858ae0d529510a8aebda0a3d9db97c137\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b5b9a4f871b33ab8d7dbb8148e8a118858ae0d529510a8aebda0a3d9db97c137\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T12:16:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T12:16:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgcnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T12:16:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-d5s24\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:05Z 
is after 2025-08-24T17:21:41Z" Dec 10 12:16:05 crc kubenswrapper[4689]: I1210 12:16:05.949454 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"948e2421-6bdf-45d9-b484-3d7cfcbeff5f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:32Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:32Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73fb5dc9c58a2a2027866853aecf404bcc2babdb9e5bbd3cf0efcb16f38ba351\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e167a00c3a4e40154ca58094cba41a896104fd81044158589ed4d7a06dba9e7d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://394f62bb6c4fa6d9b63f8404087ff96594e2e38240c6c7886ad71fbb7863681f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\
\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d3818f3ff6d23a4b5f19b82c3ce7c6a9a0275f981966b78d0f1d5977acfc5da\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cc1b41d65298f8986fa477bbe4462be91f61b46be4420fac49fe9b1cff91d4d3\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-10T12:15:57Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1210 12:15:46.775770 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1210 12:15:46.777515 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-772356655/tls.crt::/tmp/serving-cert-772356655/tls.key\\\\\\\"\\\\nI1210 12:15:55.300683 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1210 12:15:55.311834 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1210 12:15:55.311870 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1210 12:15:55.311907 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1210 12:15:55.311937 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1210 12:15:55.334823 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1210 12:15:55.334877 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1210 12:15:55.334889 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1210 12:15:55.334913 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1210 12:15:55.334923 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1210 12:15:55.334932 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1210 12:15:55.334948 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1210 12:15:55.335346 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1210 12:15:55.337241 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-10T12:15:36Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://63c6b92fa349dafbaaa4e2700676f8c1c53627fc703dba25853c7457bf7eb439\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:34Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9e2628add435f4ba7d9b92d1f4e9cf619c37e8dc954d6fce13ef72581ae775e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9e2628add435f4ba7d9b92d1f4e9cf619c37e8dc954d6fce13ef72581ae775e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T12:15:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T12:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T12:15:32Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:05Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:05 crc kubenswrapper[4689]: I1210 12:16:05.984301 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:16:05 crc kubenswrapper[4689]: I1210 12:16:05.984333 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:16:05 crc kubenswrapper[4689]: I1210 12:16:05.984343 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:16:05 crc kubenswrapper[4689]: I1210 12:16:05.984357 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:16:05 crc kubenswrapper[4689]: I1210 12:16:05.984367 4689 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:05Z","lastTransitionTime":"2025-12-10T12:16:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 12:16:05 crc kubenswrapper[4689]: I1210 12:16:05.985401 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"18bf8be2-3022-4c8b-a062-e12ddac113a8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://44524b486bd4c6b69eb1219605d0545eaa129605484dfbca87ca741ce71c2991\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://604acbb51bb890eb420955051c89aa5a8f2005ce63f1fe39c09c35dc3c8b4470\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d4addfb661bd4ba43817e6e5db29f30589839e5afe46a8f271e43c98920efdc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastS
tate\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a68967b286ccc9c486e05b03201574bfa811ee68b0cbcbb2b0b1a379f5c1bab\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T12:15:32Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:05Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:06 crc kubenswrapper[4689]: I1210 12:16:06.044831 4689 generic.go:334] "Generic (PLEG): container finished" podID="bf732b59-88ab-4673-9d0c-e479b9138b30" containerID="b6972bdb1edc7d0216e984f43dd93319f5fbe91dbbf251fddcab234566c12694" exitCode=0 Dec 10 12:16:06 crc kubenswrapper[4689]: I1210 12:16:06.044920 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-k7kbl" event={"ID":"bf732b59-88ab-4673-9d0c-e479b9138b30","Type":"ContainerDied","Data":"b6972bdb1edc7d0216e984f43dd93319f5fbe91dbbf251fddcab234566c12694"} Dec 10 12:16:06 crc kubenswrapper[4689]: I1210 12:16:06.049744 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-d5s24" event={"ID":"b434fe8e-c4c2-4979-a9b6-8561523c2d9d","Type":"ContainerStarted","Data":"e28b779b8a8693084ab0997decc53b3a8eba61bef24ac90ff251d915f086568c"} Dec 10 12:16:06 crc kubenswrapper[4689]: I1210 12:16:06.050993 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-dk9hq" event={"ID":"4aff4b91-a4af-46bd-93ba-a7c1356fc498","Type":"ContainerStarted","Data":"e400561fd4aa4adc6f0733a71b55aff1a6163a71bf34147aeeb41f95001d8222"} Dec 10 12:16:06 crc kubenswrapper[4689]: I1210 12:16:06.067853 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-r6wmt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3713b4f8-2ee3-4078-859a-dca17076f9a6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://41b7cefca05f1180d687337ba2a408d0db12aad394a7e9619e004758e5d79ca0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:16:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lmrn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T12:16:01Z\\\"}}\" for pod \"openshift-multus\"/\"multus-r6wmt\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:06Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:06 crc kubenswrapper[4689]: I1210 12:16:06.087951 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:16:06 crc kubenswrapper[4689]: I1210 12:16:06.088012 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:16:06 crc kubenswrapper[4689]: I1210 12:16:06.088024 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:16:06 crc kubenswrapper[4689]: I1210 12:16:06.088053 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:16:06 crc kubenswrapper[4689]: I1210 12:16:06.088065 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:06Z","lastTransitionTime":"2025-12-10T12:16:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 12:16:06 crc kubenswrapper[4689]: I1210 12:16:06.088035 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-dk9hq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4aff4b91-a4af-46bd-93ba-a7c1356fc498\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:04Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c2rw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T12:16:04Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-dk9hq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:06Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:06 crc kubenswrapper[4689]: I1210 12:16:06.110317 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef7248d1db79267842514a13623e9e08f8e412e4af33a70b4fceafe7a9e43392\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:06Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:06 crc kubenswrapper[4689]: I1210 12:16:06.147484 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:06Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:06 crc kubenswrapper[4689]: I1210 12:16:06.184839 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-db6zk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a41ebdcd-910f-4669-992d-296e1a92162b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://21eff13bf08e5631078b18a6e63eea07f7ae0881f4d340b92273d47e443fc71c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:16:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68btn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ffc5e2e1cb81aa530273fc5c44b4eb25b760dfffa203e312280e7a1c0150505\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:16:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68btn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T12:16:01Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-db6zk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:06Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:06 crc kubenswrapper[4689]: I1210 12:16:06.190275 4689 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:16:06 crc kubenswrapper[4689]: I1210 12:16:06.190311 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:16:06 crc kubenswrapper[4689]: I1210 12:16:06.190319 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:16:06 crc kubenswrapper[4689]: I1210 12:16:06.190333 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:16:06 crc kubenswrapper[4689]: I1210 12:16:06.190344 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:06Z","lastTransitionTime":"2025-12-10T12:16:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 12:16:06 crc kubenswrapper[4689]: I1210 12:16:06.231636 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-k7kbl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bf732b59-88ab-4673-9d0c-e479b9138b30\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:01Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6jq2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b32c1925876e067185b141b3e107d582b5ae3b422965cded935fc67935907244\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b32c1925876e067185b141b3e107d582b5ae3b422965cded935fc67935907244\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T12:16:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T12:16:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6jq2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://56edfde6c25c67ed13c859224563601ca60ff86122c0ccfd0eafd009f56c2c6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://56edfde6c25c67ed13c859224563601ca60ff86122c0ccfd0eafd009f56c2c6a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T12:16:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T12:16:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-6jq2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6972bdb1edc7d0216e984f43dd93319f5fbe91dbbf251fddcab234566c12694\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6972bdb1edc7d0216e984f43dd93319f5fbe91dbbf251fddcab234566c12694\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T12:16:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T12:16:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6jq2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6jq2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6jq2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/
cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6jq2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T12:16:01Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-k7kbl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:06Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:06 crc kubenswrapper[4689]: I1210 12:16:06.273604 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-d5s24" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b434fe8e-c4c2-4979-a9b6-8561523c2d9d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgcnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgcnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgcnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-hgcnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgcnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgcnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgcnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgcnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5b9a4f871b33ab8d7dbb8148e8a118858ae0d529510a8aebda0a3d9db97c137\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b5b9a4f871b33ab8d7dbb8148e8a118858ae0d529510a8aebda0a3d9db97c137\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T12:16:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T12:16:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgcnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T12:16:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-d5s24\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:06Z 
is after 2025-08-24T17:21:41Z" Dec 10 12:16:06 crc kubenswrapper[4689]: I1210 12:16:06.292334 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:16:06 crc kubenswrapper[4689]: I1210 12:16:06.292365 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:16:06 crc kubenswrapper[4689]: I1210 12:16:06.292373 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:16:06 crc kubenswrapper[4689]: I1210 12:16:06.292387 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:16:06 crc kubenswrapper[4689]: I1210 12:16:06.292397 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:06Z","lastTransitionTime":"2025-12-10T12:16:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 12:16:06 crc kubenswrapper[4689]: I1210 12:16:06.309049 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"948e2421-6bdf-45d9-b484-3d7cfcbeff5f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:32Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73fb5dc9c58a2a2027866853aecf404bcc2babdb9e5bbd3cf0efcb16f38ba351\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e167a00c3a4e40154ca58094cba41a896104fd81044158589ed4d7a06dba9e7d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://394f62bb6c4fa6d9b63f8404087ff96594e2e38240c6c7886ad71fbb7863681f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d3818f3ff6d23a4b5f19b82c3ce7c6a9a0275f981966b78d0f1d5977acfc5da\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cc1b41d65298f8986fa477bbe4462be91f61b46be4420fac49fe9b1cff91d4d3\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-10T12:15:57Z\\\",\\\"message\\\":\\\"lling back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1210 12:15:46.775770 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1210 12:15:46.777515 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-772356655/tls.crt::/tmp/serving-cert-772356655/tls.key\\\\\\\"\\\\nI1210 12:15:55.300683 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1210 12:15:55.311834 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1210 12:15:55.311870 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1210 12:15:55.311907 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1210 12:15:55.311937 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1210 12:15:55.334823 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1210 12:15:55.334877 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1210 12:15:55.334889 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1210 12:15:55.334913 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1210 12:15:55.334923 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1210 12:15:55.334932 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1210 12:15:55.334948 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1210 12:15:55.335346 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1210 12:15:55.337241 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-10T12:15:36Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://63c6b92fa349dafbaaa4e2700676f8c1c53627fc703dba25853c7457bf7eb439\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:34Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9e2628add435f4ba7d9b92d1f4e9cf619c37e8dc954d6fce13ef72581ae775e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9e2628add435f4ba7d9b92d1f4e9cf619c37e8dc954d6fce13ef72581ae775e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T12:15:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T12:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T12:15:32Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:06Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:06 crc kubenswrapper[4689]: I1210 12:16:06.348271 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"18bf8be2-3022-4c8b-a062-e12ddac113a8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://44524b486bd4c6b69eb1219605d0545eaa129605484dfbca87ca741ce71c2991\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://604acbb51bb890eb420955051c89aa5a8f2005ce63f1fe39c09c35dc3c8b4470\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d4addfb661bd4ba43817e6e5db29f30589839e5afe46a8f271e43c98920efdc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a68967b286ccc9c486e05b03201574bfa811ee68b0cbcbb2b0b1a379f5c1bab\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T12:15:32Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:06Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:06 crc kubenswrapper[4689]: I1210 12:16:06.385587 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:57Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:06Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:06 crc kubenswrapper[4689]: I1210 12:16:06.394664 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:16:06 crc kubenswrapper[4689]: I1210 12:16:06.394722 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:16:06 crc kubenswrapper[4689]: I1210 12:16:06.394740 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:16:06 crc kubenswrapper[4689]: I1210 12:16:06.394767 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:16:06 crc kubenswrapper[4689]: I1210 12:16:06.394790 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:06Z","lastTransitionTime":"2025-12-10T12:16:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 12:16:06 crc kubenswrapper[4689]: I1210 12:16:06.428056 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06f8726511d5ea564b457eeb6de5b75275d78ee946595d8fbe37e575f3a1b4ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://879d2e122b3f677f755d79b93cd0abed60360a7b58bd685fbc229157c3358bbf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:06Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:06 crc kubenswrapper[4689]: I1210 12:16:06.473215 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:57Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:06Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:06 crc kubenswrapper[4689]: I1210 12:16:06.497269 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 10 12:16:06 crc kubenswrapper[4689]: I1210 12:16:06.497281 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:16:06 crc kubenswrapper[4689]: E1210 12:16:06.497432 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 10 12:16:06 crc kubenswrapper[4689]: I1210 12:16:06.497469 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:16:06 crc kubenswrapper[4689]: I1210 12:16:06.497497 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:16:06 crc kubenswrapper[4689]: I1210 12:16:06.497528 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:16:06 crc kubenswrapper[4689]: I1210 12:16:06.497552 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:06Z","lastTransitionTime":"2025-12-10T12:16:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 12:16:06 crc kubenswrapper[4689]: I1210 12:16:06.512908 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81ec49dd709fb88fdd4a831be61a82471be37ad10b0dca1cb0e34468ef5e0e3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:16:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:06Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:06 crc kubenswrapper[4689]: I1210 12:16:06.547377 4689 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-dns/node-resolver-77s7q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9f25cd73-d88f-4d52-93b6-483589dc4ac4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8ffa3e3c20704e59130d99f82bf7c73982dadfcd06b8b666265324aeb9115ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:16:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kwnnt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T12:16:01Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-77s7q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:06Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:06 crc kubenswrapper[4689]: I1210 12:16:06.600164 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:16:06 crc kubenswrapper[4689]: I1210 12:16:06.600208 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:16:06 crc kubenswrapper[4689]: I1210 12:16:06.600219 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:16:06 crc kubenswrapper[4689]: I1210 12:16:06.600235 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:16:06 crc kubenswrapper[4689]: I1210 12:16:06.600249 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:06Z","lastTransitionTime":"2025-12-10T12:16:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 12:16:06 crc kubenswrapper[4689]: I1210 12:16:06.614988 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-d5s24" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b434fe8e-c4c2-4979-a9b6-8561523c2d9d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgcnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgcnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"image
ID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgcnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgcnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgcnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"m
ountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgcnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgcnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgcnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126
.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5b9a4f871b33ab8d7dbb8148e8a118858ae0d529510a8aebda0a3d9db97c137\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b5b9a4f871b33ab8d7dbb8148e8a118858ae0d529510a8aebda0a3d9db97c137\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T12:16:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T12:16:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgcnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T12:16:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-d5s24\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:06Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:06 crc kubenswrapper[4689]: I1210 12:16:06.630047 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"948e2421-6bdf-45d9-b484-3d7cfcbeff5f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:32Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73fb5dc9c58a2a2027866853aecf404bcc2babdb9e5bbd3cf0efcb16f38ba351\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e167a00c3a4e40154ca58094cba41a896104fd81044158589ed4d7a06dba9e7d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://394f62bb6c4fa6d9b63f8404087ff96594e2e38240c6c7886ad71fbb7863681f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d3818f3ff6d23a4b5f19b82c3ce7c6a9a0275f981966b78d0f1d5977acfc5da\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cc1b41d65298f8986fa477bbe4462be91f61b46be4420fac49fe9b1cff91d4d3\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-10T12:15:57Z\\\",\\\"message\\\":\\\"lling back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1210 12:15:46.775770 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1210 12:15:46.777515 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-772356655/tls.crt::/tmp/serving-cert-772356655/tls.key\\\\\\\"\\\\nI1210 12:15:55.300683 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1210 12:15:55.311834 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1210 12:15:55.311870 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1210 12:15:55.311907 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1210 12:15:55.311937 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1210 12:15:55.334823 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1210 12:15:55.334877 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1210 12:15:55.334889 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1210 12:15:55.334913 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1210 12:15:55.334923 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1210 12:15:55.334932 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1210 12:15:55.334948 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1210 12:15:55.335346 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1210 12:15:55.337241 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-10T12:15:36Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://63c6b92fa349dafbaaa4e2700676f8c1c53627fc703dba25853c7457bf7eb439\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:34Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9e2628add435f4ba7d9b92d1f4e9cf619c37e8dc954d6fce13ef72581ae775e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9e2628add435f4ba7d9b92d1f4e9cf619c37e8dc954d6fce13ef72581ae775e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T12:15:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T12:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T12:15:32Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:06Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:06 crc kubenswrapper[4689]: I1210 12:16:06.669327 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"18bf8be2-3022-4c8b-a062-e12ddac113a8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://44524b486bd4c6b69eb1219605d0545eaa129605484dfbca87ca741ce71c2991\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://604acbb51bb890eb420955051c89aa5a8f2005ce63f1fe39c09c35dc3c8b4470\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d4addfb661bd4ba43817e6e5db29f30589839e5afe46a8f271e43c98920efdc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a68967b286ccc9c486e05b03201574bfa811ee68b0cbcbb2b0b1a379f5c1bab\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T12:15:32Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:06Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:06 crc kubenswrapper[4689]: I1210 12:16:06.702855 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:16:06 crc kubenswrapper[4689]: I1210 12:16:06.702892 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:16:06 crc kubenswrapper[4689]: I1210 12:16:06.702901 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:16:06 crc kubenswrapper[4689]: I1210 12:16:06.702916 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:16:06 crc kubenswrapper[4689]: I1210 12:16:06.702925 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:06Z","lastTransitionTime":"2025-12-10T12:16:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 12:16:06 crc kubenswrapper[4689]: I1210 12:16:06.706718 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:57Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:06Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:06 crc kubenswrapper[4689]: I1210 12:16:06.743841 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-db6zk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a41ebdcd-910f-4669-992d-296e1a92162b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://21eff13bf08e5631078b18a6e63eea07f7ae0881f4d340b92273d47e443fc71c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:16:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68btn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ffc5e2e1cb81aa530273fc5c44b4eb25b760dfffa203e312280e7a1c0150505\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:16:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68btn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T12:16:01Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-db6zk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:06Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:06 crc kubenswrapper[4689]: I1210 12:16:06.786537 4689 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-additional-cni-plugins-k7kbl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bf732b59-88ab-4673-9d0c-e479b9138b30\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:01Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6jq2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b32c1925876e067185b141b3e107d582b5ae3b422965cded935fc67935907244\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b32c1925876e067185b141b3e107d582b5ae3b422965cded935fc67935907244\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T12:16:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T12:16:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6jq2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\
\\"cri-o://56edfde6c25c67ed13c859224563601ca60ff86122c0ccfd0eafd009f56c2c6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://56edfde6c25c67ed13c859224563601ca60ff86122c0ccfd0eafd009f56c2c6a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T12:16:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T12:16:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6jq2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6972bdb1edc7d0216e984f43dd93319f5fbe91dbbf251fddcab234566c12694\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6972bdb1edc7d0216e984f43dd93319f5fbe91dbbf251fddcab234566c12694\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T12:16:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T12:16:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6jq2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6jq2j\\\",\\\"readOnly\\\":true,\
\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6jq2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6jq2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T12:16:01Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-k7kbl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:06Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:06 crc kubenswrapper[4689]: I1210 12:16:06.804309 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:16:06 crc kubenswrapper[4689]: I1210 12:16:06.804344 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:16:06 crc kubenswrapper[4689]: I1210 12:16:06.804352 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:16:06 crc kubenswrapper[4689]: I1210 12:16:06.804365 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:16:06 crc kubenswrapper[4689]: I1210 12:16:06.804373 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:06Z","lastTransitionTime":"2025-12-10T12:16:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 12:16:06 crc kubenswrapper[4689]: I1210 12:16:06.824794 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06f8726511d5ea564b457eeb6de5b75275d78ee946595d8fbe37e575f3a1b4ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://879d2e122b3f677f755d79b93cd0abed60360a7b58bd685fbc229157c3358bbf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:06Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:06 crc kubenswrapper[4689]: I1210 12:16:06.864984 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:57Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:06Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:06 crc kubenswrapper[4689]: I1210 12:16:06.907960 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:16:06 crc kubenswrapper[4689]: I1210 12:16:06.908086 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:16:06 crc kubenswrapper[4689]: I1210 12:16:06.908114 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:16:06 crc kubenswrapper[4689]: I1210 12:16:06.908145 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:16:06 crc kubenswrapper[4689]: I1210 12:16:06.908166 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:06Z","lastTransitionTime":"2025-12-10T12:16:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 12:16:06 crc kubenswrapper[4689]: I1210 12:16:06.915619 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81ec49dd709fb88fdd4a831be61a82471be37ad10b0dca1cb0e34468ef5e0e3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:16:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:06Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:06 crc kubenswrapper[4689]: I1210 12:16:06.947941 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-77s7q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9f25cd73-d88f-4d52-93b6-483589dc4ac4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8ffa3e3c20704e59130d99f82bf7c73982dadfcd06b8b666265324aeb9115ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:16:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kwnnt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T12:16:01Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-77s7q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:06Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:07 crc kubenswrapper[4689]: I1210 12:16:07.011755 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:16:07 crc kubenswrapper[4689]: I1210 12:16:07.011807 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:16:07 crc kubenswrapper[4689]: I1210 12:16:07.011820 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:16:07 crc kubenswrapper[4689]: I1210 12:16:07.011840 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:16:07 crc kubenswrapper[4689]: I1210 12:16:07.011853 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:07Z","lastTransitionTime":"2025-12-10T12:16:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 12:16:07 crc kubenswrapper[4689]: I1210 12:16:07.025057 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef7248d1db79267842514a13623e9e08f8e412e4af33a70b4fceafe7a9e43392\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:07Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:07 crc kubenswrapper[4689]: I1210 12:16:07.054640 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:07Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:07 crc kubenswrapper[4689]: I1210 12:16:07.056866 4689 generic.go:334] "Generic (PLEG): container finished" podID="bf732b59-88ab-4673-9d0c-e479b9138b30" containerID="92ed0a4af61a7d20d96a8628e34b3b07cf93fb88bb8c67a2db58c140977d4b8f" exitCode=0 Dec 10 12:16:07 crc kubenswrapper[4689]: I1210 12:16:07.056941 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-k7kbl" event={"ID":"bf732b59-88ab-4673-9d0c-e479b9138b30","Type":"ContainerDied","Data":"92ed0a4af61a7d20d96a8628e34b3b07cf93fb88bb8c67a2db58c140977d4b8f"} Dec 10 12:16:07 crc kubenswrapper[4689]: I1210 12:16:07.076469 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-r6wmt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3713b4f8-2ee3-4078-859a-dca17076f9a6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://41b7cefca05f1180d687337ba2a408d0db12aad394a7e9619e004758e5d79ca0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:16:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lmrn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T12:16:01Z\\\"}}\" for pod \"openshift-multus\"/\"multus-r6wmt\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:07Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:07 crc kubenswrapper[4689]: I1210 12:16:07.104533 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-dk9hq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4aff4b91-a4af-46bd-93ba-a7c1356fc498\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e400561fd4aa4adc6f0733a71b55aff1a6163a71bf34147aeeb41f95001d8222\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:16:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c2rw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T12:16:04Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-dk9hq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:07Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:07 crc kubenswrapper[4689]: I1210 12:16:07.114600 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:16:07 crc kubenswrapper[4689]: I1210 12:16:07.114643 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:16:07 crc kubenswrapper[4689]: I1210 12:16:07.114655 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:16:07 crc 
kubenswrapper[4689]: I1210 12:16:07.114675 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:16:07 crc kubenswrapper[4689]: I1210 12:16:07.114687 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:07Z","lastTransitionTime":"2025-12-10T12:16:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 12:16:07 crc kubenswrapper[4689]: I1210 12:16:07.148650 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81ec49dd709fb88fdd4a831be61a82471be37ad10b0dca1cb0e34468ef5e0e3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:16:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:07Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:07 crc kubenswrapper[4689]: I1210 12:16:07.183284 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-77s7q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9f25cd73-d88f-4d52-93b6-483589dc4ac4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8ffa3e3c20704e59130d99f82bf7c73982dadfcd06b8b666265324aeb9115ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:16:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kwnnt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T12:16:01Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-77s7q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:07Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:07 crc kubenswrapper[4689]: I1210 12:16:07.217655 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:16:07 crc kubenswrapper[4689]: I1210 12:16:07.217700 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:16:07 crc kubenswrapper[4689]: I1210 12:16:07.217710 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:16:07 crc kubenswrapper[4689]: I1210 12:16:07.217731 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:16:07 crc kubenswrapper[4689]: I1210 12:16:07.217745 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:07Z","lastTransitionTime":"2025-12-10T12:16:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 12:16:07 crc kubenswrapper[4689]: I1210 12:16:07.227166 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef7248d1db79267842514a13623e9e08f8e412e4af33a70b4fceafe7a9e43392\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:07Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:07 crc kubenswrapper[4689]: I1210 12:16:07.268900 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:07Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:07 crc kubenswrapper[4689]: I1210 12:16:07.305537 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-r6wmt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3713b4f8-2ee3-4078-859a-dca17076f9a6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://41b7cefca05f1180d687337ba2a408d0db12aad394a7e9619e004758e5d79ca0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:16:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\
\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lmrn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T12:16:01Z\\\"}}\" for pod \"openshift-multus\"/\"multus-r6wmt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:07Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:07 crc kubenswrapper[4689]: I1210 12:16:07.319639 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:16:07 crc kubenswrapper[4689]: I1210 12:16:07.319682 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:16:07 crc kubenswrapper[4689]: I1210 12:16:07.319693 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:16:07 crc kubenswrapper[4689]: I1210 12:16:07.319711 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:16:07 crc kubenswrapper[4689]: I1210 12:16:07.319723 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:07Z","lastTransitionTime":"2025-12-10T12:16:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 12:16:07 crc kubenswrapper[4689]: I1210 12:16:07.345084 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-dk9hq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4aff4b91-a4af-46bd-93ba-a7c1356fc498\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e400561fd4aa4adc6f0733a71b55aff1a6163a71bf34147aeeb41f95001d8222\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:16:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c2rw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T12:16:04Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-dk9hq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:07Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:07 crc kubenswrapper[4689]: I1210 12:16:07.395466 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"948e2421-6bdf-45d9-b484-3d7cfcbeff5f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:32Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73fb5dc9c58a2a2027866853aecf404bcc2babdb9e5bbd3cf0efcb16f38ba351\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e167a00c3a4e40154ca58094cba41a896104fd81044158589ed4d7a06dba9e7d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://394f62bb6c4fa6d9b63f8404087ff96594e2e38240c6c7886ad71fbb7863681f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d3818f3ff6d23a4b5f19b82c3ce7c6a9a0275f981966b78d0f1d5977acfc5da\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\
\":\\\"cri-o://cc1b41d65298f8986fa477bbe4462be91f61b46be4420fac49fe9b1cff91d4d3\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-10T12:15:57Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1210 12:15:46.775770 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1210 12:15:46.777515 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-772356655/tls.crt::/tmp/serving-cert-772356655/tls.key\\\\\\\"\\\\nI1210 12:15:55.300683 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1210 12:15:55.311834 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1210 12:15:55.311870 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1210 12:15:55.311907 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1210 12:15:55.311937 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1210 12:15:55.334823 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1210 12:15:55.334877 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1210 12:15:55.334889 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1210 12:15:55.334913 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1210 12:15:55.334923 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1210 12:15:55.334932 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1210 12:15:55.334948 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1210 12:15:55.335346 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1210 12:15:55.337241 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-10T12:15:36Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://63c6b92fa349dafbaaa4e2700676f8c1c53627fc703dba25853c7457bf7eb439\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:34Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9e2628add435f4ba7d9b92d1f4e9cf619c37e8dc954d6fce13ef72581ae775e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9e2628add435f4ba7d9b92d1f4e9cf619c37e8dc954d6fce13ef72581ae775e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T12:15:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T12:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T12:15:32Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:07Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:07 crc kubenswrapper[4689]: I1210 12:16:07.423008 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:16:07 crc kubenswrapper[4689]: I1210 12:16:07.423056 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:16:07 crc kubenswrapper[4689]: I1210 12:16:07.423074 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:16:07 crc kubenswrapper[4689]: I1210 12:16:07.423098 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:16:07 crc kubenswrapper[4689]: I1210 12:16:07.423114 4689 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:07Z","lastTransitionTime":"2025-12-10T12:16:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 12:16:07 crc kubenswrapper[4689]: I1210 12:16:07.431692 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"18bf8be2-3022-4c8b-a062-e12ddac113a8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://44524b486bd4c6b69eb1219605d0545eaa129605484dfbca87ca741ce71c2991\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://604acbb51bb890eb420955051c89aa5a8f2005ce63f1fe39c09c35dc3c8b4470\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d4addfb661bd4ba43817e6e5db29f30589839e5afe46a8f271e43c98920efdc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastS
tate\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a68967b286ccc9c486e05b03201574bfa811ee68b0cbcbb2b0b1a379f5c1bab\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T12:15:32Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:07Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:07 crc kubenswrapper[4689]: I1210 12:16:07.470694 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:57Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:07Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:07 crc kubenswrapper[4689]: I1210 12:16:07.497737 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 10 12:16:07 crc kubenswrapper[4689]: I1210 12:16:07.497772 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 10 12:16:07 crc kubenswrapper[4689]: E1210 12:16:07.497912 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 10 12:16:07 crc kubenswrapper[4689]: E1210 12:16:07.498031 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 10 12:16:07 crc kubenswrapper[4689]: I1210 12:16:07.515133 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-db6zk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a41ebdcd-910f-4669-992d-296e1a92162b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://21eff13bf08e5631078b18a6e63eea07f7ae0881f4d340b92273d47e443fc71c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:16:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68btn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ffc5e2e1cb81aa530273fc5c44b4eb25b760dfffa203e312280e7a1c0150505\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:16:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68btn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T12:16:01Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-db6zk\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:07Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:07 crc kubenswrapper[4689]: I1210 12:16:07.526341 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:16:07 crc kubenswrapper[4689]: I1210 12:16:07.526399 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:16:07 crc kubenswrapper[4689]: I1210 12:16:07.526420 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:16:07 crc kubenswrapper[4689]: I1210 12:16:07.526453 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:16:07 crc kubenswrapper[4689]: I1210 12:16:07.526476 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:07Z","lastTransitionTime":"2025-12-10T12:16:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 12:16:07 crc kubenswrapper[4689]: I1210 12:16:07.551079 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-k7kbl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bf732b59-88ab-4673-9d0c-e479b9138b30\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:01Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6jq2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b32c1925876e067185b141b3e107d582b5ae3b422965cded935fc67935907244\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b32c1925876e067185b141b3e107d582b5ae3b422965cded935fc67935907244\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T12:16:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T12:16:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6jq2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://56edfde6c25c67ed13c859224563601ca60ff86122c0ccfd0eafd009f56c2c6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://56edfde6c25c67ed13c859224563601ca60ff86122c0ccfd0eafd009f56c2c6a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T12:16:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T12:16:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-6jq2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6972bdb1edc7d0216e984f43dd93319f5fbe91dbbf251fddcab234566c12694\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6972bdb1edc7d0216e984f43dd93319f5fbe91dbbf251fddcab234566c12694\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T12:16:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T12:16:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6jq2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://92ed0a4af61a7d20d96a8628e34b3b07cf93fb88bb8c67a2db58c140977d4b8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92ed0a4af61a7d20d96a8628e34b3b07cf93fb88bb8c67a2db58c140977d4b8f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T12:16:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T12:16:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6jq2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6jq2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disa
bled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6jq2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T12:16:01Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-k7kbl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:07Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:07 crc kubenswrapper[4689]: I1210 12:16:07.603600 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-d5s24" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b434fe8e-c4c2-4979-a9b6-8561523c2d9d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgcnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgcnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgcnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-hgcnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgcnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgcnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgcnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgcnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5b9a4f871b33ab8d7dbb8148e8a118858ae0d529510a8aebda0a3d9db97c137\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b5b9a4f871b33ab8d7dbb8148e8a118858ae0d529510a8aebda0a3d9db97c137\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T12:16:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T12:16:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgcnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T12:16:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-d5s24\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:07Z 
is after 2025-08-24T17:21:41Z" Dec 10 12:16:07 crc kubenswrapper[4689]: I1210 12:16:07.629574 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:16:07 crc kubenswrapper[4689]: I1210 12:16:07.629634 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:16:07 crc kubenswrapper[4689]: I1210 12:16:07.629649 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:16:07 crc kubenswrapper[4689]: I1210 12:16:07.629671 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:16:07 crc kubenswrapper[4689]: I1210 12:16:07.629689 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:07Z","lastTransitionTime":"2025-12-10T12:16:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 12:16:07 crc kubenswrapper[4689]: I1210 12:16:07.636885 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06f8726511d5ea564b457eeb6de5b75275d78ee946595d8fbe37e575f3a1b4ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://879d2e122b3f677f755d79b93cd0abed60360a7b58bd685fbc229157c3358bbf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"
started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:07Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:07 crc kubenswrapper[4689]: I1210 12:16:07.668811 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:57Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:07Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:07 crc kubenswrapper[4689]: I1210 12:16:07.732322 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:16:07 crc kubenswrapper[4689]: I1210 12:16:07.732392 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:16:07 crc kubenswrapper[4689]: I1210 12:16:07.732415 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:16:07 crc kubenswrapper[4689]: I1210 12:16:07.732449 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:16:07 crc kubenswrapper[4689]: I1210 12:16:07.732474 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:07Z","lastTransitionTime":"2025-12-10T12:16:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 12:16:07 crc kubenswrapper[4689]: I1210 12:16:07.835693 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:16:07 crc kubenswrapper[4689]: I1210 12:16:07.835751 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:16:07 crc kubenswrapper[4689]: I1210 12:16:07.835768 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:16:07 crc kubenswrapper[4689]: I1210 12:16:07.835792 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:16:07 crc kubenswrapper[4689]: I1210 12:16:07.835808 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:07Z","lastTransitionTime":"2025-12-10T12:16:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 12:16:07 crc kubenswrapper[4689]: I1210 12:16:07.938894 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:16:07 crc kubenswrapper[4689]: I1210 12:16:07.938951 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:16:07 crc kubenswrapper[4689]: I1210 12:16:07.939001 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:16:07 crc kubenswrapper[4689]: I1210 12:16:07.939030 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:16:07 crc kubenswrapper[4689]: I1210 12:16:07.939049 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:07Z","lastTransitionTime":"2025-12-10T12:16:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 12:16:08 crc kubenswrapper[4689]: I1210 12:16:08.041212 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:16:08 crc kubenswrapper[4689]: I1210 12:16:08.041254 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:16:08 crc kubenswrapper[4689]: I1210 12:16:08.041337 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:16:08 crc kubenswrapper[4689]: I1210 12:16:08.041356 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:16:08 crc kubenswrapper[4689]: I1210 12:16:08.041369 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:08Z","lastTransitionTime":"2025-12-10T12:16:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 12:16:08 crc kubenswrapper[4689]: I1210 12:16:08.064689 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-k7kbl" event={"ID":"bf732b59-88ab-4673-9d0c-e479b9138b30","Type":"ContainerStarted","Data":"3a00343d8419f3bd26ca5eebc37892fa34c893db822a472b2d16b79b9a886130"} Dec 10 12:16:08 crc kubenswrapper[4689]: I1210 12:16:08.081818 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef7248d1db79267842514a13623e9e08f8e412e4af33a70b4fceafe7a9e43392\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:08Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:08 crc kubenswrapper[4689]: I1210 12:16:08.100097 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:08Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:08 crc kubenswrapper[4689]: I1210 12:16:08.115593 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-r6wmt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3713b4f8-2ee3-4078-859a-dca17076f9a6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://41b7cefca05f1180d687337ba2a408d0db12aad394a7e9619e004758e5d79ca0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:16:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lmrn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T12:16:01Z\\\"}}\" for pod \"openshift-multus\"/\"multus-r6wmt\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:08Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:08 crc kubenswrapper[4689]: I1210 12:16:08.128060 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-dk9hq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4aff4b91-a4af-46bd-93ba-a7c1356fc498\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e400561fd4aa4adc6f0733a71b55aff1a6163a71bf34147aeeb41f95001d8222\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:16:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c2rw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T12:16:04Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-dk9hq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:08Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:08 crc kubenswrapper[4689]: I1210 12:16:08.144052 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:16:08 crc kubenswrapper[4689]: I1210 12:16:08.144102 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:16:08 crc kubenswrapper[4689]: I1210 12:16:08.144115 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:16:08 crc 
kubenswrapper[4689]: I1210 12:16:08.144134 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:16:08 crc kubenswrapper[4689]: I1210 12:16:08.144146 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:08Z","lastTransitionTime":"2025-12-10T12:16:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 12:16:08 crc kubenswrapper[4689]: I1210 12:16:08.151414 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"948e2421-6bdf-45d9-b484-3d7cfcbeff5f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:32Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:32Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73fb5dc9c58a2a2027866853aecf404bcc2babdb9e5bbd3cf0efcb16f38ba351\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e167a00c3a4e40154ca58094cba41a896104fd81044158589ed4d7a06dba9e7d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"
mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://394f62bb6c4fa6d9b63f8404087ff96594e2e38240c6c7886ad71fbb7863681f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d3818f3ff6d23a4b5f19b82c3ce7c6a9a0275f981966b78d0f1d5977acfc5da\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cc1b41d65298f8986fa477bbe4462be91f61b46be4420fac49fe9b1cff91d4d3\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-10T12:15:57Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1210 12:15:46.775770 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1210 12:15:46.777515 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-772356655/tls.crt::/tmp/serving-cert-772356655/tls.key\\\\\\\"\\\\nI1210 12:15:55.300683 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1210 12:15:55.311834 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1210 12:15:55.311870 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1210 12:15:55.311907 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1210 12:15:55.311937 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1210 12:15:55.334823 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1210 12:15:55.334877 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1210 12:15:55.334889 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1210 12:15:55.334913 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1210 12:15:55.334923 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1210 12:15:55.334932 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1210 12:15:55.334948 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1210 12:15:55.335346 1 genericapiserver.go:533] MuxAndDiscoveryComplete has 
all endpoints registered and discovery information is complete\\\\nF1210 12:15:55.337241 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-10T12:15:36Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://63c6b92fa349dafbaaa4e2700676f8c1c53627fc703dba25853c7457bf7eb439\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:34Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9e2628add435f4ba7d9b92d1f4e9cf619c37e8dc954d6fce13ef72581ae775e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9e2628add435f4ba7d9b92d1f4e9cf619c37e8dc954d6fce13ef72581ae775e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T12:15:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T12:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T12:15:32Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:08Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:08 crc kubenswrapper[4689]: I1210 12:16:08.166684 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"18bf8be2-3022-4c8b-a062-e12ddac113a8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://44524b486bd4c6b69eb1219605d0545eaa129605484dfbca87ca741ce71c2991\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://604acbb51bb890eb420955051c89aa5a8f2005ce63f1fe39c09c35dc3c8b4470\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d4addfb661bd4ba43817e6e5db29f30589839e5afe46a8f271e43c98920efdc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a68967b286ccc9c486e05b03201574bfa811ee68b0cbcbb2b0b1a379f5c1bab\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T12:15:32Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:08Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:08 crc kubenswrapper[4689]: I1210 12:16:08.183075 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:57Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:08Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:08 crc kubenswrapper[4689]: I1210 12:16:08.198157 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-db6zk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a41ebdcd-910f-4669-992d-296e1a92162b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://21eff13bf08e5631078b18a6e63eea07f7ae0881f4d340b92273d47e443fc71c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:16:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68btn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ffc5e2e1cb81aa530273fc5c44b4eb25b760dfffa203e312280e7a1c0150505\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\
\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:16:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68btn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T12:16:01Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-db6zk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:08Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:08 crc kubenswrapper[4689]: I1210 12:16:08.217062 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-k7kbl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bf732b59-88ab-4673-9d0c-e479b9138b30\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:01Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6jq2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b32c1925876e067185b141b3e107d582b5ae3b422965cded935fc67935907244\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b32c1925876e067185b141b3e107d582b5ae3b422965cded935fc67935907244\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T12:16:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T12:16:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6jq2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://56edfde6c25c67ed13c859224563601ca60ff86122c0ccfd0eafd009f56c2c6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://56edfde6c25c67ed13c859224563601ca60ff86122c0ccfd0eafd009f56c2c6a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T12:16:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T12:16:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-6jq2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6972bdb1edc7d0216e984f43dd93319f5fbe91dbbf251fddcab234566c12694\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6972bdb1edc7d0216e984f43dd93319f5fbe91dbbf251fddcab234566c12694\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T12:16:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T12:16:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6jq2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://92ed0a4af61a7d20d96a8628e34b3b07cf93fb88bb8c67a2db58c140977d4b8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92ed0a4af61a7d20d96a8628e34b3b07cf93fb88bb8c67a2db58c140977d4b8f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T12:16:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T12:16:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6jq2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a00343d8419f3bd26ca5eebc37892fa34c893db822a472b2d16b79b9a886130\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:16:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly
\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6jq2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6jq2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T12:16:01Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-k7kbl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:08Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:08 crc kubenswrapper[4689]: I1210 12:16:08.236785 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-d5s24" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b434fe8e-c4c2-4979-a9b6-8561523c2d9d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgcnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgcnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgcnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-hgcnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgcnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgcnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgcnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgcnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5b9a4f871b33ab8d7dbb8148e8a118858ae0d529510a8aebda0a3d9db97c137\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b5b9a4f871b33ab8d7dbb8148e8a118858ae0d529510a8aebda0a3d9db97c137\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T12:16:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T12:16:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgcnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T12:16:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-d5s24\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:08Z 
is after 2025-08-24T17:21:41Z" Dec 10 12:16:08 crc kubenswrapper[4689]: I1210 12:16:08.248183 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:16:08 crc kubenswrapper[4689]: I1210 12:16:08.248247 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:16:08 crc kubenswrapper[4689]: I1210 12:16:08.248269 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:16:08 crc kubenswrapper[4689]: I1210 12:16:08.248297 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:16:08 crc kubenswrapper[4689]: I1210 12:16:08.248317 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:08Z","lastTransitionTime":"2025-12-10T12:16:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 12:16:08 crc kubenswrapper[4689]: I1210 12:16:08.259370 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06f8726511d5ea564b457eeb6de5b75275d78ee946595d8fbe37e575f3a1b4ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://879d2e122b3f677f755d79b93cd0abed60360a7b58bd685fbc229157c3358bbf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"
started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:08Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:08 crc kubenswrapper[4689]: I1210 12:16:08.281185 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:57Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:08Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:08 crc kubenswrapper[4689]: I1210 12:16:08.299630 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81ec49dd709fb88fdd4a831be61a82471be37ad10b0dca1cb0e34468ef5e0e3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:16:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:08Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:08 crc kubenswrapper[4689]: I1210 12:16:08.311388 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-77s7q" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9f25cd73-d88f-4d52-93b6-483589dc4ac4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8ffa3e3c20704e59130d99f82bf7c73982dadfcd06b8b666265324aeb9115ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:16:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kwnnt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T12:16:01Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-77s7q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:08Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:08 crc kubenswrapper[4689]: I1210 12:16:08.350803 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:16:08 crc kubenswrapper[4689]: I1210 12:16:08.351111 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:16:08 crc kubenswrapper[4689]: I1210 12:16:08.351248 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:16:08 crc kubenswrapper[4689]: I1210 12:16:08.351342 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:16:08 crc kubenswrapper[4689]: I1210 12:16:08.351423 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:08Z","lastTransitionTime":"2025-12-10T12:16:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 12:16:08 crc kubenswrapper[4689]: I1210 12:16:08.455053 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:16:08 crc kubenswrapper[4689]: I1210 12:16:08.455417 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:16:08 crc kubenswrapper[4689]: I1210 12:16:08.455583 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:16:08 crc kubenswrapper[4689]: I1210 12:16:08.456005 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:16:08 crc kubenswrapper[4689]: I1210 12:16:08.456192 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:08Z","lastTransitionTime":"2025-12-10T12:16:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 12:16:08 crc kubenswrapper[4689]: I1210 12:16:08.497401 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 10 12:16:08 crc kubenswrapper[4689]: E1210 12:16:08.497606 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 10 12:16:08 crc kubenswrapper[4689]: I1210 12:16:08.559454 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:16:08 crc kubenswrapper[4689]: I1210 12:16:08.559520 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:16:08 crc kubenswrapper[4689]: I1210 12:16:08.560339 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:16:08 crc kubenswrapper[4689]: I1210 12:16:08.560387 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:16:08 crc kubenswrapper[4689]: I1210 12:16:08.560414 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:08Z","lastTransitionTime":"2025-12-10T12:16:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 12:16:08 crc kubenswrapper[4689]: I1210 12:16:08.664258 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:16:08 crc kubenswrapper[4689]: I1210 12:16:08.664306 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:16:08 crc kubenswrapper[4689]: I1210 12:16:08.664317 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:16:08 crc kubenswrapper[4689]: I1210 12:16:08.664336 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:16:08 crc kubenswrapper[4689]: I1210 12:16:08.664352 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:08Z","lastTransitionTime":"2025-12-10T12:16:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 12:16:08 crc kubenswrapper[4689]: I1210 12:16:08.766764 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:16:08 crc kubenswrapper[4689]: I1210 12:16:08.766807 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:16:08 crc kubenswrapper[4689]: I1210 12:16:08.766816 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:16:08 crc kubenswrapper[4689]: I1210 12:16:08.766830 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:16:08 crc kubenswrapper[4689]: I1210 12:16:08.766838 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:08Z","lastTransitionTime":"2025-12-10T12:16:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 12:16:08 crc kubenswrapper[4689]: I1210 12:16:08.869728 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:16:08 crc kubenswrapper[4689]: I1210 12:16:08.869772 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:16:08 crc kubenswrapper[4689]: I1210 12:16:08.869783 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:16:08 crc kubenswrapper[4689]: I1210 12:16:08.869802 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:16:08 crc kubenswrapper[4689]: I1210 12:16:08.869815 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:08Z","lastTransitionTime":"2025-12-10T12:16:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 12:16:08 crc kubenswrapper[4689]: I1210 12:16:08.973349 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:16:08 crc kubenswrapper[4689]: I1210 12:16:08.973420 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:16:08 crc kubenswrapper[4689]: I1210 12:16:08.973443 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:16:08 crc kubenswrapper[4689]: I1210 12:16:08.973473 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:16:08 crc kubenswrapper[4689]: I1210 12:16:08.973493 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:08Z","lastTransitionTime":"2025-12-10T12:16:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 12:16:09 crc kubenswrapper[4689]: I1210 12:16:09.073062 4689 generic.go:334] "Generic (PLEG): container finished" podID="bf732b59-88ab-4673-9d0c-e479b9138b30" containerID="3a00343d8419f3bd26ca5eebc37892fa34c893db822a472b2d16b79b9a886130" exitCode=0 Dec 10 12:16:09 crc kubenswrapper[4689]: I1210 12:16:09.073137 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-k7kbl" event={"ID":"bf732b59-88ab-4673-9d0c-e479b9138b30","Type":"ContainerDied","Data":"3a00343d8419f3bd26ca5eebc37892fa34c893db822a472b2d16b79b9a886130"} Dec 10 12:16:09 crc kubenswrapper[4689]: I1210 12:16:09.076631 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:16:09 crc kubenswrapper[4689]: I1210 12:16:09.076685 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:16:09 crc kubenswrapper[4689]: I1210 12:16:09.076707 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:16:09 crc kubenswrapper[4689]: I1210 12:16:09.076735 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:16:09 crc kubenswrapper[4689]: I1210 12:16:09.076758 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:09Z","lastTransitionTime":"2025-12-10T12:16:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 12:16:09 crc kubenswrapper[4689]: I1210 12:16:09.082921 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-d5s24" event={"ID":"b434fe8e-c4c2-4979-a9b6-8561523c2d9d","Type":"ContainerStarted","Data":"5540d337c2977642446fd1ebb59fb99e1486588b986e1fb9bac6f0b95bebc05d"} Dec 10 12:16:09 crc kubenswrapper[4689]: I1210 12:16:09.084020 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-d5s24" Dec 10 12:16:09 crc kubenswrapper[4689]: I1210 12:16:09.084043 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-d5s24" Dec 10 12:16:09 crc kubenswrapper[4689]: I1210 12:16:09.084055 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-d5s24" Dec 10 12:16:09 crc kubenswrapper[4689]: I1210 12:16:09.091942 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef7248d1db79267842514a13623e9e08f8e412e4af33a70b4fceafe7a9e43392\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:09Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:09 crc kubenswrapper[4689]: I1210 12:16:09.117203 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:09Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:09 crc kubenswrapper[4689]: I1210 12:16:09.119920 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-d5s24" Dec 10 12:16:09 crc kubenswrapper[4689]: I1210 12:16:09.123337 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-d5s24" Dec 10 12:16:09 crc kubenswrapper[4689]: I1210 12:16:09.133439 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-r6wmt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3713b4f8-2ee3-4078-859a-dca17076f9a6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://41b7cefca05f1180d687337ba2a408d0db12aad394a7e9619e004758e5d79ca0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:16:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lmrn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T12:16:01Z\\\"}}\" for pod \"openshift-multus\"/\"multus-r6wmt\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:09Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:09 crc kubenswrapper[4689]: I1210 12:16:09.148225 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-dk9hq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4aff4b91-a4af-46bd-93ba-a7c1356fc498\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e400561fd4aa4adc6f0733a71b55aff1a6163a71bf34147aeeb41f95001d8222\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:16:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c2rw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T12:16:04Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-dk9hq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:09Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:09 crc kubenswrapper[4689]: I1210 12:16:09.171603 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"948e2421-6bdf-45d9-b484-3d7cfcbeff5f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:32Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:32Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73fb5dc9c58a2a2027866853aecf404bcc2babdb9e5bbd3cf0efcb16f38ba351\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e167a00c3a4e40154ca58094cba41a896104fd81044158589ed4d7a06dba9e7d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://394f62bb6c4fa6d9b63f8404087ff96594e2e38240c6c7886ad71fbb7863681f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d3818f3ff6d23a4b5f19b82c3ce7c6a9a0275f981966b78d0f1d5977acfc5da\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cc1b41d65298f8986fa477bbe4462be91f61b46be4420fac49fe9b1cff91d4d3\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-10T12:15:57Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1210 12:15:46.775770 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1210 12:15:46.777515 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-772356655/tls.crt::/tmp/serving-cert-772356655/tls.key\\\\\\\"\\\\nI1210 12:15:55.300683 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1210 12:15:55.311834 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1210 12:15:55.311870 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1210 12:15:55.311907 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1210 12:15:55.311937 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1210 12:15:55.334823 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1210 12:15:55.334877 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1210 12:15:55.334889 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1210 12:15:55.334913 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1210 12:15:55.334923 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1210 12:15:55.334932 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1210 12:15:55.334948 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1210 12:15:55.335346 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1210 12:15:55.337241 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-10T12:15:36Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://63c6b92fa349dafbaaa4e2700676f8c1c53627fc703dba25853c7457bf7eb439\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:34Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9e2628add435f4ba7d9b92d1f4e9cf619c37e8dc954d6fce13ef72581ae775e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9e2628add435f4ba7d9b92d1f4e9cf619c37e8dc954d6fce13ef72581ae775e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T12:15:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T12:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T12:15:32Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:09Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:09 crc kubenswrapper[4689]: I1210 12:16:09.178862 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:16:09 crc kubenswrapper[4689]: I1210 12:16:09.178909 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:16:09 crc kubenswrapper[4689]: I1210 12:16:09.178950 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:16:09 crc kubenswrapper[4689]: I1210 12:16:09.178995 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:16:09 crc kubenswrapper[4689]: I1210 12:16:09.179009 4689 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:09Z","lastTransitionTime":"2025-12-10T12:16:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 12:16:09 crc kubenswrapper[4689]: I1210 12:16:09.192418 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"18bf8be2-3022-4c8b-a062-e12ddac113a8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://44524b486bd4c6b69eb1219605d0545eaa129605484dfbca87ca741ce71c2991\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://604acbb51bb890eb420955051c89aa5a8f2005ce63f1fe39c09c35dc3c8b4470\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d4addfb661bd4ba43817e6e5db29f30589839e5afe46a8f271e43c98920efdc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastS
tate\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a68967b286ccc9c486e05b03201574bfa811ee68b0cbcbb2b0b1a379f5c1bab\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T12:15:32Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:09Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:09 crc kubenswrapper[4689]: I1210 12:16:09.204496 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:57Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:09Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:09 crc kubenswrapper[4689]: I1210 12:16:09.215370 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-db6zk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a41ebdcd-910f-4669-992d-296e1a92162b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://21eff13bf08e5631078b18a6e63eea07f7ae0881f4d340b92273d47e443fc71c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:16:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68btn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ffc5e2e1cb81aa530273fc5c44b4eb25b760dfffa203e312280e7a1c0150505\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\
\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:16:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68btn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T12:16:01Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-db6zk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:09Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:09 crc kubenswrapper[4689]: I1210 12:16:09.229074 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-k7kbl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bf732b59-88ab-4673-9d0c-e479b9138b30\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:01Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6jq2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b32c1925876e067185b141b3e107d582b5ae3b422965cded935fc67935907244\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b32c1925876e067185b141b3e107d582b5ae3b422965cded935fc67935907244\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T12:16:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T12:16:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6jq2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://56edfde6c25c67ed13c859224563601ca60ff86122c0ccfd0eafd009f56c2c6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://56edfde6c25c67ed13c859224563601ca60ff86122c0ccfd0eafd009f56c2c6a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T12:16:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T12:16:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-6jq2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6972bdb1edc7d0216e984f43dd93319f5fbe91dbbf251fddcab234566c12694\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6972bdb1edc7d0216e984f43dd93319f5fbe91dbbf251fddcab234566c12694\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T12:16:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T12:16:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6jq2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://92ed0a4af61a7d20d96a8628e34b3b07cf93fb88bb8c67a2db58c140977d4b8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92ed0a4af61a7d20d96a8628e34b3b07cf93fb88bb8c67a2db58c140977d4b8f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T12:16:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T12:16:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6jq2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a00343d8419f3bd26ca5eebc37892fa34c893db822a472b2d16b79b9a886130\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3a00343d8419f3bd26ca5eebc37892fa34c893db822a472b2d16b79b9a886130\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T12:16:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T12:16:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",
\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6jq2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6jq2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T12:16:01Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-k7kbl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:09Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:09 crc kubenswrapper[4689]: I1210 12:16:09.248771 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-d5s24" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b434fe8e-c4c2-4979-a9b6-8561523c2d9d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgcnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgcnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgcnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-hgcnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgcnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgcnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgcnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgcnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5b9a4f871b33ab8d7dbb8148e8a118858ae0d529510a8aebda0a3d9db97c137\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b5b9a4f871b33ab8d7dbb8148e8a118858ae0d529510a8aebda0a3d9db97c137\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T12:16:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T12:16:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgcnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T12:16:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-d5s24\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:09Z 
is after 2025-08-24T17:21:41Z" Dec 10 12:16:09 crc kubenswrapper[4689]: I1210 12:16:09.260058 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:57Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:09Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:09 crc kubenswrapper[4689]: I1210 12:16:09.271862 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06f8726511d5ea564b457eeb6de5b75275d78ee946595d8fbe37e575f3a1b4ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://879d2e122b3f677f755d79b93cd0abed60360a7b58bd685fbc229157c3358bbf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:09Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:09 crc kubenswrapper[4689]: I1210 12:16:09.281400 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-77s7q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9f25cd73-d88f-4d52-93b6-483589dc4ac4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8ffa3e3c20704e59130d99f82bf7c73982dadfcd06b8b666265324aeb9115ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:16:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kwnnt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T12:16:01Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-77s7q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:09Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:09 crc kubenswrapper[4689]: I1210 12:16:09.282954 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:16:09 crc kubenswrapper[4689]: I1210 12:16:09.283006 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:16:09 crc kubenswrapper[4689]: I1210 12:16:09.283016 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:16:09 crc kubenswrapper[4689]: I1210 12:16:09.283029 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:16:09 crc kubenswrapper[4689]: I1210 12:16:09.283038 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:09Z","lastTransitionTime":"2025-12-10T12:16:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 12:16:09 crc kubenswrapper[4689]: I1210 12:16:09.291814 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81ec49dd709fb88fdd4a831be61a82471be37ad10b0dca1cb0e34468ef5e0e3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:16:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:09Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:09 crc kubenswrapper[4689]: I1210 12:16:09.302963 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef7248d1db79267842514a13623e9e08f8e412e4af33a70b4fceafe7a9e43392\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:09Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:09 crc kubenswrapper[4689]: I1210 12:16:09.313038 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:09Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:09 crc kubenswrapper[4689]: I1210 12:16:09.324199 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-r6wmt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3713b4f8-2ee3-4078-859a-dca17076f9a6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://41b7cefca05f1180d687337ba2a408d0db12aad394a7e9619e004758e5d79ca0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:16:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-l
ib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lmrn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T12:16:01Z\\\"}}\" for pod \"openshift-multus\"/\"multus-r6wmt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:09Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:09 crc kubenswrapper[4689]: I1210 12:16:09.333225 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-dk9hq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4aff4b91-a4af-46bd-93ba-a7c1356fc498\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e400561fd4aa4adc6f0733a71b55aff1a6163a71bf34147aeeb41f95001d8222\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:16:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c2rw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\"
:[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T12:16:04Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-dk9hq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:09Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:09 crc kubenswrapper[4689]: I1210 12:16:09.345909 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"948e2421-6bdf-45d9-b484-3d7cfcbeff5f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:32Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:32Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73fb5dc9c58a2a2027866853aecf404bcc2babdb9e5bbd3cf0efcb16f38ba351\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e167a00c3a4e40154ca58094cba41a896104fd81044158589ed4d7a06dba9e7d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"na
me\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://394f62bb6c4fa6d9b63f8404087ff96594e2e38240c6c7886ad71fbb7863681f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d3818f3ff6d23a4b5f19b82c3ce7c6a9a0275f981966b78d0f1d5977acfc5da\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cc1b41d65298f8986fa477bbe4462be91f61b46be4420fac49fe9b1cff91d4d3\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-10T12:15:57Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1210 12:15:46.775770 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1210 12:15:46.777515 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-772356655/tls.crt::/tmp/serving-cert-772356655/tls.key\\\\\\\"\\\\nI1210 12:15:55.300683 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1210 12:15:55.311834 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1210 12:15:55.311870 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1210 12:15:55.311907 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1210 12:15:55.311937 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1210 12:15:55.334823 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1210 12:15:55.334877 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1210 12:15:55.334889 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1210 12:15:55.334913 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1210 12:15:55.334923 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1210 12:15:55.334932 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1210 12:15:55.334948 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1210 12:15:55.335346 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is 
complete\\\\nF1210 12:15:55.337241 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-10T12:15:36Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://63c6b92fa349dafbaaa4e2700676f8c1c53627fc703dba25853c7457bf7eb439\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:34Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9e2628add435f4ba7d9b92d1f4e9cf619c37e8dc954d6fce13ef72581ae775e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9e2628add435f4ba7d9b92d1f4e9cf619c37e8dc954d6fce13ef72581ae775e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T12:15:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T12:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T12:15:32Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:09Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:09 crc kubenswrapper[4689]: I1210 12:16:09.357209 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"18bf8be2-3022-4c8b-a062-e12ddac113a8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://44524b486bd4c6b69eb1219605d0545eaa129605484dfbca87ca741ce71c2991\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://604acbb51bb890eb420955051c89aa5a8f2005ce63f1fe39c09c35dc3c8b4470\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d4addfb661bd4ba43817e6e5db29f30589839e5afe46a8f271e43c98920efdc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a68967b286ccc9c486e05b03201574bfa811ee68b0cbcbb2b0b1a379f5c1bab\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T12:15:32Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:09Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:09 crc kubenswrapper[4689]: I1210 12:16:09.372316 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:57Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:09Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:09 crc kubenswrapper[4689]: I1210 12:16:09.381525 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-db6zk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a41ebdcd-910f-4669-992d-296e1a92162b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://21eff13bf08e5631078b18a6e63eea07f7ae0881f4d340b92273d47e443fc71c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:16:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68btn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ffc5e2e1cb81aa530273fc5c44b4eb25b760dfffa203e312280e7a1c0150505\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\
\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:16:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68btn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T12:16:01Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-db6zk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:09Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:09 crc kubenswrapper[4689]: I1210 12:16:09.385298 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:16:09 crc kubenswrapper[4689]: I1210 12:16:09.385330 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:16:09 crc kubenswrapper[4689]: I1210 12:16:09.385339 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:16:09 crc kubenswrapper[4689]: I1210 12:16:09.385355 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:16:09 crc kubenswrapper[4689]: I1210 12:16:09.385365 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:09Z","lastTransitionTime":"2025-12-10T12:16:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 12:16:09 crc kubenswrapper[4689]: I1210 12:16:09.395109 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-k7kbl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bf732b59-88ab-4673-9d0c-e479b9138b30\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:01Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6jq2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b32c1925876e067185b141b3e107d582b5ae3b422965cded935fc67935907244\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b32c1925876e067185b141b3e107d582b5ae3b422965cded935fc67935907244\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T12:16:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T12:16:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6jq
2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://56edfde6c25c67ed13c859224563601ca60ff86122c0ccfd0eafd009f56c2c6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://56edfde6c25c67ed13c859224563601ca60ff86122c0ccfd0eafd009f56c2c6a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T12:16:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T12:16:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6jq2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6972bdb1edc7d0216e984f43dd93319f5fbe91dbbf251fddcab234566c12694\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6972bdb1edc7d0216e984f43dd93319f5fbe91dbbf251fddcab234566c12694\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T12:16:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T12:16:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6jq2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://92ed0a4af61a7d20d96a8628e34b3b07cf93fb88bb8c67a2db58c140977d4b8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92ed0a4af61a7d20d96a8628e34b3b07cf93fb88bb8c67a2db58c140977d4b8f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T12:16:06Z\\\",\\\"reason\\\
":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T12:16:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6jq2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a00343d8419f3bd26ca5eebc37892fa34c893db822a472b2d16b79b9a886130\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3a00343d8419f3bd26ca5eebc37892fa34c893db822a472b2d16b79b9a886130\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T12:16:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T12:16:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6jq2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6jq2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T12:16:01Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-k7kbl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:09Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:09 crc kubenswrapper[4689]: I1210 12:16:09.420158 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-d5s24" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b434fe8e-c4c2-4979-a9b6-8561523c2d9d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4dd21d5fefe31117dd9f13896400256a11840488886f8455dc177c0389e942a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:16:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgcnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://69f1e8960af0a51f61c14bd54dc3d3be7189911242699061b6a5e265bf23da21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:16:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgcnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6523d85b2b13f0025f328e66fc54d988a262d66575656c50d792bebc6a422301\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:16:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgcnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4faf578e1c2dcab798cd856befbd2dfc061f7e003ead2304d437041d5ee1dd07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:16:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgcnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eebe28cf16767ec011f1e571cdb5f6babf931f4444e92fccf2f1bb5aa5deeec9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:16:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgcnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://17d173f3388ef7471022163cbdf3e0a2454e42b1ed621b15512414504b2d9723\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:16:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgcnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5540d337c2977642446fd1ebb59fb99e1486588b986e1fb9bac6f0b95bebc05d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:16:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgcnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"D
isabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e28b779b8a8693084ab0997decc53b3a8eba61bef24ac90ff251d915f086568c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:16:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgcnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5b9a4f871b33ab8d7dbb8148e8a118858ae0d529510a8aebda0a3d9db97c137\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b5b9a4f871b33ab8d7dbb8148e8a118858ae0d529510a8aebda0a3d9db97c137\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T12:16:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T12:16:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgcnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T12:16:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-d5s24\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:09Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:09 crc kubenswrapper[4689]: I1210 12:16:09.433175 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06f8726511d5ea564b457eeb6de5b75275d78ee946595d8fbe37e575f3a1b4ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://879d2e122b3f677f755d79b93cd0abed60360a7b58bd685fbc229157c3358bbf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:09Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:09 crc kubenswrapper[4689]: I1210 12:16:09.444301 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:57Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:09Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:09 crc kubenswrapper[4689]: I1210 12:16:09.460420 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81ec49dd709fb88fdd4a831be61a82471be37ad10b0dca1cb0e34468ef5e0e3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:16:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:09Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:09 crc kubenswrapper[4689]: I1210 12:16:09.471775 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-77s7q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9f25cd73-d88f-4d52-93b6-483589dc4ac4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8ffa3e3c20704e59130d99f82bf7c73982dadfcd06b8b666265324aeb9115ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:16:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kwnnt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T12:16:01Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-77s7q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:09Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:09 crc kubenswrapper[4689]: I1210 12:16:09.487674 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:16:09 crc kubenswrapper[4689]: I1210 12:16:09.487741 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:16:09 crc kubenswrapper[4689]: I1210 12:16:09.487753 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:16:09 crc kubenswrapper[4689]: I1210 12:16:09.487777 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:16:09 crc kubenswrapper[4689]: I1210 12:16:09.487791 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:09Z","lastTransitionTime":"2025-12-10T12:16:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 12:16:09 crc kubenswrapper[4689]: I1210 12:16:09.498042 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 10 12:16:09 crc kubenswrapper[4689]: I1210 12:16:09.498043 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 10 12:16:09 crc kubenswrapper[4689]: E1210 12:16:09.498307 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 10 12:16:09 crc kubenswrapper[4689]: E1210 12:16:09.498429 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 10 12:16:09 crc kubenswrapper[4689]: I1210 12:16:09.593205 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:16:09 crc kubenswrapper[4689]: I1210 12:16:09.593255 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:16:09 crc kubenswrapper[4689]: I1210 12:16:09.593273 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:16:09 crc kubenswrapper[4689]: I1210 12:16:09.593297 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:16:09 crc kubenswrapper[4689]: I1210 12:16:09.593319 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:09Z","lastTransitionTime":"2025-12-10T12:16:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 12:16:09 crc kubenswrapper[4689]: I1210 12:16:09.696841 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:16:09 crc kubenswrapper[4689]: I1210 12:16:09.696902 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:16:09 crc kubenswrapper[4689]: I1210 12:16:09.696923 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:16:09 crc kubenswrapper[4689]: I1210 12:16:09.696951 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:16:09 crc kubenswrapper[4689]: I1210 12:16:09.697002 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:09Z","lastTransitionTime":"2025-12-10T12:16:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 12:16:09 crc kubenswrapper[4689]: I1210 12:16:09.800073 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:16:09 crc kubenswrapper[4689]: I1210 12:16:09.800119 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:16:09 crc kubenswrapper[4689]: I1210 12:16:09.800133 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:16:09 crc kubenswrapper[4689]: I1210 12:16:09.800151 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:16:09 crc kubenswrapper[4689]: I1210 12:16:09.800165 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:09Z","lastTransitionTime":"2025-12-10T12:16:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 12:16:09 crc kubenswrapper[4689]: I1210 12:16:09.903364 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:16:09 crc kubenswrapper[4689]: I1210 12:16:09.903436 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:16:09 crc kubenswrapper[4689]: I1210 12:16:09.903454 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:16:09 crc kubenswrapper[4689]: I1210 12:16:09.903478 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:16:09 crc kubenswrapper[4689]: I1210 12:16:09.903495 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:09Z","lastTransitionTime":"2025-12-10T12:16:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 12:16:10 crc kubenswrapper[4689]: I1210 12:16:10.007014 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:16:10 crc kubenswrapper[4689]: I1210 12:16:10.007071 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:16:10 crc kubenswrapper[4689]: I1210 12:16:10.007091 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:16:10 crc kubenswrapper[4689]: I1210 12:16:10.007121 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:16:10 crc kubenswrapper[4689]: I1210 12:16:10.007140 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:10Z","lastTransitionTime":"2025-12-10T12:16:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 12:16:10 crc kubenswrapper[4689]: I1210 12:16:10.090704 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-k7kbl" event={"ID":"bf732b59-88ab-4673-9d0c-e479b9138b30","Type":"ContainerStarted","Data":"ae1b0abfe7d118fce3906b0fb8609b89bb5a7d9a72e77c989248a37fbcaca589"} Dec 10 12:16:10 crc kubenswrapper[4689]: I1210 12:16:10.111212 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:16:10 crc kubenswrapper[4689]: I1210 12:16:10.111272 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:16:10 crc kubenswrapper[4689]: I1210 12:16:10.111290 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:16:10 crc kubenswrapper[4689]: I1210 12:16:10.111315 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:16:10 crc kubenswrapper[4689]: I1210 12:16:10.111335 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:10Z","lastTransitionTime":"2025-12-10T12:16:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 12:16:10 crc kubenswrapper[4689]: I1210 12:16:10.112401 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81ec49dd709fb88fdd4a831be61a82471be37ad10b0dca1cb0e34468ef5e0e3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:16:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:10Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:10 crc kubenswrapper[4689]: I1210 12:16:10.126066 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-77s7q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9f25cd73-d88f-4d52-93b6-483589dc4ac4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8ffa3e3c20704e59130d99f82bf7c73982dadfcd06b8b666265324aeb9115ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:16:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kwnnt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T12:16:01Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-77s7q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:10Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:10 crc kubenswrapper[4689]: I1210 12:16:10.146344 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef7248d1db79267842514a13623e9e08f8e412e4af33a70b4fceafe7a9e43392\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:10Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:10 crc kubenswrapper[4689]: I1210 12:16:10.172606 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:10Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:10 crc kubenswrapper[4689]: I1210 12:16:10.190897 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-r6wmt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3713b4f8-2ee3-4078-859a-dca17076f9a6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://41b7cefca05f1180d687337ba2a408d0db12aad394a7e9619e004758e5d79ca0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:16:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-l
ib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lmrn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T12:16:01Z\\\"}}\" for pod \"openshift-multus\"/\"multus-r6wmt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:10Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:10 crc kubenswrapper[4689]: I1210 12:16:10.205378 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-dk9hq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4aff4b91-a4af-46bd-93ba-a7c1356fc498\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e400561fd4aa4adc6f0733a71b55aff1a6163a71bf34147aeeb41f95001d8222\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:16:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c2rw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\"
:[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T12:16:04Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-dk9hq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:10Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:10 crc kubenswrapper[4689]: I1210 12:16:10.213746 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:16:10 crc kubenswrapper[4689]: I1210 12:16:10.213773 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:16:10 crc kubenswrapper[4689]: I1210 12:16:10.213782 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:16:10 crc kubenswrapper[4689]: I1210 12:16:10.213794 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:16:10 crc kubenswrapper[4689]: I1210 12:16:10.213804 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:10Z","lastTransitionTime":"2025-12-10T12:16:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 12:16:10 crc kubenswrapper[4689]: I1210 12:16:10.215921 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"948e2421-6bdf-45d9-b484-3d7cfcbeff5f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:32Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73fb5dc9c58a2a2027866853aecf404bcc2babdb9e5bbd3cf0efcb16f38ba351\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e167a00c3a4e40154ca58094cba41a896104fd81044158589ed4d7a06dba9e7d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://394f62bb6c4fa6d9b63f8404087ff96594e2e38240c6c7886ad71fbb7863681f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d3818f3ff6d23a4b5f19b82c3ce7c6a9a0275f981966b78d0f1d5977acfc5da\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cc1b41d65298f8986fa477bbe4462be91f61b46be4420fac49fe9b1cff91d4d3\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-10T12:15:57Z\\\",\\\"message\\\":\\\"lling back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1210 12:15:46.775770 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1210 12:15:46.777515 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-772356655/tls.crt::/tmp/serving-cert-772356655/tls.key\\\\\\\"\\\\nI1210 12:15:55.300683 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1210 12:15:55.311834 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1210 12:15:55.311870 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1210 12:15:55.311907 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1210 12:15:55.311937 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1210 12:15:55.334823 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1210 12:15:55.334877 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1210 12:15:55.334889 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1210 12:15:55.334913 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1210 12:15:55.334923 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1210 12:15:55.334932 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1210 12:15:55.334948 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1210 12:15:55.335346 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1210 12:15:55.337241 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-10T12:15:36Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://63c6b92fa349dafbaaa4e2700676f8c1c53627fc703dba25853c7457bf7eb439\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:34Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9e2628add435f4ba7d9b92d1f4e9cf619c37e8dc954d6fce13ef72581ae775e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9e2628add435f4ba7d9b92d1f4e9cf619c37e8dc954d6fce13ef72581ae775e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T12:15:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T12:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T12:15:32Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:10Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:10 crc kubenswrapper[4689]: I1210 12:16:10.225945 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"18bf8be2-3022-4c8b-a062-e12ddac113a8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://44524b486bd4c6b69eb1219605d0545eaa129605484dfbca87ca741ce71c2991\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://604acbb51bb890eb420955051c89aa5a8f2005ce63f1fe39c09c35dc3c8b4470\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d4addfb661bd4ba43817e6e5db29f30589839e5afe46a8f271e43c98920efdc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a68967b286ccc9c486e05b03201574bfa811ee68b0cbcbb2b0b1a379f5c1bab\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T12:15:32Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:10Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:10 crc kubenswrapper[4689]: I1210 12:16:10.236849 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:57Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:10Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:10 crc kubenswrapper[4689]: I1210 12:16:10.247558 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-db6zk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a41ebdcd-910f-4669-992d-296e1a92162b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://21eff13bf08e5631078b18a6e63eea07f7ae0881f4d340b92273d47e443fc71c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:16:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68btn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ffc5e2e1cb81aa530273fc5c44b4eb25b760dfffa203e312280e7a1c0150505\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\
\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:16:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68btn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T12:16:01Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-db6zk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:10Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:10 crc kubenswrapper[4689]: I1210 12:16:10.262046 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-k7kbl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bf732b59-88ab-4673-9d0c-e479b9138b30\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:01Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6jq2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b32c1925876e067185b141b3e107d582b5ae3b422965cded935fc67935907244\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b32c1925876e067185b141b3e107d582b5ae3b422965cded935fc67935907244\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T12:16:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T12:16:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6jq2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://56edfde6c25c67ed13c859224563601ca60ff86122c0ccfd0eafd009f56c2c6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://56edfde6c25c67ed13c859224563601ca60ff86122c0ccfd0eafd009f56c2c6a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T12:16:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T12:16:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-6jq2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6972bdb1edc7d0216e984f43dd93319f5fbe91dbbf251fddcab234566c12694\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6972bdb1edc7d0216e984f43dd93319f5fbe91dbbf251fddcab234566c12694\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T12:16:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T12:16:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6jq2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://92ed0a4af61a7d20d96a8628e34b3b07cf93fb88bb8c67a2db58c140977d4b8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92ed0a4af61a7d20d96a8628e34b3b07cf93fb88bb8c67a2db58c140977d4b8f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T12:16:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T12:16:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6jq2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a00343d8419f3bd26ca5eebc37892fa34c893db822a472b2d16b79b9a886130\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3a00343d8419f3bd26ca5eebc37892fa34c893db822a472b2d16b79b9a886130\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T12:16:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T12:16:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",
\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6jq2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae1b0abfe7d118fce3906b0fb8609b89bb5a7d9a72e77c989248a37fbcaca589\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:16:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6jq2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T12:16:01Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-k7kbl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:10Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:10 crc kubenswrapper[4689]: I1210 12:16:10.279531 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-d5s24" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b434fe8e-c4c2-4979-a9b6-8561523c2d9d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4dd21d5fefe31117dd9f13896400256a11840488886f8455dc177c0389e942a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:16:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgcnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://69f1e8960af0a51f61c14bd54dc3d3be7189911242699061b6a5e265bf23da21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:16:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgcnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6523d85b2b13f0025f328e66fc54d988a262d66575656c50d792bebc6a422301\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:16:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgcnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4faf578e1c2dcab798cd856befbd2dfc061f7e003ead2304d437041d5ee1dd07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:16:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgcnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eebe28cf16767ec011f1e571cdb5f6babf931f4444e92fccf2f1bb5aa5deeec9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:16:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgcnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://17d173f3388ef7471022163cbdf3e0a2454e42b1ed621b15512414504b2d9723\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:16:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgcnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5540d337c2977642446fd1ebb59fb99e1486588b
986e1fb9bac6f0b95bebc05d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:16:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgcnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e28b779b8a8693084ab0997decc53b3a8eba61bef24ac90ff251d915f086568c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:16:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccoun
t\\\",\\\"name\\\":\\\"kube-api-access-hgcnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5b9a4f871b33ab8d7dbb8148e8a118858ae0d529510a8aebda0a3d9db97c137\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b5b9a4f871b33ab8d7dbb8148e8a118858ae0d529510a8aebda0a3d9db97c137\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T12:16:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T12:16:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgcnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T12:16:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-d5s24\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:10Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:10 crc kubenswrapper[4689]: I1210 12:16:10.292450 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06f8726511d5ea564b457eeb6de5b75275d78ee946595d8fbe37e575f3a1b4ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://879d2e122b3f677f755d79b93cd0abed60360a7b58bd685fbc229157c3358bbf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:10Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:10 crc kubenswrapper[4689]: I1210 12:16:10.306468 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:57Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:10Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:10 crc kubenswrapper[4689]: I1210 12:16:10.316253 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:16:10 crc kubenswrapper[4689]: I1210 12:16:10.316278 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:16:10 crc kubenswrapper[4689]: I1210 12:16:10.316286 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:16:10 crc kubenswrapper[4689]: I1210 12:16:10.316301 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:16:10 crc kubenswrapper[4689]: I1210 12:16:10.316310 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:10Z","lastTransitionTime":"2025-12-10T12:16:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 12:16:10 crc kubenswrapper[4689]: I1210 12:16:10.418585 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:16:10 crc kubenswrapper[4689]: I1210 12:16:10.418623 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:16:10 crc kubenswrapper[4689]: I1210 12:16:10.418635 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:16:10 crc kubenswrapper[4689]: I1210 12:16:10.418649 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:16:10 crc kubenswrapper[4689]: I1210 12:16:10.418660 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:10Z","lastTransitionTime":"2025-12-10T12:16:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 12:16:10 crc kubenswrapper[4689]: I1210 12:16:10.498284 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 10 12:16:10 crc kubenswrapper[4689]: E1210 12:16:10.498645 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 10 12:16:10 crc kubenswrapper[4689]: I1210 12:16:10.521196 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:16:10 crc kubenswrapper[4689]: I1210 12:16:10.521270 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:16:10 crc kubenswrapper[4689]: I1210 12:16:10.521296 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:16:10 crc kubenswrapper[4689]: I1210 12:16:10.521321 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:16:10 crc kubenswrapper[4689]: I1210 12:16:10.521342 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:10Z","lastTransitionTime":"2025-12-10T12:16:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 12:16:10 crc kubenswrapper[4689]: I1210 12:16:10.623782 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:16:10 crc kubenswrapper[4689]: I1210 12:16:10.623811 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:16:10 crc kubenswrapper[4689]: I1210 12:16:10.623823 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:16:10 crc kubenswrapper[4689]: I1210 12:16:10.623840 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:16:10 crc kubenswrapper[4689]: I1210 12:16:10.623851 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:10Z","lastTransitionTime":"2025-12-10T12:16:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 12:16:10 crc kubenswrapper[4689]: I1210 12:16:10.727313 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:16:10 crc kubenswrapper[4689]: I1210 12:16:10.727361 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:16:10 crc kubenswrapper[4689]: I1210 12:16:10.727377 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:16:10 crc kubenswrapper[4689]: I1210 12:16:10.727399 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:16:10 crc kubenswrapper[4689]: I1210 12:16:10.727418 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:10Z","lastTransitionTime":"2025-12-10T12:16:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 12:16:10 crc kubenswrapper[4689]: I1210 12:16:10.831519 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:16:10 crc kubenswrapper[4689]: I1210 12:16:10.831568 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:16:10 crc kubenswrapper[4689]: I1210 12:16:10.831585 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:16:10 crc kubenswrapper[4689]: I1210 12:16:10.831609 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:16:10 crc kubenswrapper[4689]: I1210 12:16:10.831625 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:10Z","lastTransitionTime":"2025-12-10T12:16:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 12:16:10 crc kubenswrapper[4689]: I1210 12:16:10.933693 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:16:10 crc kubenswrapper[4689]: I1210 12:16:10.933721 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:16:10 crc kubenswrapper[4689]: I1210 12:16:10.933729 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:16:10 crc kubenswrapper[4689]: I1210 12:16:10.933741 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:16:10 crc kubenswrapper[4689]: I1210 12:16:10.933750 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:10Z","lastTransitionTime":"2025-12-10T12:16:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 12:16:11 crc kubenswrapper[4689]: I1210 12:16:11.036488 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:16:11 crc kubenswrapper[4689]: I1210 12:16:11.036546 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:16:11 crc kubenswrapper[4689]: I1210 12:16:11.036564 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:16:11 crc kubenswrapper[4689]: I1210 12:16:11.036590 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:16:11 crc kubenswrapper[4689]: I1210 12:16:11.036607 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:11Z","lastTransitionTime":"2025-12-10T12:16:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 12:16:11 crc kubenswrapper[4689]: I1210 12:16:11.097180 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-d5s24_b434fe8e-c4c2-4979-a9b6-8561523c2d9d/ovnkube-controller/0.log" Dec 10 12:16:11 crc kubenswrapper[4689]: I1210 12:16:11.101078 4689 generic.go:334] "Generic (PLEG): container finished" podID="b434fe8e-c4c2-4979-a9b6-8561523c2d9d" containerID="5540d337c2977642446fd1ebb59fb99e1486588b986e1fb9bac6f0b95bebc05d" exitCode=1 Dec 10 12:16:11 crc kubenswrapper[4689]: I1210 12:16:11.101162 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-d5s24" event={"ID":"b434fe8e-c4c2-4979-a9b6-8561523c2d9d","Type":"ContainerDied","Data":"5540d337c2977642446fd1ebb59fb99e1486588b986e1fb9bac6f0b95bebc05d"} Dec 10 12:16:11 crc kubenswrapper[4689]: I1210 12:16:11.102361 4689 scope.go:117] "RemoveContainer" containerID="5540d337c2977642446fd1ebb59fb99e1486588b986e1fb9bac6f0b95bebc05d" Dec 10 12:16:11 crc kubenswrapper[4689]: I1210 12:16:11.108077 4689 generic.go:334] "Generic (PLEG): container finished" podID="bf732b59-88ab-4673-9d0c-e479b9138b30" containerID="ae1b0abfe7d118fce3906b0fb8609b89bb5a7d9a72e77c989248a37fbcaca589" exitCode=0 Dec 10 12:16:11 crc kubenswrapper[4689]: I1210 12:16:11.108120 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-k7kbl" event={"ID":"bf732b59-88ab-4673-9d0c-e479b9138b30","Type":"ContainerDied","Data":"ae1b0abfe7d118fce3906b0fb8609b89bb5a7d9a72e77c989248a37fbcaca589"} Dec 10 12:16:11 crc kubenswrapper[4689]: I1210 12:16:11.121942 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06f8726511d5ea564b457eeb6de5b75275d78ee946595d8fbe37e575f3a1b4ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://879d2e122b3f677f755d79b93cd0abed60360a7b58bd685fbc229157c3358bbf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:11Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:11 crc kubenswrapper[4689]: I1210 12:16:11.138495 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:57Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:11Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:11 crc kubenswrapper[4689]: I1210 12:16:11.148365 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:16:11 crc kubenswrapper[4689]: I1210 12:16:11.148431 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:16:11 crc kubenswrapper[4689]: I1210 12:16:11.148449 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:16:11 crc kubenswrapper[4689]: I1210 12:16:11.148475 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:16:11 crc kubenswrapper[4689]: I1210 12:16:11.148496 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:11Z","lastTransitionTime":"2025-12-10T12:16:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 12:16:11 crc kubenswrapper[4689]: I1210 12:16:11.160222 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81ec49dd709fb88fdd4a831be61a82471be37ad10b0dca1cb0e34468ef5e0e3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:16:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:11Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:11 crc kubenswrapper[4689]: I1210 12:16:11.173045 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-77s7q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9f25cd73-d88f-4d52-93b6-483589dc4ac4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8ffa3e3c20704e59130d99f82bf7c73982dadfcd06b8b666265324aeb9115ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:16:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kwnnt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T12:16:01Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-77s7q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:11Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:11 crc kubenswrapper[4689]: I1210 12:16:11.194275 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-r6wmt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3713b4f8-2ee3-4078-859a-dca17076f9a6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://41b7cefca05f1180d687337ba2a408d0db12aad394a7e9619e004758e5d79ca0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:16:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lmrn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T12:16:01Z\\\"}}\" for pod \"openshift-multus\"/\"multus-r6wmt\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:11Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:11 crc kubenswrapper[4689]: I1210 12:16:11.207019 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-dk9hq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4aff4b91-a4af-46bd-93ba-a7c1356fc498\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e400561fd4aa4adc6f0733a71b55aff1a6163a71bf34147aeeb41f95001d8222\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:16:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c2rw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T12:16:04Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-dk9hq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:11Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:11 crc kubenswrapper[4689]: I1210 12:16:11.220002 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef7248d1db79267842514a13623e9e08f8e412e4af33a70b4fceafe7a9e43392\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:11Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:11 crc kubenswrapper[4689]: I1210 12:16:11.231201 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:11Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:11 crc kubenswrapper[4689]: I1210 12:16:11.241634 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-db6zk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a41ebdcd-910f-4669-992d-296e1a92162b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://21eff13bf08e5631078b18a6e63eea07f7ae0881f4d340b92273d47e443fc71c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:16:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68btn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ffc5e2e1cb81aa530273fc5c44b4eb25b760dfffa203e312280e7a1c0150505\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-da
emon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:16:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68btn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T12:16:01Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-db6zk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:11Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:11 crc kubenswrapper[4689]: I1210 12:16:11.251780 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:16:11 crc kubenswrapper[4689]: I1210 12:16:11.252174 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:16:11 crc kubenswrapper[4689]: I1210 12:16:11.252184 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:16:11 crc kubenswrapper[4689]: I1210 12:16:11.252196 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:16:11 crc kubenswrapper[4689]: I1210 12:16:11.252206 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:11Z","lastTransitionTime":"2025-12-10T12:16:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 12:16:11 crc kubenswrapper[4689]: I1210 12:16:11.253360 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-k7kbl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bf732b59-88ab-4673-9d0c-e479b9138b30\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:01Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6jq2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b32c1925876e067185b141b3e107d582b5ae3b422965cded935fc67935907244\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b32c1925876e067185b141b3e107d582b5ae3b422965cded935fc67935907244\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T12:16:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T12:16:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6jq
2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://56edfde6c25c67ed13c859224563601ca60ff86122c0ccfd0eafd009f56c2c6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://56edfde6c25c67ed13c859224563601ca60ff86122c0ccfd0eafd009f56c2c6a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T12:16:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T12:16:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6jq2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6972bdb1edc7d0216e984f43dd93319f5fbe91dbbf251fddcab234566c12694\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6972bdb1edc7d0216e984f43dd93319f5fbe91dbbf251fddcab234566c12694\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T12:16:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T12:16:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6jq2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://92ed0a4af61a7d20d96a8628e34b3b07cf93fb88bb8c67a2db58c140977d4b8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92ed0a4af61a7d20d96a8628e34b3b07cf93fb88bb8c67a2db58c140977d4b8f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T12:16:06Z\\\",\\\"reason\\\
":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T12:16:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6jq2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a00343d8419f3bd26ca5eebc37892fa34c893db822a472b2d16b79b9a886130\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3a00343d8419f3bd26ca5eebc37892fa34c893db822a472b2d16b79b9a886130\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T12:16:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T12:16:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6jq2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae1b0abfe7d118fce3906b0fb8609b89bb5a7d9a72e77c989248a37fbcaca589\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:16:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6jq2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T12:16:01Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-k7kbl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:11Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:11 crc kubenswrapper[4689]: I1210 12:16:11.269809 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-d5s24" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b434fe8e-c4c2-4979-a9b6-8561523c2d9d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4dd21d5fefe31117dd9f13896400256a11840488886f8455dc177c0389e942a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:16:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgcnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://69f1e8960af0a51f61c14bd54dc3d3be7189911242699061b6a5e265bf23da21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:16:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgcnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6523d85b2b13f0025f328e66fc54d988a262d66575656c50d792bebc6a422301\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"image
ID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:16:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgcnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4faf578e1c2dcab798cd856befbd2dfc061f7e003ead2304d437041d5ee1dd07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:16:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgcnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eebe28cf16767ec011f1e571cdb5f6babf931f4444e92fccf2f1bb5aa5deeec9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:16:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgcnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://17d173f3388ef7471022163cbdf3e0a2454e42b1ed621b15512414504b2d9723\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":t
rue,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:16:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgcnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5540d337c2977642446fd1ebb59fb99e1486588b986e1fb9bac6f0b95bebc05d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5540d337c2977642446fd1ebb59fb99e1486588b986e1fb9bac6f0b95bebc05d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-10T12:16:10Z\\\",\\\"message\\\":\\\"ient/informers/externalversions/factory.go:141\\\\nI1210 12:16:10.769217 5964 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1210 12:16:10.769257 5964 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1210 12:16:10.769285 5964 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1210 12:16:10.769302 5964 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1210 12:16:10.769383 5964 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI1210 12:16:10.769408 5964 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI1210 12:16:10.769451 5964 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1210 12:16:10.769334 5964 reflector.go:311] Stopping reflector *v1.UserDefinedNetwork (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/userdefinednetwork/v1/apis/informers/externalversions/factory.go:140\\\\nI1210 12:16:10.769807 5964 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1210 12:16:10.769820 5964 handler.go:208] Removed *v1.Node event handler 7\\\\nI1210 12:16:10.769834 5964 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1210 12:16:10.769853 5964 handler.go:208] Removed *v1.Node event handler 2\\\\nI1210 12:16:10.769902 5964 handler.go:208] Removed *v1.Pod event handler 6\\\\nI1210 12:16:10.770163 5964 factory.go:656] Stopping 
\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-10T12:16:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgcnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e28b779b8a8693084ab0997decc53b3a8eba61bef24ac90ff251d915f086568c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:16:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgcnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5b9a4f871b33ab8d7dbb8148e8a118858ae0d529510a8aebda0a3d9db97c137\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099
482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b5b9a4f871b33ab8d7dbb8148e8a118858ae0d529510a8aebda0a3d9db97c137\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T12:16:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T12:16:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgcnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T12:16:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-d5s24\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:11Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:11 crc kubenswrapper[4689]: I1210 12:16:11.284529 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"948e2421-6bdf-45d9-b484-3d7cfcbeff5f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:32Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73fb5dc9c58a2a2027866853aecf404bcc2babdb9e5bbd3cf0efcb16f38ba351\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e167a00c3a4e40154ca58094cba41a896104fd81044158589ed4d7a06dba9e7d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://394f62bb6c4fa6d9b63f8404087ff96594e2e38240c6c7886ad71fbb7863681f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d3818f3ff6d23a4b5f19b82c3ce7c6a9a0275f981966b78d0f1d5977acfc5da\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cc1b41d65298f8986fa477bbe4462be91f61b46be4420fac49fe9b1cff91d4d3\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-10T12:15:57Z\\\",\\\"message\\\":\\\"lling back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1210 12:15:46.775770 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1210 12:15:46.777515 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-772356655/tls.crt::/tmp/serving-cert-772356655/tls.key\\\\\\\"\\\\nI1210 12:15:55.300683 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1210 12:15:55.311834 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1210 12:15:55.311870 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1210 12:15:55.311907 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1210 12:15:55.311937 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1210 12:15:55.334823 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1210 12:15:55.334877 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1210 12:15:55.334889 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1210 12:15:55.334913 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1210 12:15:55.334923 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1210 12:15:55.334932 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1210 12:15:55.334948 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1210 12:15:55.335346 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1210 12:15:55.337241 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-10T12:15:36Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://63c6b92fa349dafbaaa4e2700676f8c1c53627fc703dba25853c7457bf7eb439\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:34Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9e2628add435f4ba7d9b92d1f4e9cf619c37e8dc954d6fce13ef72581ae775e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9e2628add435f4ba7d9b92d1f4e9cf619c37e8dc954d6fce13ef72581ae775e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T12:15:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T12:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T12:15:32Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:11Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:11 crc kubenswrapper[4689]: I1210 12:16:11.295367 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"18bf8be2-3022-4c8b-a062-e12ddac113a8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://44524b486bd4c6b69eb1219605d0545eaa129605484dfbca87ca741ce71c2991\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://604acbb51bb890eb420955051c89aa5a8f2005ce63f1fe39c09c35dc3c8b4470\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d4addfb661bd4ba43817e6e5db29f30589839e5afe46a8f271e43c98920efdc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a68967b286ccc9c486e05b03201574bfa811ee68b0cbcbb2b0b1a379f5c1bab\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T12:15:32Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:11Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:11 crc kubenswrapper[4689]: I1210 12:16:11.306189 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:57Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:11Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:11 crc kubenswrapper[4689]: I1210 12:16:11.318491 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:57Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:11Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:11 crc kubenswrapper[4689]: I1210 12:16:11.335195 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06f8726511d5ea564b457eeb6de5b75275d78ee946595d8fbe37e575f3a1b4ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://879d2e122b3f677f755d79b93cd0abed60360a7b58bd685fbc229157c3358bbf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/
webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:11Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:11 crc kubenswrapper[4689]: I1210 12:16:11.344842 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-77s7q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9f25cd73-d88f-4d52-93b6-483589dc4ac4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8ffa3e3c20704e59130d99f82bf7c73982dadfcd06b8b666265324aeb9115ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:16:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kwnnt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T12:16:01Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-77s7q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:11Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:11 crc kubenswrapper[4689]: I1210 12:16:11.354762 4689 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:16:11 crc kubenswrapper[4689]: I1210 12:16:11.354797 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:16:11 crc kubenswrapper[4689]: I1210 12:16:11.354809 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:16:11 crc kubenswrapper[4689]: I1210 12:16:11.354827 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:16:11 crc kubenswrapper[4689]: I1210 12:16:11.354840 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:11Z","lastTransitionTime":"2025-12-10T12:16:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 12:16:11 crc kubenswrapper[4689]: I1210 12:16:11.361777 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81ec49dd709fb88fdd4a831be61a82471be37ad10b0dca1cb0e34468ef5e0e3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:16:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:11Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:11 crc kubenswrapper[4689]: I1210 12:16:11.379479 4689 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef7248d1db79267842514a13623e9e08f8e412e4af33a70b4fceafe7a9e43392\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:11Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:11 crc kubenswrapper[4689]: I1210 12:16:11.396326 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:11Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:11 crc kubenswrapper[4689]: I1210 12:16:11.415278 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-r6wmt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3713b4f8-2ee3-4078-859a-dca17076f9a6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://41b7cefca05f1180d687337ba2a408d0db12aad394a7e9619e004758e5d79ca0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:16:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\
\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lmrn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T12:16:01Z\\\"}}\" for pod \"openshift-multus\"/\"multus-r6wmt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:11Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:11 crc kubenswrapper[4689]: I1210 12:16:11.426332 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-dk9hq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4aff4b91-a4af-46bd-93ba-a7c1356fc498\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e400561fd4aa4adc6f0733a71b55aff1a6163a71bf34147aeeb41f95001d8222\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:16:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c2rw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T12:16:04Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-dk9hq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:11Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:11 crc kubenswrapper[4689]: I1210 12:16:11.438138 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"948e2421-6bdf-45d9-b484-3d7cfcbeff5f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:32Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:32Z\\\",\\\"message\\\":\\\"containers with 
unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73fb5dc9c58a2a2027866853aecf404bcc2babdb9e5bbd3cf0efcb16f38ba351\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e167a00c3a4e40154ca58094cba41a896104fd81044158589ed4d7a06dba9e7d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://394f62bb6c4fa6d9b63f8404087ff96594e2e38240c6c7886ad71fbb7863681f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d3818f3ff6d23a4b5f19b82c3ce7c6a9a0275f981966b78d0f1d5977acfc5da\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cc1b41d65298f8986fa477bbe4462be91f61b46be4420fac49fe9b1cff91d4d3\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-10T12:15:57Z\\\",\\\"message\\\":\\\"lling back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1210 12:15:46.775770 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1210 12:15:46.777515 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-772356655/tls.crt::/tmp/serving-cert-772356655/tls.key\\\\\\\"\\\\nI1210 12:15:55.300683 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1210 12:15:55.311834 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1210 12:15:55.311870 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1210 12:15:55.311907 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1210 12:15:55.311937 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1210 12:15:55.334823 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1210 12:15:55.334877 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1210 12:15:55.334889 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1210 12:15:55.334913 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1210 12:15:55.334923 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1210 12:15:55.334932 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1210 12:15:55.334948 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1210 12:15:55.335346 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1210 12:15:55.337241 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-10T12:15:36Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://63c6b92fa349dafbaaa4e2700676f8c1c53627fc703dba25853c7457bf7eb439\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:34Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9e2628add435f4ba7d9b92d1f4e9cf619c37e8dc954d6fce13ef72581ae775e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9e2628add435f4ba7d9b92d1f4e9cf619c37e8dc954d6fce13ef72581ae775e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T12:15:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T12:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T12:15:32Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:11Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:11 crc kubenswrapper[4689]: I1210 12:16:11.452053 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"18bf8be2-3022-4c8b-a062-e12ddac113a8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://44524b486bd4c6b69eb1219605d0545eaa129605484dfbca87ca741ce71c2991\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://604acbb51bb890eb420955051c89aa5a8f2005ce63f1fe39c09c35dc3c8b4470\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d4addfb661bd4ba43817e6e5db29f30589839e5afe46a8f271e43c98920efdc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a68967b286ccc9c486e05b03201574bfa811ee68b0cbcbb2b0b1a379f5c1bab\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T12:15:32Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:11Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:11 crc kubenswrapper[4689]: I1210 12:16:11.456731 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:16:11 crc kubenswrapper[4689]: I1210 12:16:11.456780 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:16:11 crc kubenswrapper[4689]: I1210 12:16:11.456800 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:16:11 crc kubenswrapper[4689]: I1210 12:16:11.456825 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:16:11 crc kubenswrapper[4689]: I1210 12:16:11.456843 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:11Z","lastTransitionTime":"2025-12-10T12:16:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 12:16:11 crc kubenswrapper[4689]: I1210 12:16:11.463126 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:57Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:11Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:11 crc kubenswrapper[4689]: I1210 12:16:11.473723 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-db6zk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a41ebdcd-910f-4669-992d-296e1a92162b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://21eff13bf08e5631078b18a6e63eea07f7ae0881f4d340b92273d47e443fc71c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:16:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68btn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ffc5e2e1cb81aa530273fc5c44b4eb25b760dfffa203e312280e7a1c0150505\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:16:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68btn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T12:16:01Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-db6zk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:11Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:11 crc kubenswrapper[4689]: I1210 12:16:11.488824 4689 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-additional-cni-plugins-k7kbl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bf732b59-88ab-4673-9d0c-e479b9138b30\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6jq2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b32c1925876e067185b141b3e107d582b5ae3b422965cded935fc67935907244\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b32c1925876e067185b141b3e107d582b5ae3b422965cded935fc67935907244\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T12:16:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T12:16:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6jq2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://56edfde6c25c67ed13c859224563601ca60ff86122c0ccfd0eafd009f56c2c6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a168
8df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://56edfde6c25c67ed13c859224563601ca60ff86122c0ccfd0eafd009f56c2c6a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T12:16:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T12:16:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6jq2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6972bdb1edc7d0216e984f43dd93319f5fbe91dbbf251fddcab234566c12694\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6972bdb1edc7d0216e984f43dd93319f5fbe91dbbf251fddcab234566c12694\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T12:16:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T12:16:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6jq2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://92ed0a4af61a7d20d96a8628e34b3b07cf93fb88bb8c67a2db58c140977d4b8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92ed0a4af61a7d20d96a8628e34b3b07cf93fb88bb8c67a2db58c140977d4b8f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T12:16:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T12:16:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"
/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6jq2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a00343d8419f3bd26ca5eebc37892fa34c893db822a472b2d16b79b9a886130\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3a00343d8419f3bd26ca5eebc37892fa34c893db822a472b2d16b79b9a886130\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T12:16:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T12:16:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6jq2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae1b0abfe7d118fce3906b0fb8609b89bb5a7d9a72e77c989248a37fbcaca589\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ae1b0abfe7d118fce3906b0fb8609b89bb5a7d9a72e77c989248a37fbcaca589\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T12:16:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T12:16:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6jq2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T12:16:01Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-k7kbl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:11Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:11 crc kubenswrapper[4689]: I1210 12:16:11.497334 4689 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 10 12:16:11 crc kubenswrapper[4689]: E1210 12:16:11.497487 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 10 12:16:11 crc kubenswrapper[4689]: I1210 12:16:11.497919 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 10 12:16:11 crc kubenswrapper[4689]: E1210 12:16:11.497990 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 10 12:16:11 crc kubenswrapper[4689]: I1210 12:16:11.508898 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-d5s24" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b434fe8e-c4c2-4979-a9b6-8561523c2d9d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4dd21d5fefe31117dd9f13896400256a11840488886f8455dc177c0389e942a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:16:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgcnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://69f1e8960af0a51f61c14bd54dc3d3be7189911242699061b6a5e265bf23da21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:16:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgcnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6523d85b2b13f0025f328e66fc54d988a262d66575656c50d792bebc6a422301\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:16:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgcnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4faf578e1c2dcab798cd856befbd2dfc061f7e003ead2304d437041d5ee1dd07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:16:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgcnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eebe28cf16767ec011f1e571cdb5f6babf931f4444e92fccf2f1bb5aa5deeec9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:16:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgcnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://17d173f3388ef7471022163cbdf3e0a2454e42b1ed621b15512414504b2d9723\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:16:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgcnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5540d337c2977642446fd1ebb59fb99e1486588b
986e1fb9bac6f0b95bebc05d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5540d337c2977642446fd1ebb59fb99e1486588b986e1fb9bac6f0b95bebc05d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-10T12:16:10Z\\\",\\\"message\\\":\\\"ient/informers/externalversions/factory.go:141\\\\nI1210 12:16:10.769217 5964 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1210 12:16:10.769257 5964 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1210 12:16:10.769285 5964 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1210 12:16:10.769302 5964 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1210 12:16:10.769383 5964 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI1210 12:16:10.769408 5964 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI1210 12:16:10.769451 5964 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1210 12:16:10.769334 5964 reflector.go:311] Stopping reflector *v1.UserDefinedNetwork (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/userdefinednetwork/v1/apis/informers/externalversions/factory.go:140\\\\nI1210 12:16:10.769807 5964 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1210 12:16:10.769820 5964 handler.go:208] Removed *v1.Node event handler 7\\\\nI1210 12:16:10.769834 5964 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1210 12:16:10.769853 5964 handler.go:208] Removed *v1.Node event handler 2\\\\nI1210 12:16:10.769902 5964 handler.go:208] Removed *v1.Pod event handler 6\\\\nI1210 12:16:10.770163 5964 factory.go:656] Stopping 
\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-10T12:16:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgcnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e28b779b8a8693084ab0997decc53b3a8eba61bef24ac90ff251d915f086568c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:16:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgcnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5b9a4f871b33ab8d7dbb8148e8a118858ae0d529510a8aebda0a3d9db97c137\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099
482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b5b9a4f871b33ab8d7dbb8148e8a118858ae0d529510a8aebda0a3d9db97c137\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T12:16:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T12:16:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgcnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T12:16:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-d5s24\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:11Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:11 crc kubenswrapper[4689]: I1210 12:16:11.558811 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:16:11 crc kubenswrapper[4689]: I1210 12:16:11.558849 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:16:11 crc kubenswrapper[4689]: I1210 12:16:11.558860 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:16:11 crc kubenswrapper[4689]: I1210 12:16:11.558876 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:16:11 crc kubenswrapper[4689]: I1210 12:16:11.558891 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:11Z","lastTransitionTime":"2025-12-10T12:16:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 12:16:11 crc kubenswrapper[4689]: I1210 12:16:11.661282 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:16:11 crc kubenswrapper[4689]: I1210 12:16:11.661342 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:16:11 crc kubenswrapper[4689]: I1210 12:16:11.661355 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:16:11 crc kubenswrapper[4689]: I1210 12:16:11.661374 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:16:11 crc kubenswrapper[4689]: I1210 12:16:11.661387 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:11Z","lastTransitionTime":"2025-12-10T12:16:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 12:16:11 crc kubenswrapper[4689]: I1210 12:16:11.764140 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:16:11 crc kubenswrapper[4689]: I1210 12:16:11.764216 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:16:11 crc kubenswrapper[4689]: I1210 12:16:11.764244 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:16:11 crc kubenswrapper[4689]: I1210 12:16:11.764270 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:16:11 crc kubenswrapper[4689]: I1210 12:16:11.764288 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:11Z","lastTransitionTime":"2025-12-10T12:16:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 12:16:11 crc kubenswrapper[4689]: I1210 12:16:11.866366 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:16:11 crc kubenswrapper[4689]: I1210 12:16:11.866397 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:16:11 crc kubenswrapper[4689]: I1210 12:16:11.866407 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:16:11 crc kubenswrapper[4689]: I1210 12:16:11.866420 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:16:11 crc kubenswrapper[4689]: I1210 12:16:11.866429 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:11Z","lastTransitionTime":"2025-12-10T12:16:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 12:16:11 crc kubenswrapper[4689]: I1210 12:16:11.871856 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 10 12:16:11 crc kubenswrapper[4689]: I1210 12:16:11.892458 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-77s7q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9f25cd73-d88f-4d52-93b6-483589dc4ac4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8ffa3e3c20704e59130d99f82bf7c73982dadfcd06b8b666265324aeb9115ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:16:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kwnnt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T12:16:01Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-77s7q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:11Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:11 crc kubenswrapper[4689]: I1210 12:16:11.909194 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81ec49dd709fb88fdd4a831be61a82471be37ad10b0dca1cb0e34468ef5e0e3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:16:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:11Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:11 crc kubenswrapper[4689]: I1210 12:16:11.925020 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:16:11 crc kubenswrapper[4689]: I1210 12:16:11.925053 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:16:11 crc kubenswrapper[4689]: I1210 12:16:11.925060 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:16:11 crc kubenswrapper[4689]: I1210 12:16:11.925072 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:16:11 crc kubenswrapper[4689]: I1210 12:16:11.925081 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:11Z","lastTransitionTime":"2025-12-10T12:16:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 12:16:11 crc kubenswrapper[4689]: I1210 12:16:11.928145 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef7248d1db79267842514a13623e9e08f8e412e4af33a70b4fceafe7a9e43392\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:11Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:11 crc kubenswrapper[4689]: E1210 12:16:11.938794 4689 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T12:16:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T12:16:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:11Z\\\",\\\"message\\\":\\\"kubelet has no disk 
pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T12:16:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T12:16:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:11Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeByt
es\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"488f3058-98a0-4e39-b016-529d0c992401\\\",\\\"systemUUID\\\":\\\"4
1a6821f-a04f-4a5d-a8e5-790b0745d6fd\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:11Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:11 crc kubenswrapper[4689]: I1210 12:16:11.939578 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:11Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:11 crc kubenswrapper[4689]: I1210 12:16:11.942521 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:16:11 crc kubenswrapper[4689]: I1210 12:16:11.942550 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:16:11 crc kubenswrapper[4689]: I1210 12:16:11.942558 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:16:11 crc kubenswrapper[4689]: I1210 12:16:11.942571 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:16:11 crc kubenswrapper[4689]: I1210 12:16:11.942580 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:11Z","lastTransitionTime":"2025-12-10T12:16:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 12:16:11 crc kubenswrapper[4689]: I1210 12:16:11.951507 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-r6wmt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3713b4f8-2ee3-4078-859a-dca17076f9a6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://41b7cefca05f1180d687337ba2a408d0db12aad394a7e9619e004758e5d79ca0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:16:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lmrn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126
.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T12:16:01Z\\\"}}\" for pod \"openshift-multus\"/\"multus-r6wmt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:11Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:11 crc kubenswrapper[4689]: E1210 12:16:11.953010 4689 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T12:16:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T12:16:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:11Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T12:16:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T12:16:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:11Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"488f3058-98a0-4e39-b016-529d0c992401\\\",\\\"systemUUID\\\":\\\"41a6821f-a04f-4a5d-a8e5-790b0745d6fd\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:11Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:11 crc kubenswrapper[4689]: I1210 12:16:11.956775 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:16:11 crc kubenswrapper[4689]: I1210 12:16:11.956807 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 10 12:16:11 crc kubenswrapper[4689]: I1210 12:16:11.956817 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:16:11 crc kubenswrapper[4689]: I1210 12:16:11.956833 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:16:11 crc kubenswrapper[4689]: I1210 12:16:11.956846 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:11Z","lastTransitionTime":"2025-12-10T12:16:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 12:16:11 crc kubenswrapper[4689]: I1210 12:16:11.962840 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-dk9hq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4aff4b91-a4af-46bd-93ba-a7c1356fc498\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e400561fd4aa4adc6f0733a71b55aff1a6163a71bf34147aeeb41f95001d8222\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:16:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c2rw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T12:16:04Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-dk9hq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2025-12-10T12:16:11Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:11 crc kubenswrapper[4689]: E1210 12:16:11.971946 4689 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T12:16:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T12:16:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:11Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T12:16:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T12:16:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:11Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"488f3058-98a0-4e39-b016-529d0c992401\\\",\\\"systemUUID\\\":\\\"41a6821f-a04f-4a5d-a8e5-790b0745d6fd\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:11Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:11 crc kubenswrapper[4689]: I1210 12:16:11.995583 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"948e2421-6bdf-45d9-b484-3d7cfcbeff5f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73fb5dc9c58a2a2027866853aecf404bcc2babdb9e5bbd3cf0efcb16f38ba351\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e167a00c3a4e40154ca58094cba41a896104fd81044158589ed4d7a06dba9e7d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://394f62bb6c4fa6d9b63f8404087ff96594e2e38240c6c7886ad71fbb7863681f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d3818f3ff6d23a4b5f19b82c3ce7c6a9a0275f981966b78d0f1d5977acfc5da\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cc1b41d65298f8986fa477bbe4462be91f61b46be4420fac49fe9b1cff91d4d3\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-10T12:15:57Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1210 12:15:46.775770 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1210 12:15:46.777515 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-772356655/tls.crt::/tmp/serving-cert-772356655/tls.key\\\\\\\"\\\\nI1210 12:15:55.300683 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1210 12:15:55.311834 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1210 12:15:55.311870 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1210 12:15:55.311907 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1210 12:15:55.311937 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1210 12:15:55.334823 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1210 12:15:55.334877 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1210 12:15:55.334889 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1210 12:15:55.334913 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1210 12:15:55.334923 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1210 12:15:55.334932 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1210 12:15:55.334948 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1210 12:15:55.335346 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1210 12:15:55.337241 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-10T12:15:36Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://63c6b92fa349dafbaaa4e2700676f8c1c53627fc703dba25853c7457bf7eb439\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:34Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9e2628add435f4ba7d9b92d1f4e9cf619c37e8dc954d6fce13ef72581ae775e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9e2628add435f4ba7d9b92d1f4e9cf619c37e8dc954d6fce13ef72581ae775e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T12:15:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T12:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T12:15:32Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:11Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:11 crc kubenswrapper[4689]: I1210 12:16:11.996843 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:16:11 crc kubenswrapper[4689]: I1210 12:16:11.996879 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:16:11 crc kubenswrapper[4689]: I1210 12:16:11.996892 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:16:11 crc kubenswrapper[4689]: I1210 12:16:11.996909 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:16:11 crc kubenswrapper[4689]: I1210 12:16:11.996920 4689 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:11Z","lastTransitionTime":"2025-12-10T12:16:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 12:16:12 crc kubenswrapper[4689]: I1210 12:16:12.018384 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"18bf8be2-3022-4c8b-a062-e12ddac113a8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://44524b486bd4c6b69eb1219605d0545eaa129605484dfbca87ca741ce71c2991\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://604acbb51bb890eb420955051c89aa5a8f2005ce63f1fe39c09c35dc3c8b4470\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d4addfb661bd4ba43817e6e5db29f30589839e5afe46a8f271e43c98920efdc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastS
tate\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a68967b286ccc9c486e05b03201574bfa811ee68b0cbcbb2b0b1a379f5c1bab\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T12:15:32Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:12Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:12 crc kubenswrapper[4689]: E1210 12:16:12.034737 4689 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T12:16:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T12:16:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:11Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T12:16:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T12:16:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:11Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056
b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951
},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"488f3058-98a0-4e39-b016-529d0c992401\\\",\\\"systemUUID\\\":\\\"41a6821f-a04f-4a5d-a8e5-790b0745d6fd\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate 
has expired or is not yet valid: current time 2025-12-10T12:16:12Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:12 crc kubenswrapper[4689]: I1210 12:16:12.039711 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:16:12 crc kubenswrapper[4689]: I1210 12:16:12.039754 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:16:12 crc kubenswrapper[4689]: I1210 12:16:12.039766 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:16:12 crc kubenswrapper[4689]: I1210 12:16:12.039783 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:16:12 crc kubenswrapper[4689]: I1210 12:16:12.039795 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:12Z","lastTransitionTime":"2025-12-10T12:16:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 12:16:12 crc kubenswrapper[4689]: I1210 12:16:12.043492 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:57Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:12Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:12 crc kubenswrapper[4689]: E1210 12:16:12.055194 4689 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T12:16:12Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T12:16:12Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:12Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T12:16:12Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T12:16:12Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:12Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"488f3058-98a0-4e39-b016-529d0c992401\\\",\\\"systemUUID\\\":\\\"41a6821f-a04f-4a5d-a8e5-790b0745d6fd\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:12Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:12 crc kubenswrapper[4689]: E1210 12:16:12.055352 4689 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Dec 10 12:16:12 crc kubenswrapper[4689]: I1210 12:16:12.057427 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Dec 10 12:16:12 crc kubenswrapper[4689]: I1210 12:16:12.057461 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:16:12 crc kubenswrapper[4689]: I1210 12:16:12.057474 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:16:12 crc kubenswrapper[4689]: I1210 12:16:12.057496 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:16:12 crc kubenswrapper[4689]: I1210 12:16:12.057509 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:12Z","lastTransitionTime":"2025-12-10T12:16:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 12:16:12 crc kubenswrapper[4689]: I1210 12:16:12.060153 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-db6zk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a41ebdcd-910f-4669-992d-296e1a92162b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://21eff13bf08e5631078b18a6e63eea07f7ae0881f4d340b92273d47e443fc71c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:16:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68btn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ffc5e2e1cb81aa530273fc5c44b4eb25b760dfffa203e312280e7a1c0150505\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea1
77225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:16:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68btn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T12:16:01Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-db6zk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:12Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:12 crc kubenswrapper[4689]: I1210 12:16:12.077775 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-k7kbl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bf732b59-88ab-4673-9d0c-e479b9138b30\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6jq2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b32c1925876e067185b141b3e107d582b5ae3b422965cded935fc67935907244\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b32c1925876e067185b141b3e107d582b5ae3b422965cded935fc67935907244\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T12:16:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T12:16:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6jq2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://56edfde6c25c67ed13c859224563601ca60ff86122c0ccfd0eafd009f56c2c6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://56edfde6c25c67ed13c859224563601ca60ff86122c0ccfd0eafd009f56c2c6a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T12:16:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T12:16:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-6jq2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6972bdb1edc7d0216e984f43dd93319f5fbe91dbbf251fddcab234566c12694\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6972bdb1edc7d0216e984f43dd93319f5fbe91dbbf251fddcab234566c12694\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T12:16:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T12:16:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6jq2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://92ed0a4af61a7d20d96a8628e34b3b07cf93fb88bb8c67a2db58c140977d4b8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92ed0a4af61a7d20d96a8628e34b3b07cf93fb88bb8c67a2db58c140977d4b8f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T12:16:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T12:16:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6jq2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a00343d8419f3bd26ca5eebc37892fa34c893db822a472b2d16b79b9a886130\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3a00343d8419f3bd26ca5eebc37892fa34c893db822a472b2d16b79b9a886130\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T12:16:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T12:16:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",
\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6jq2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae1b0abfe7d118fce3906b0fb8609b89bb5a7d9a72e77c989248a37fbcaca589\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ae1b0abfe7d118fce3906b0fb8609b89bb5a7d9a72e77c989248a37fbcaca589\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T12:16:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T12:16:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6jq2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T12:16:01Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-k7kbl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:12Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:12 crc kubenswrapper[4689]: I1210 12:16:12.096960 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-d5s24" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b434fe8e-c4c2-4979-a9b6-8561523c2d9d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4dd21d5fefe31117dd9f13896400256a11840488886f8455dc177c0389e942a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:16:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgcnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://69f1e8960af0a51f61c14bd54dc3d3be7189911242699061b6a5e265bf23da21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:16:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgcnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6523d85b2b13f0025f328e66fc54d988a262d66575656c50d792bebc6a422301\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:16:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgcnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4faf578e1c2dcab798cd856befbd2dfc061f7e003ead2304d437041d5ee1dd07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:16:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgcnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eebe28cf16767ec011f1e571cdb5f6babf931f4444e92fccf2f1bb5aa5deeec9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:16:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgcnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://17d173f3388ef7471022163cbdf3e0a2454e42b1ed621b15512414504b2d9723\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:16:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgcnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5540d337c2977642446fd1ebb59fb99e1486588b
986e1fb9bac6f0b95bebc05d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5540d337c2977642446fd1ebb59fb99e1486588b986e1fb9bac6f0b95bebc05d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-10T12:16:10Z\\\",\\\"message\\\":\\\"ient/informers/externalversions/factory.go:141\\\\nI1210 12:16:10.769217 5964 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1210 12:16:10.769257 5964 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1210 12:16:10.769285 5964 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1210 12:16:10.769302 5964 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1210 12:16:10.769383 5964 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI1210 12:16:10.769408 5964 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI1210 12:16:10.769451 5964 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1210 12:16:10.769334 5964 reflector.go:311] Stopping reflector *v1.UserDefinedNetwork (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/userdefinednetwork/v1/apis/informers/externalversions/factory.go:140\\\\nI1210 12:16:10.769807 5964 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1210 12:16:10.769820 5964 handler.go:208] Removed *v1.Node event handler 7\\\\nI1210 12:16:10.769834 5964 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1210 12:16:10.769853 5964 handler.go:208] Removed *v1.Node event handler 2\\\\nI1210 12:16:10.769902 5964 handler.go:208] Removed *v1.Pod event handler 6\\\\nI1210 12:16:10.770163 5964 factory.go:656] Stopping 
\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-10T12:16:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgcnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e28b779b8a8693084ab0997decc53b3a8eba61bef24ac90ff251d915f086568c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:16:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgcnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5b9a4f871b33ab8d7dbb8148e8a118858ae0d529510a8aebda0a3d9db97c137\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099
482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b5b9a4f871b33ab8d7dbb8148e8a118858ae0d529510a8aebda0a3d9db97c137\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T12:16:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T12:16:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgcnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T12:16:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-d5s24\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:12Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:12 crc kubenswrapper[4689]: I1210 12:16:12.109442 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:57Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:12Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:12 crc kubenswrapper[4689]: I1210 12:16:12.112800 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-d5s24_b434fe8e-c4c2-4979-a9b6-8561523c2d9d/ovnkube-controller/0.log" Dec 10 12:16:12 crc kubenswrapper[4689]: I1210 12:16:12.114858 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-d5s24" event={"ID":"b434fe8e-c4c2-4979-a9b6-8561523c2d9d","Type":"ContainerStarted","Data":"8d09cb58430de4c8d23c0dc3bbe3de52bdf4b79da14b87e8ac847bc47439a5a9"} Dec 10 12:16:12 crc kubenswrapper[4689]: I1210 12:16:12.115256 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-d5s24" Dec 10 12:16:12 crc kubenswrapper[4689]: I1210 12:16:12.118700 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-k7kbl" event={"ID":"bf732b59-88ab-4673-9d0c-e479b9138b30","Type":"ContainerStarted","Data":"6885c72d7b303bd0ad949b6b9f582f3c1a503d3ac4d1871ce164351cc5231233"} Dec 10 12:16:12 crc kubenswrapper[4689]: I1210 12:16:12.124848 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06f8726511d5ea564b457eeb6de5b75275d78ee946595d8fbe37e575f3a1b4ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://879d2e122b3f677f755d79b93cd0abed60360a7b58bd685fbc229157c3358bbf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:12Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:12 crc kubenswrapper[4689]: I1210 12:16:12.137450 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef7248d1db79267842514a13623e9e08f8e412e4af33a70b4fceafe7a9e43392\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:12Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:12 crc kubenswrapper[4689]: I1210 12:16:12.149337 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:12Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:12 crc kubenswrapper[4689]: I1210 12:16:12.159458 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:16:12 crc kubenswrapper[4689]: I1210 12:16:12.159485 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:16:12 crc kubenswrapper[4689]: I1210 12:16:12.159495 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:16:12 crc kubenswrapper[4689]: I1210 12:16:12.159512 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:16:12 crc kubenswrapper[4689]: I1210 12:16:12.159523 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:12Z","lastTransitionTime":"2025-12-10T12:16:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 12:16:12 crc kubenswrapper[4689]: I1210 12:16:12.163665 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-r6wmt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3713b4f8-2ee3-4078-859a-dca17076f9a6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://41b7cefca05f1180d687337ba2a408d0db12aad394a7e9619e004758e5d79ca0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:16:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lmrn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126
.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T12:16:01Z\\\"}}\" for pod \"openshift-multus\"/\"multus-r6wmt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:12Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:12 crc kubenswrapper[4689]: I1210 12:16:12.172719 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-dk9hq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4aff4b91-a4af-46bd-93ba-a7c1356fc498\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e400561fd4aa4adc6f0733a71b55aff1a6163a71bf34147aeeb41f95001d8222\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:16:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c2rw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T12:16:04Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-dk9hq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:12Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:12 crc kubenswrapper[4689]: I1210 12:16:12.195490 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-d5s24" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b434fe8e-c4c2-4979-a9b6-8561523c2d9d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4dd21d5fefe31117dd9f13896400256a11840488886f8455dc177c0389e942a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:16:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgcnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://69f1e8960af0a51f61c14bd54dc3d3be7189911242699061b6a5e265bf23da21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:16:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgcnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6523d85b2b13f0025f328e66fc54d988a262d66575656c50d792bebc6a422301\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:16:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgcnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4faf578e1c2dcab798cd856befbd2dfc061f7e003ead2304d437041d5ee1dd07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:16:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgcnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eebe28cf16767ec011f1e571cdb5f6babf931f4444e92fccf2f1bb5aa5deeec9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:16:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgcnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://17d173f3388ef7471022163cbdf3e0a2454e42b1ed621b15512414504b2d9723\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:16:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgcnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d09cb58430de4c8d23c0dc3bbe3de52bdf4b79da14b87e8ac847bc47439a5a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5540d337c2977642446fd1ebb59fb99e1486588b986e1fb9bac6f0b95bebc05d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-10T12:16:10Z\\\",\\\"message\\\":\\\"ient/informers/externalversions/factory.go:141\\\\nI1210 12:16:10.769217 5964 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1210 12:16:10.769257 5964 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1210 12:16:10.769285 5964 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1210 12:16:10.769302 5964 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1210 12:16:10.769383 5964 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI1210 12:16:10.769408 5964 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI1210 12:16:10.769451 5964 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1210 12:16:10.769334 5964 reflector.go:311] Stopping reflector *v1.UserDefinedNetwork (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/userdefinednetwork/v1/apis/informers/externalversions/factory.go:140\\\\nI1210 12:16:10.769807 5964 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1210 12:16:10.769820 5964 handler.go:208] Removed *v1.Node event handler 7\\\\nI1210 12:16:10.769834 5964 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1210 12:16:10.769853 5964 handler.go:208] Removed *v1.Node event handler 2\\\\nI1210 12:16:10.769902 5964 handler.go:208] Removed *v1.Pod event handler 6\\\\nI1210 12:16:10.770163 5964 factory.go:656] Stopping 
\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-10T12:16:08Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:16:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgcnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e28b779b8a8693084ab0997decc53b3a8eba61bef24ac90ff251d915f086568c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:16:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgcnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"conta
inerID\\\":\\\"cri-o://b5b9a4f871b33ab8d7dbb8148e8a118858ae0d529510a8aebda0a3d9db97c137\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b5b9a4f871b33ab8d7dbb8148e8a118858ae0d529510a8aebda0a3d9db97c137\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T12:16:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T12:16:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgcnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T12:16:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-d5s24\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:12Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:12 crc kubenswrapper[4689]: I1210 12:16:12.207337 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"948e2421-6bdf-45d9-b484-3d7cfcbeff5f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73fb5dc9c58a2a2027866853aecf404bcc2babdb9e5bbd3cf0efcb16f38ba351\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e167a00c3a4e40154ca58094cba41a896104fd81044158589ed4d7a06dba9e7d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://394f62bb6c4fa6d9b63f8404087ff96594e2e38240c6c7886ad71fbb7863681f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d3818f3ff6d23a4b5f19b82c3ce7c6a9a0275f981966b78d0f1d5977acfc5da\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cc1b41d65298f8986fa477bbe4462be91f61b46be4420fac49fe9b1cff91d4d3\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-10T12:15:57Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1210 12:15:46.775770 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1210 12:15:46.777515 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-772356655/tls.crt::/tmp/serving-cert-772356655/tls.key\\\\\\\"\\\\nI1210 12:15:55.300683 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1210 12:15:55.311834 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1210 12:15:55.311870 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1210 12:15:55.311907 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1210 12:15:55.311937 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1210 12:15:55.334823 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1210 12:15:55.334877 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1210 12:15:55.334889 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1210 12:15:55.334913 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1210 12:15:55.334923 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1210 12:15:55.334932 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1210 12:15:55.334948 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1210 12:15:55.335346 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1210 12:15:55.337241 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-10T12:15:36Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://63c6b92fa349dafbaaa4e2700676f8c1c53627fc703dba25853c7457bf7eb439\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:34Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9e2628add435f4ba7d9b92d1f4e9cf619c37e8dc954d6fce13ef72581ae775e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9e2628add435f4ba7d9b92d1f4e9cf619c37e8dc954d6fce13ef72581ae775e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T12:15:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T12:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T12:15:32Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:12Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:12 crc kubenswrapper[4689]: I1210 12:16:12.217187 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"18bf8be2-3022-4c8b-a062-e12ddac113a8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://44524b486bd4c6b69eb1219605d0545eaa129605484dfbca87ca741ce71c2991\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://604acbb51bb890eb420955051c89aa5a8f2005ce63f1fe39c09c35dc3c8b4470\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d4addfb661bd4ba43817e6e5db29f30589839e5afe46a8f271e43c98920efdc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a68967b286ccc9c486e05b03201574bfa811ee68b0cbcbb2b0b1a379f5c1bab\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T12:15:32Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:12Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:12 crc kubenswrapper[4689]: I1210 12:16:12.228323 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:57Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:12Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:12 crc kubenswrapper[4689]: I1210 12:16:12.236584 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-db6zk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a41ebdcd-910f-4669-992d-296e1a92162b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://21eff13bf08e5631078b18a6e63eea07f7ae0881f4d340b92273d47e443fc71c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:16:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68btn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ffc5e2e1cb81aa530273fc5c44b4eb25b760dfffa203e312280e7a1c0150505\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\
\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:16:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68btn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T12:16:01Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-db6zk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:12Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:12 crc kubenswrapper[4689]: I1210 12:16:12.250409 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-k7kbl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bf732b59-88ab-4673-9d0c-e479b9138b30\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6885c72d7b303bd0ad949b6b9f582f3c1a503d3ac4d1871ce164351cc5231233\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:16:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6jq2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b32c1925876e067185b141b3e107d582b5ae3b422965cded935fc67935907244\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\
"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b32c1925876e067185b141b3e107d582b5ae3b422965cded935fc67935907244\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T12:16:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T12:16:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6jq2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://56edfde6c25c67ed13c859224563601ca60ff86122c0ccfd0eafd009f56c2c6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://56edfde6c25c67ed13c859224563601ca60ff86122c0ccfd0eafd009f56c2c6a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T12:16:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T12:16:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6jq2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6972bdb1edc7d0216e984f43dd93319f5fbe91dbbf251fddcab234566c12694\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6972bdb1edc7d0216e984f43dd93319f5fbe91dbbf251fddcab234566c12694\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T12:16:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T12:16:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount
\\\",\\\"name\\\":\\\"kube-api-access-6jq2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://92ed0a4af61a7d20d96a8628e34b3b07cf93fb88bb8c67a2db58c140977d4b8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92ed0a4af61a7d20d96a8628e34b3b07cf93fb88bb8c67a2db58c140977d4b8f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T12:16:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T12:16:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6jq2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a00343d8419f3bd26ca5eebc37892fa34c893db822a472b2d16b79b9a886130\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3a00343d8419f3bd26ca5eebc37892fa34c893db822a472b2d16b79b9a886130\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T12:16:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T12:16:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6jq2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae1b0abfe7d118fce3906b0fb8609b89bb5a7d9a72e77c989248a37fbcaca589\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ae1b0abfe7d118fce3906b0fb8609b89bb5a7d9a72e77c989248a37fbcaca589\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T12:16:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T12:16:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\
"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6jq2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T12:16:01Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-k7kbl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:12Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:12 crc kubenswrapper[4689]: I1210 12:16:12.261726 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:16:12 crc kubenswrapper[4689]: I1210 12:16:12.261920 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:16:12 crc kubenswrapper[4689]: I1210 12:16:12.262059 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:16:12 crc kubenswrapper[4689]: I1210 12:16:12.262156 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:16:12 crc kubenswrapper[4689]: I1210 12:16:12.262253 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:12Z","lastTransitionTime":"2025-12-10T12:16:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 12:16:12 crc kubenswrapper[4689]: I1210 12:16:12.265210 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06f8726511d5ea564b457eeb6de5b75275d78ee946595d8fbe37e575f3a1b4ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://879d2e122b3f677f755d79b93cd0abed60360a7b58bd685fbc229157c3358bbf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:12Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:12 crc kubenswrapper[4689]: I1210 12:16:12.277859 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:57Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:12Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:12 crc kubenswrapper[4689]: I1210 12:16:12.290627 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81ec49dd709fb88fdd4a831be61a82471be37ad10b0dca1cb0e34468ef5e0e3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:16:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:12Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:12 crc kubenswrapper[4689]: I1210 12:16:12.299575 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-77s7q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9f25cd73-d88f-4d52-93b6-483589dc4ac4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8ffa3e3c20704e59130d99f82bf7c73982dadfcd06b8b666265324aeb9115ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:16:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kwnnt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T12:16:01Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-77s7q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:12Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:12 crc kubenswrapper[4689]: I1210 12:16:12.364614 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:16:12 crc kubenswrapper[4689]: I1210 12:16:12.364648 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:16:12 crc kubenswrapper[4689]: I1210 12:16:12.364657 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:16:12 crc kubenswrapper[4689]: I1210 12:16:12.364669 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:16:12 crc kubenswrapper[4689]: I1210 12:16:12.364679 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:12Z","lastTransitionTime":"2025-12-10T12:16:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 12:16:12 crc kubenswrapper[4689]: I1210 12:16:12.466616 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:16:12 crc kubenswrapper[4689]: I1210 12:16:12.466668 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:16:12 crc kubenswrapper[4689]: I1210 12:16:12.466686 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:16:12 crc kubenswrapper[4689]: I1210 12:16:12.466712 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:16:12 crc kubenswrapper[4689]: I1210 12:16:12.466731 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:12Z","lastTransitionTime":"2025-12-10T12:16:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 12:16:12 crc kubenswrapper[4689]: I1210 12:16:12.516504 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 10 12:16:12 crc kubenswrapper[4689]: E1210 12:16:12.516768 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 10 12:16:12 crc kubenswrapper[4689]: I1210 12:16:12.534319 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef7248d1db79267842514a13623e9e08f8e412e4af33a70b4fceafe7a9e43392\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:12Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:12 crc kubenswrapper[4689]: I1210 12:16:12.546795 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:12Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:12 crc kubenswrapper[4689]: I1210 12:16:12.561126 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-r6wmt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3713b4f8-2ee3-4078-859a-dca17076f9a6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://41b7cefca05f1180d687337ba2a408d0db12aad394a7e9619e004758e5d79ca0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:16:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\
\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lmrn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T12:16:01Z\\\"}}\" for pod \"openshift-multus\"/\"multus-r6wmt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:12Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:12 crc kubenswrapper[4689]: I1210 12:16:12.569042 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:16:12 crc kubenswrapper[4689]: I1210 12:16:12.569099 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:16:12 crc kubenswrapper[4689]: I1210 12:16:12.569114 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:16:12 crc kubenswrapper[4689]: I1210 12:16:12.569143 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:16:12 crc kubenswrapper[4689]: I1210 12:16:12.569157 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:12Z","lastTransitionTime":"2025-12-10T12:16:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 12:16:12 crc kubenswrapper[4689]: I1210 12:16:12.573638 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-dk9hq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4aff4b91-a4af-46bd-93ba-a7c1356fc498\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e400561fd4aa4adc6f0733a71b55aff1a6163a71bf34147aeeb41f95001d8222\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:16:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c2rw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T12:16:04Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-dk9hq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:12Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:12 crc kubenswrapper[4689]: I1210 12:16:12.589880 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"948e2421-6bdf-45d9-b484-3d7cfcbeff5f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73fb5dc9c58a2a2027866853aecf404bcc2babdb9e5bbd3cf0efcb16f38ba351\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e167a00c3a4e40154ca58094cba41a896104fd81044158589ed4d7a06dba9e7d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://394f62bb6c4fa6d9b63f8404087ff96594e2e38240c6c7886ad71fbb7863681f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d3818f3ff6d23a4b5f19b82c3ce7c6a9a0275f981966b78d0f1d5977acfc5da\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cc1b41d65298f8986fa477bbe4462be91f61b46be4420fac49fe9b1cff91d4d3\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-10T12:15:57Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1210 12:15:46.775770 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1210 12:15:46.777515 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-772356655/tls.crt::/tmp/serving-cert-772356655/tls.key\\\\\\\"\\\\nI1210 12:15:55.300683 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1210 12:15:55.311834 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1210 12:15:55.311870 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1210 12:15:55.311907 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1210 12:15:55.311937 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1210 12:15:55.334823 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1210 12:15:55.334877 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1210 12:15:55.334889 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1210 12:15:55.334913 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1210 12:15:55.334923 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1210 12:15:55.334932 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1210 12:15:55.334948 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1210 12:15:55.335346 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1210 12:15:55.337241 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-10T12:15:36Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://63c6b92fa349dafbaaa4e2700676f8c1c53627fc703dba25853c7457bf7eb439\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:34Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9e2628add435f4ba7d9b92d1f4e9cf619c37e8dc954d6fce13ef72581ae775e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9e2628add435f4ba7d9b92d1f4e9cf619c37e8dc954d6fce13ef72581ae775e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T12:15:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T12:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T12:15:32Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:12Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:12 crc kubenswrapper[4689]: I1210 12:16:12.604302 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"18bf8be2-3022-4c8b-a062-e12ddac113a8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://44524b486bd4c6b69eb1219605d0545eaa129605484dfbca87ca741ce71c2991\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://604acbb51bb890eb420955051c89aa5a8f2005ce63f1fe39c09c35dc3c8b4470\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d4addfb661bd4ba43817e6e5db29f30589839e5afe46a8f271e43c98920efdc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a68967b286ccc9c486e05b03201574bfa811ee68b0cbcbb2b0b1a379f5c1bab\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T12:15:32Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:12Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:12 crc kubenswrapper[4689]: I1210 12:16:12.620880 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:57Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:12Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:12 crc kubenswrapper[4689]: I1210 12:16:12.635586 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-db6zk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a41ebdcd-910f-4669-992d-296e1a92162b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://21eff13bf08e5631078b18a6e63eea07f7ae0881f4d340b92273d47e443fc71c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:16:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68btn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ffc5e2e1cb81aa530273fc5c44b4eb25b760dfffa203e312280e7a1c0150505\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\
\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:16:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68btn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T12:16:01Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-db6zk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:12Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:12 crc kubenswrapper[4689]: I1210 12:16:12.652607 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-k7kbl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bf732b59-88ab-4673-9d0c-e479b9138b30\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6885c72d7b303bd0ad949b6b9f582f3c1a503d3ac4d1871ce164351cc5231233\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:16:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6jq2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b32c1925876e067185b141b3e107d582b5ae3b422965cded935fc67935907244\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\
"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b32c1925876e067185b141b3e107d582b5ae3b422965cded935fc67935907244\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T12:16:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T12:16:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6jq2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://56edfde6c25c67ed13c859224563601ca60ff86122c0ccfd0eafd009f56c2c6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://56edfde6c25c67ed13c859224563601ca60ff86122c0ccfd0eafd009f56c2c6a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T12:16:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T12:16:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6jq2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6972bdb1edc7d0216e984f43dd93319f5fbe91dbbf251fddcab234566c12694\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6972bdb1edc7d0216e984f43dd93319f5fbe91dbbf251fddcab234566c12694\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T12:16:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T12:16:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount
\\\",\\\"name\\\":\\\"kube-api-access-6jq2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://92ed0a4af61a7d20d96a8628e34b3b07cf93fb88bb8c67a2db58c140977d4b8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92ed0a4af61a7d20d96a8628e34b3b07cf93fb88bb8c67a2db58c140977d4b8f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T12:16:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T12:16:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6jq2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a00343d8419f3bd26ca5eebc37892fa34c893db822a472b2d16b79b9a886130\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3a00343d8419f3bd26ca5eebc37892fa34c893db822a472b2d16b79b9a886130\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T12:16:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T12:16:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6jq2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae1b0abfe7d118fce3906b0fb8609b89bb5a7d9a72e77c989248a37fbcaca589\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ae1b0abfe7d118fce3906b0fb8609b89bb5a7d9a72e77c989248a37fbcaca589\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T12:16:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T12:16:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\
"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6jq2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T12:16:01Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-k7kbl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:12Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:12 crc kubenswrapper[4689]: I1210 12:16:12.671610 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:16:12 crc kubenswrapper[4689]: I1210 12:16:12.671679 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:16:12 crc kubenswrapper[4689]: I1210 12:16:12.671702 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:16:12 crc kubenswrapper[4689]: I1210 12:16:12.671735 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:16:12 crc kubenswrapper[4689]: I1210 12:16:12.671759 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:12Z","lastTransitionTime":"2025-12-10T12:16:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 12:16:12 crc kubenswrapper[4689]: I1210 12:16:12.676932 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-d5s24" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b434fe8e-c4c2-4979-a9b6-8561523c2d9d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4dd21d5fefe31117dd9f13896400256a11840488886f8455dc177c0389e942a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:16:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgcnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://69f1e8960af0a51f61c14bd54dc3d3be7189911242699061b6a5e265bf23da21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:16:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgcnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\
":\\\"cri-o://6523d85b2b13f0025f328e66fc54d988a262d66575656c50d792bebc6a422301\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:16:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgcnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4faf578e1c2dcab798cd856befbd2dfc061f7e003ead2304d437041d5ee1dd07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:16:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgcnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eebe28cf16767ec011f1e571cdb5f6babf931f4444e92fccf2f1bb5aa5deeec9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:16:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgcnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://17d173f3388ef7471022163cbdf3e0a2454e42b1ed621b15512414504b2d9723\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:16:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgcnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d09cb58430de4c8d23c0dc3bbe3de52bdf4b79da14b87e8ac847bc47439a5a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5540d337c2977642446fd1ebb59fb99e1486588b986e1fb9bac6f0b95bebc05d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-10T12:16:10Z\\\",\\\"message\\\":\\\"ient/informers/externalversions/factory.go:141\\\\nI1210 12:16:10.769217 5964 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1210 12:16:10.769257 5964 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1210 12:16:10.769285 5964 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1210 12:16:10.769302 5964 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1210 12:16:10.769383 5964 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI1210 12:16:10.769408 5964 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI1210 12:16:10.769451 5964 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1210 12:16:10.769334 5964 reflector.go:311] Stopping reflector *v1.UserDefinedNetwork (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/userdefinednetwork/v1/apis/informers/externalversions/factory.go:140\\\\nI1210 12:16:10.769807 5964 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1210 12:16:10.769820 5964 handler.go:208] Removed *v1.Node event handler 7\\\\nI1210 12:16:10.769834 5964 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1210 12:16:10.769853 5964 handler.go:208] Removed *v1.Node event handler 2\\\\nI1210 12:16:10.769902 5964 handler.go:208] Removed *v1.Pod event handler 6\\\\nI1210 12:16:10.770163 5964 factory.go:656] Stopping 
\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-10T12:16:08Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:16:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgcnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e28b779b8a8693084ab0997decc53b3a8eba61bef24ac90ff251d915f086568c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:16:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgcnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"conta
inerID\\\":\\\"cri-o://b5b9a4f871b33ab8d7dbb8148e8a118858ae0d529510a8aebda0a3d9db97c137\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b5b9a4f871b33ab8d7dbb8148e8a118858ae0d529510a8aebda0a3d9db97c137\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T12:16:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T12:16:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgcnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T12:16:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-d5s24\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:12Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:12 crc kubenswrapper[4689]: I1210 12:16:12.692356 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06f8726511d5ea564b457eeb6de5b75275d78ee946595d8fbe37e575f3a1b4ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://879d2e122b3f677f755d79b93cd0abed60360a7b58bd685fbc229157c3358bbf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:12Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:12 crc kubenswrapper[4689]: I1210 12:16:12.709163 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:57Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:12Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:12 crc kubenswrapper[4689]: I1210 12:16:12.724273 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81ec49dd709fb88fdd4a831be61a82471be37ad10b0dca1cb0e34468ef5e0e3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:16:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:12Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:12 crc kubenswrapper[4689]: I1210 12:16:12.736181 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-77s7q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9f25cd73-d88f-4d52-93b6-483589dc4ac4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8ffa3e3c20704e59130d99f82bf7c73982dadfcd06b8b666265324aeb9115ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:16:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kwnnt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T12:16:01Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-77s7q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:12Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:12 crc kubenswrapper[4689]: I1210 12:16:12.774141 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:16:12 crc kubenswrapper[4689]: I1210 12:16:12.774195 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:16:12 crc kubenswrapper[4689]: I1210 12:16:12.774205 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:16:12 crc kubenswrapper[4689]: I1210 12:16:12.774224 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:16:12 crc kubenswrapper[4689]: I1210 12:16:12.774236 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:12Z","lastTransitionTime":"2025-12-10T12:16:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 12:16:12 crc kubenswrapper[4689]: I1210 12:16:12.877237 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:16:12 crc kubenswrapper[4689]: I1210 12:16:12.877303 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:16:12 crc kubenswrapper[4689]: I1210 12:16:12.877322 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:16:12 crc kubenswrapper[4689]: I1210 12:16:12.877346 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:16:12 crc kubenswrapper[4689]: I1210 12:16:12.877364 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:12Z","lastTransitionTime":"2025-12-10T12:16:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 12:16:12 crc kubenswrapper[4689]: I1210 12:16:12.980497 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:16:12 crc kubenswrapper[4689]: I1210 12:16:12.980545 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:16:12 crc kubenswrapper[4689]: I1210 12:16:12.980561 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:16:12 crc kubenswrapper[4689]: I1210 12:16:12.980584 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:16:12 crc kubenswrapper[4689]: I1210 12:16:12.980603 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:12Z","lastTransitionTime":"2025-12-10T12:16:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 12:16:13 crc kubenswrapper[4689]: I1210 12:16:13.083646 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:16:13 crc kubenswrapper[4689]: I1210 12:16:13.083696 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:16:13 crc kubenswrapper[4689]: I1210 12:16:13.083709 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:16:13 crc kubenswrapper[4689]: I1210 12:16:13.083728 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:16:13 crc kubenswrapper[4689]: I1210 12:16:13.083739 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:13Z","lastTransitionTime":"2025-12-10T12:16:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 12:16:13 crc kubenswrapper[4689]: I1210 12:16:13.126727 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-d5s24_b434fe8e-c4c2-4979-a9b6-8561523c2d9d/ovnkube-controller/1.log" Dec 10 12:16:13 crc kubenswrapper[4689]: I1210 12:16:13.127743 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-d5s24_b434fe8e-c4c2-4979-a9b6-8561523c2d9d/ovnkube-controller/0.log" Dec 10 12:16:13 crc kubenswrapper[4689]: I1210 12:16:13.136102 4689 generic.go:334] "Generic (PLEG): container finished" podID="b434fe8e-c4c2-4979-a9b6-8561523c2d9d" containerID="8d09cb58430de4c8d23c0dc3bbe3de52bdf4b79da14b87e8ac847bc47439a5a9" exitCode=1 Dec 10 12:16:13 crc kubenswrapper[4689]: I1210 12:16:13.136172 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-d5s24" event={"ID":"b434fe8e-c4c2-4979-a9b6-8561523c2d9d","Type":"ContainerDied","Data":"8d09cb58430de4c8d23c0dc3bbe3de52bdf4b79da14b87e8ac847bc47439a5a9"} Dec 10 12:16:13 crc kubenswrapper[4689]: I1210 12:16:13.136246 4689 scope.go:117] "RemoveContainer" containerID="5540d337c2977642446fd1ebb59fb99e1486588b986e1fb9bac6f0b95bebc05d" Dec 10 12:16:13 crc kubenswrapper[4689]: I1210 12:16:13.138127 4689 scope.go:117] "RemoveContainer" containerID="8d09cb58430de4c8d23c0dc3bbe3de52bdf4b79da14b87e8ac847bc47439a5a9" Dec 10 12:16:13 crc kubenswrapper[4689]: E1210 12:16:13.138482 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-d5s24_openshift-ovn-kubernetes(b434fe8e-c4c2-4979-a9b6-8561523c2d9d)\"" pod="openshift-ovn-kubernetes/ovnkube-node-d5s24" podUID="b434fe8e-c4c2-4979-a9b6-8561523c2d9d" Dec 10 12:16:13 crc kubenswrapper[4689]: I1210 12:16:13.156549 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81ec49dd709fb88fdd4a831be61a82471be37ad10b0dca1cb0e34468ef5e0e3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:16:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:13Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:13 crc kubenswrapper[4689]: I1210 12:16:13.168090 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-77s7q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9f25cd73-d88f-4d52-93b6-483589dc4ac4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8ffa3e3c20704e59130d99f82bf7c73982dadfcd06b8b666265324aeb9115ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:16:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kwnnt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T12:16:01Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-77s7q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:13Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:13 crc kubenswrapper[4689]: I1210 12:16:13.182353 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef7248d1db79267842514a13623e9e08f8e412e4af33a70b4fceafe7a9e43392\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:13Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:13 crc kubenswrapper[4689]: I1210 12:16:13.190740 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:16:13 crc kubenswrapper[4689]: I1210 12:16:13.190771 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:16:13 crc kubenswrapper[4689]: I1210 12:16:13.190781 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:16:13 crc kubenswrapper[4689]: I1210 12:16:13.190799 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:16:13 crc kubenswrapper[4689]: I1210 12:16:13.190814 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:13Z","lastTransitionTime":"2025-12-10T12:16:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 12:16:13 crc kubenswrapper[4689]: I1210 12:16:13.195095 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:13Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:13 crc kubenswrapper[4689]: I1210 12:16:13.208313 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-r6wmt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3713b4f8-2ee3-4078-859a-dca17076f9a6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://41b7cefca05f1180d687337ba2a408d0db12aad394a7e9619e004758e5d79ca0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:16:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lmrn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T12:16:01Z\\\"}}\" for pod \"openshift-multus\"/\"multus-r6wmt\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:13Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:13 crc kubenswrapper[4689]: I1210 12:16:13.220472 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-dk9hq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4aff4b91-a4af-46bd-93ba-a7c1356fc498\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e400561fd4aa4adc6f0733a71b55aff1a6163a71bf34147aeeb41f95001d8222\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:16:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c2rw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T12:16:04Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-dk9hq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:13Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:13 crc kubenswrapper[4689]: I1210 12:16:13.243455 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"948e2421-6bdf-45d9-b484-3d7cfcbeff5f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73fb5dc9c58a2a2027866853aecf404bcc2babdb9e5bbd3cf0efcb16f38ba351\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e167a00c3a4e40154ca58094cba41a896104fd81044158589ed4d7a06dba9e7d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://394f62bb6c4fa6d9b63f8404087ff96594e2e38240c6c7886ad71fbb7863681f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d3818f3ff6d23a4b5f19b82c3ce7c6a9a0275f981966b78d0f1d5977acfc5da\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cc1b41d65298f8986fa477bbe4462be91f61b46be4420fac49fe9b1cff91d4d3\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-10T12:15:57Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1210 12:15:46.775770 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1210 12:15:46.777515 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-772356655/tls.crt::/tmp/serving-cert-772356655/tls.key\\\\\\\"\\\\nI1210 12:15:55.300683 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1210 12:15:55.311834 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1210 12:15:55.311870 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1210 12:15:55.311907 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1210 12:15:55.311937 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1210 12:15:55.334823 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1210 12:15:55.334877 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1210 12:15:55.334889 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1210 12:15:55.334913 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1210 12:15:55.334923 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1210 12:15:55.334932 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1210 12:15:55.334948 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1210 12:15:55.335346 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1210 12:15:55.337241 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-10T12:15:36Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://63c6b92fa349dafbaaa4e2700676f8c1c53627fc703dba25853c7457bf7eb439\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:34Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9e2628add435f4ba7d9b92d1f4e9cf619c37e8dc954d6fce13ef72581ae775e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9e2628add435f4ba7d9b92d1f4e9cf619c37e8dc954d6fce13ef72581ae775e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T12:15:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T12:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T12:15:32Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:13Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:13 crc kubenswrapper[4689]: I1210 12:16:13.260423 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"18bf8be2-3022-4c8b-a062-e12ddac113a8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://44524b486bd4c6b69eb1219605d0545eaa129605484dfbca87ca741ce71c2991\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://604acbb51bb890eb420955051c89aa5a8f2005ce63f1fe39c09c35dc3c8b4470\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d4addfb661bd4ba43817e6e5db29f30589839e5afe46a8f271e43c98920efdc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a68967b286ccc9c486e05b03201574bfa811ee68b0cbcbb2b0b1a379f5c1bab\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T12:15:32Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:13Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:13 crc kubenswrapper[4689]: I1210 12:16:13.272341 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 10 12:16:13 crc kubenswrapper[4689]: E1210 12:16:13.272681 4689 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-10 12:16:29.272657473 +0000 UTC m=+57.060738621 (durationBeforeRetry 16s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 12:16:13 crc kubenswrapper[4689]: I1210 12:16:13.273000 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 10 12:16:13 crc kubenswrapper[4689]: E1210 12:16:13.273044 4689 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 10 12:16:13 crc kubenswrapper[4689]: I1210 12:16:13.273058 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 10 12:16:13 crc kubenswrapper[4689]: E1210 12:16:13.273103 4689 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-10 12:16:29.273089703 +0000 UTC m=+57.061170861 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 10 12:16:13 crc kubenswrapper[4689]: E1210 12:16:13.273140 4689 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 10 12:16:13 crc kubenswrapper[4689]: E1210 12:16:13.273178 4689 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-10 12:16:29.273167545 +0000 UTC m=+57.061248683 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 10 12:16:13 crc kubenswrapper[4689]: I1210 12:16:13.276337 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:57Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:13Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:13 crc kubenswrapper[4689]: I1210 12:16:13.287582 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-db6zk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a41ebdcd-910f-4669-992d-296e1a92162b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://21eff13bf08e5631078b18a6e63eea07f7ae0881f4d340b92273d47e443fc71c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:16:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68btn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ffc5e2e1cb81aa530273fc5c44b4eb25b760dfffa203e312280e7a1c0150505\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:16:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68btn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T12:16:01Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-db6zk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:13Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:13 crc kubenswrapper[4689]: I1210 12:16:13.292722 4689 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:16:13 crc kubenswrapper[4689]: I1210 12:16:13.292756 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:16:13 crc kubenswrapper[4689]: I1210 12:16:13.292769 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:16:13 crc kubenswrapper[4689]: I1210 12:16:13.292788 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:16:13 crc kubenswrapper[4689]: I1210 12:16:13.292800 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:13Z","lastTransitionTime":"2025-12-10T12:16:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 12:16:13 crc kubenswrapper[4689]: I1210 12:16:13.301633 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-k7kbl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bf732b59-88ab-4673-9d0c-e479b9138b30\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6885c72d7b303bd0ad949b6b9f582f3c1a503d3ac4d1871ce164351cc5231233\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:16:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6jq2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b32c1925876e067185b141b3e107d582b5ae3b422965cded935fc67935907244\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2
c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b32c1925876e067185b141b3e107d582b5ae3b422965cded935fc67935907244\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T12:16:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T12:16:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6jq2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://56edfde6c25c67ed13c859224563601ca60ff86122c0ccfd0eafd009f56c2c6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://56edfde6c25c67ed13c859224563601ca60ff86122c0ccfd0eafd009f56c2c6a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T12:16:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T12:16:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6jq2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6972bdb1edc7d0216e984f43dd93319f5fbe91dbbf251fddcab234566c12694\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6972bdb1edc7d0216e984f43dd93319f5fbe91dbbf251fddcab234566c12694\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T12:16:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T12:16:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/
secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6jq2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://92ed0a4af61a7d20d96a8628e34b3b07cf93fb88bb8c67a2db58c140977d4b8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92ed0a4af61a7d20d96a8628e34b3b07cf93fb88bb8c67a2db58c140977d4b8f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T12:16:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T12:16:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6jq2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a00343d8419f3bd26ca5eebc37892fa34c893db822a472b2d16b79b9a886130\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3a00343d8419f3bd26ca5eebc37892fa34c893db822a472b2d16b79b9a886130\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T12:16:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T12:16:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6jq2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae1b0abfe7d118fce3906b0fb8609b89bb5a7d9a72e77c989248a37fbcaca589\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ae1b0abfe7d118fce3906b0fb8609b89bb5a7d9a72e77c989248a37fbcaca589\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T12:16:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T12:16:10Z\\\"}},\\\"volu
meMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6jq2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T12:16:01Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-k7kbl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:13Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:13 crc kubenswrapper[4689]: I1210 12:16:13.319913 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-d5s24" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b434fe8e-c4c2-4979-a9b6-8561523c2d9d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4dd21d5fefe31117dd9f13896400256a11840488886f8455dc177c0389e942a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:16:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgcnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://69f1e8960af0a51f61c14bd54dc3d3be7189911242699061b6a5e265bf23da21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:16:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgcnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6523d85b2b13f0025f328e66fc54d988a262d66575656c50d792bebc6a422301\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:16:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgcnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4faf578e1c2dcab798cd856befbd2dfc061f7e003ead2304d437041d5ee1dd07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:16:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgcnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eebe28cf16767ec011f1e571cdb5f6babf931f4444e92fccf2f1bb5aa5deeec9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:16:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgcnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://17d173f3388ef7471022163cbdf3e0a2454e42b1ed621b15512414504b2d9723\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:16:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgcnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d09cb58430de4c8d23c0dc3bbe3de52bdf4b79d
a14b87e8ac847bc47439a5a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5540d337c2977642446fd1ebb59fb99e1486588b986e1fb9bac6f0b95bebc05d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-10T12:16:10Z\\\",\\\"message\\\":\\\"ient/informers/externalversions/factory.go:141\\\\nI1210 12:16:10.769217 5964 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1210 12:16:10.769257 5964 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1210 12:16:10.769285 5964 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1210 12:16:10.769302 5964 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1210 12:16:10.769383 5964 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI1210 12:16:10.769408 5964 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI1210 12:16:10.769451 5964 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1210 12:16:10.769334 5964 reflector.go:311] Stopping reflector *v1.UserDefinedNetwork (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/userdefinednetwork/v1/apis/informers/externalversions/factory.go:140\\\\nI1210 12:16:10.769807 5964 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1210 12:16:10.769820 5964 handler.go:208] Removed *v1.Node event handler 7\\\\nI1210 12:16:10.769834 5964 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1210 12:16:10.769853 5964 handler.go:208] Removed *v1.Node event handler 2\\\\nI1210 12:16:10.769902 5964 handler.go:208] Removed *v1.Pod event handler 6\\\\nI1210 12:16:10.770163 5964 factory.go:656] Stopping \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-10T12:16:08Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8d09cb58430de4c8d23c0dc3bbe3de52bdf4b79da14b87e8ac847bc47439a5a9\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-10T12:16:12Z\\\",\\\"message\\\":\\\"2:16:12.092815 6107 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1210 12:16:12.092841 6107 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1210 12:16:12.092842 6107 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1210 12:16:12.092857 6107 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1210 12:16:12.092838 6107 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1210 12:16:12.092885 6107 handler.go:208] Removed *v1.Pod event handler 6\\\\nI1210 12:16:12.092884 6107 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1210 12:16:12.092870 6107 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1210 12:16:12.092906 6107 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1210 12:16:12.092930 6107 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1210 12:16:12.092937 6107 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1210 12:16:12.092944 6107 handler.go:208] Removed *v1.Node event handler 2\\\\nI1210 12:16:12.092954 6107 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1210 12:16:12.092964 6107 factory.go:656] Stopping watch 
factory\\\\nI1210 12:16:12.092962 6107 handler.go:208] Removed *v1.Node event handler 7\\\\nI1210 12:16:12.092991 6107 handler.go:208] Removed *v1.Namespace ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-10T12:16:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgcnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e28b779b8a8693084ab0997decc53b3a8eba61bef24ac90ff251d915f086568c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:16:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgcnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5b
9a4f871b33ab8d7dbb8148e8a118858ae0d529510a8aebda0a3d9db97c137\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b5b9a4f871b33ab8d7dbb8148e8a118858ae0d529510a8aebda0a3d9db97c137\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T12:16:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T12:16:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgcnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T12:16:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-d5s24\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:13Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:13 crc kubenswrapper[4689]: I1210 12:16:13.330775 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06f8726511d5ea564b457eeb6de5b75275d78ee946595d8fbe37e575f3a1b4ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://879d2e122b3f677f755d79b93cd0abed60360a7b58bd685fbc229157c3358bbf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:13Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:13 crc kubenswrapper[4689]: I1210 12:16:13.342004 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:57Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:13Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:13 crc kubenswrapper[4689]: I1210 12:16:13.374321 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 10 12:16:13 crc kubenswrapper[4689]: I1210 12:16:13.374430 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 10 12:16:13 crc kubenswrapper[4689]: E1210 12:16:13.374504 4689 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 10 12:16:13 crc kubenswrapper[4689]: E1210 12:16:13.374544 4689 projected.go:288] Couldn't get configMap 
openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 10 12:16:13 crc kubenswrapper[4689]: E1210 12:16:13.374567 4689 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 10 12:16:13 crc kubenswrapper[4689]: E1210 12:16:13.374627 4689 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 10 12:16:13 crc kubenswrapper[4689]: E1210 12:16:13.374646 4689 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-12-10 12:16:29.374623891 +0000 UTC m=+57.162705059 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 10 12:16:13 crc kubenswrapper[4689]: E1210 12:16:13.374656 4689 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 10 12:16:13 crc kubenswrapper[4689]: E1210 12:16:13.374680 4689 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 10 12:16:13 crc kubenswrapper[4689]: E1210 12:16:13.374746 4689 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-12-10 12:16:29.374722584 +0000 UTC m=+57.162803762 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 10 12:16:13 crc kubenswrapper[4689]: I1210 12:16:13.395432 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:16:13 crc kubenswrapper[4689]: I1210 12:16:13.395508 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:16:13 crc kubenswrapper[4689]: I1210 12:16:13.395532 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:16:13 crc kubenswrapper[4689]: I1210 12:16:13.395563 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:16:13 crc kubenswrapper[4689]: I1210 12:16:13.395586 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:13Z","lastTransitionTime":"2025-12-10T12:16:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 12:16:13 crc kubenswrapper[4689]: I1210 12:16:13.497178 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 10 12:16:13 crc kubenswrapper[4689]: I1210 12:16:13.497448 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 10 12:16:13 crc kubenswrapper[4689]: E1210 12:16:13.497607 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 10 12:16:13 crc kubenswrapper[4689]: E1210 12:16:13.497784 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 10 12:16:13 crc kubenswrapper[4689]: I1210 12:16:13.498078 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:16:13 crc kubenswrapper[4689]: I1210 12:16:13.498119 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:16:13 crc kubenswrapper[4689]: I1210 12:16:13.498135 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:16:13 crc kubenswrapper[4689]: I1210 12:16:13.498162 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:16:13 crc kubenswrapper[4689]: I1210 12:16:13.498192 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:13Z","lastTransitionTime":"2025-12-10T12:16:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 12:16:13 crc kubenswrapper[4689]: I1210 12:16:13.601187 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:16:13 crc kubenswrapper[4689]: I1210 12:16:13.601246 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:16:13 crc kubenswrapper[4689]: I1210 12:16:13.601322 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:16:13 crc kubenswrapper[4689]: I1210 12:16:13.601352 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:16:13 crc kubenswrapper[4689]: I1210 12:16:13.601370 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:13Z","lastTransitionTime":"2025-12-10T12:16:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 12:16:13 crc kubenswrapper[4689]: I1210 12:16:13.704020 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:16:13 crc kubenswrapper[4689]: I1210 12:16:13.704072 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:16:13 crc kubenswrapper[4689]: I1210 12:16:13.704089 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:16:13 crc kubenswrapper[4689]: I1210 12:16:13.704111 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:16:13 crc kubenswrapper[4689]: I1210 12:16:13.704129 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:13Z","lastTransitionTime":"2025-12-10T12:16:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 12:16:13 crc kubenswrapper[4689]: I1210 12:16:13.808011 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:16:13 crc kubenswrapper[4689]: I1210 12:16:13.808089 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:16:13 crc kubenswrapper[4689]: I1210 12:16:13.808113 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:16:13 crc kubenswrapper[4689]: I1210 12:16:13.808144 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:16:13 crc kubenswrapper[4689]: I1210 12:16:13.808166 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:13Z","lastTransitionTime":"2025-12-10T12:16:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 12:16:13 crc kubenswrapper[4689]: I1210 12:16:13.911639 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:16:13 crc kubenswrapper[4689]: I1210 12:16:13.911706 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:16:13 crc kubenswrapper[4689]: I1210 12:16:13.911729 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:16:13 crc kubenswrapper[4689]: I1210 12:16:13.911753 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:16:13 crc kubenswrapper[4689]: I1210 12:16:13.911774 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:13Z","lastTransitionTime":"2025-12-10T12:16:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 12:16:14 crc kubenswrapper[4689]: I1210 12:16:14.014528 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:16:14 crc kubenswrapper[4689]: I1210 12:16:14.014613 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:16:14 crc kubenswrapper[4689]: I1210 12:16:14.014638 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:16:14 crc kubenswrapper[4689]: I1210 12:16:14.014670 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:16:14 crc kubenswrapper[4689]: I1210 12:16:14.014696 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:14Z","lastTransitionTime":"2025-12-10T12:16:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 12:16:14 crc kubenswrapper[4689]: I1210 12:16:14.116758 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:16:14 crc kubenswrapper[4689]: I1210 12:16:14.116797 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:16:14 crc kubenswrapper[4689]: I1210 12:16:14.116807 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:16:14 crc kubenswrapper[4689]: I1210 12:16:14.116820 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:16:14 crc kubenswrapper[4689]: I1210 12:16:14.116830 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:14Z","lastTransitionTime":"2025-12-10T12:16:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 12:16:14 crc kubenswrapper[4689]: I1210 12:16:14.139944 4689 scope.go:117] "RemoveContainer" containerID="8d09cb58430de4c8d23c0dc3bbe3de52bdf4b79da14b87e8ac847bc47439a5a9" Dec 10 12:16:14 crc kubenswrapper[4689]: E1210 12:16:14.140152 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-d5s24_openshift-ovn-kubernetes(b434fe8e-c4c2-4979-a9b6-8561523c2d9d)\"" pod="openshift-ovn-kubernetes/ovnkube-node-d5s24" podUID="b434fe8e-c4c2-4979-a9b6-8561523c2d9d" Dec 10 12:16:14 crc kubenswrapper[4689]: I1210 12:16:14.150825 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:14Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:14 crc kubenswrapper[4689]: I1210 12:16:14.162553 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-r6wmt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3713b4f8-2ee3-4078-859a-dca17076f9a6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://41b7cefca05f1180d687337ba2a408d0db12aad394a7e9619e004758e5d79ca0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:16:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mo
untPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lmrn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T12:16:01Z\\\"}}\" for pod \"openshift-multus\"/\"multus-r6wmt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:14Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:14 crc kubenswrapper[4689]: I1210 12:16:14.171206 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-dk9hq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4aff4b91-a4af-46bd-93ba-a7c1356fc498\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e400561fd4aa4adc6f0733a71b55aff1a6163a71bf34147aeeb41f95001d8222\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:16:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c2rw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.
126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T12:16:04Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-dk9hq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:14Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:14 crc kubenswrapper[4689]: I1210 12:16:14.182674 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef7248d1db79267842514a13623e9e08f8e412e4af33a70b4fceafe7a9e43392\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:14Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:14 crc kubenswrapper[4689]: I1210 12:16:14.194377 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:57Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:14Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:14 crc kubenswrapper[4689]: I1210 12:16:14.204475 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-db6zk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a41ebdcd-910f-4669-992d-296e1a92162b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://21eff13bf08e5631078b18a6e63eea07f7ae0881f4d340b92273d47e443fc71c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:16:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68btn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ffc5e2e1cb81aa530273fc5c44b4eb25b760dfffa203e312280e7a1c0150505\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:16:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68btn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T12:16:01Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-db6zk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:14Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:14 crc kubenswrapper[4689]: I1210 12:16:14.217786 4689 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-additional-cni-plugins-k7kbl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bf732b59-88ab-4673-9d0c-e479b9138b30\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6885c72d7b303bd0ad949b6b9f582f3c1a503d3ac4d1871ce164351cc5231233\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:16:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6jq2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b32c1925876e067185b141b3e107d582b5ae3b422965cded935fc67935907244\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b32c1925876e067185b141b3e107d582b5ae3b422965cded935fc67935907244\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T12:16:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T12:16:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6jq2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://56edfde6c25c67ed13c859224563601ca60ff86122c0ccfd0eafd009f56c2c6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2c
c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://56edfde6c25c67ed13c859224563601ca60ff86122c0ccfd0eafd009f56c2c6a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T12:16:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T12:16:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6jq2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6972bdb1edc7d0216e984f43dd93319f5fbe91dbbf251fddcab234566c12694\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6972bdb1edc7d0216e984f43dd93319f5fbe91dbbf251fddcab234566c12694\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T12:16:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T12:16:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6jq2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://92ed0a4af61a7d20d96a8628e34b3b07cf93fb88bb8c67a2db58c140977d4b8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92ed0a4af61a7d20d96a8628e34b3b07cf93fb88bb8c67a2db58c140977d4b8f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T12:16:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T12:16:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-re
lease\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6jq2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a00343d8419f3bd26ca5eebc37892fa34c893db822a472b2d16b79b9a886130\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3a00343d8419f3bd26ca5eebc37892fa34c893db822a472b2d16b79b9a886130\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T12:16:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T12:16:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6jq2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae1b0abfe7d118fce3906b0fb8609b89bb5a7d9a72e77c989248a37fbcaca589\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ae1b0abfe7d118fce3906b0fb8609b89bb5a7d9a72e77c989248a37fbcaca589\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T12:16:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T12:16:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6jq2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T12:16:01Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-k7kbl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:14Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:14 crc kubenswrapper[4689]: I1210 12:16:14.219468 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:16:14 crc kubenswrapper[4689]: I1210 12:16:14.219497 4689 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:16:14 crc kubenswrapper[4689]: I1210 12:16:14.219505 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:16:14 crc kubenswrapper[4689]: I1210 12:16:14.219520 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:16:14 crc kubenswrapper[4689]: I1210 12:16:14.219529 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:14Z","lastTransitionTime":"2025-12-10T12:16:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 12:16:14 crc kubenswrapper[4689]: I1210 12:16:14.244902 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-d5s24" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b434fe8e-c4c2-4979-a9b6-8561523c2d9d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4dd21d5fefe31117dd9f13896400256a11840488886f8455dc177c0389e942a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:16:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgcnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://69f1e8960af0a51f61c14bd54dc3d3be7189911242699061b6a5e265bf23da21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:16:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgcnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6523d85b2b13f0025f328e66fc54d988a262d66575656c50d792bebc6a422301\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:16:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgcnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4faf578e1c2dcab798cd856befbd2dfc061f7e003ead2304d437041d5ee1dd07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:16:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgcnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eebe28cf16767ec011f1e571cdb5f6babf931f4444e92fccf2f1bb5aa5deeec9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:16:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgcnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://17d173f3388ef7471022163cbdf3e0a2454e42b1ed621b15512414504b2d9723\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:16:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgcnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d09cb58430de4c8d23c0dc3bbe3de52bdf4b79d
a14b87e8ac847bc47439a5a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8d09cb58430de4c8d23c0dc3bbe3de52bdf4b79da14b87e8ac847bc47439a5a9\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-10T12:16:12Z\\\",\\\"message\\\":\\\"2:16:12.092815 6107 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1210 12:16:12.092841 6107 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1210 12:16:12.092842 6107 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1210 12:16:12.092857 6107 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1210 12:16:12.092838 6107 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1210 12:16:12.092885 6107 handler.go:208] Removed *v1.Pod event handler 6\\\\nI1210 12:16:12.092884 6107 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1210 12:16:12.092870 6107 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1210 12:16:12.092906 6107 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1210 12:16:12.092930 6107 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1210 12:16:12.092937 6107 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1210 12:16:12.092944 6107 handler.go:208] Removed *v1.Node event handler 2\\\\nI1210 12:16:12.092954 6107 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1210 12:16:12.092964 6107 factory.go:656] Stopping watch factory\\\\nI1210 12:16:12.092962 6107 handler.go:208] Removed *v1.Node event handler 7\\\\nI1210 12:16:12.092991 6107 handler.go:208] Removed *v1.Namespace ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-10T12:16:11Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-d5s24_openshift-ovn-kubernetes(b434fe8e-c4c2-4979-a9b6-8561523c2d9d)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgcnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e28b779b8a8693084ab0997decc53b3a8eba61bef24ac90ff251d915f086568c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:16:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgcnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5b9a4f871b33ab8d7dbb8148e8a118858ae0d529510a8aebda0a3d9db97c137\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b5b9a4f871b33ab8d7dbb8148e8a118858ae0d529510a8aebda0a3d9db97c137\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T12:16:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T12:16:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgcnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T12:16:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-d5s24\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:14Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:14 crc kubenswrapper[4689]: I1210 12:16:14.263182 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"948e2421-6bdf-45d9-b484-3d7cfcbeff5f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73fb5dc9c58a2a2027866853aecf404bcc2babdb9e5bbd3cf0efcb16f38ba351\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e167a00c3a4e40154ca58094cba41a896104fd81044158589ed4d7a06dba9e7d\\\",\\\"i
mage\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://394f62bb6c4fa6d9b63f8404087ff96594e2e38240c6c7886ad71fbb7863681f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d3818f3ff6d23a4b5f19b82c3ce7c6a9a0275f981966b78d0f1d5977acfc5da\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cc1b41d65298f8986fa477bbe4462be91f61b46be4420fac49fe9b1cff91d4d3\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-10T12:15:57Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1210 12:15:46.775770 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1210 12:15:46.777515 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-772356655/tls.crt::/tmp/serving-cert-772356655/tls.key\\\\\\\"\\\\nI1210 12:15:55.300683 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1210 12:15:55.311834 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1210 12:15:55.311870 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1210 12:15:55.311907 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1210 12:15:55.311937 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1210 12:15:55.334823 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1210 12:15:55.334877 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1210 12:15:55.334889 1 secure_serving.go:69] Use of insecure cipher 
'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1210 12:15:55.334913 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1210 12:15:55.334923 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1210 12:15:55.334932 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1210 12:15:55.334948 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1210 12:15:55.335346 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1210 12:15:55.337241 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-10T12:15:36Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://63c6b92fa349dafbaaa4e2700676f8c1c53627fc703dba25853c7457bf7eb439\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:34Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9e2628add435f4ba7d9b92d1f4e9cf619c37e8dc954d6fce13ef72581ae775e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9e2628add435f4ba7d9b92d1f4e9cf619c37e8dc954d6fce13ef72581ae775e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T12:15:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T12:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T12:15:32Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:14Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:14 crc kubenswrapper[4689]: I1210 12:16:14.278340 4689 status_manager.go:875] "Failed to update status 
for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"18bf8be2-3022-4c8b-a062-e12ddac113a8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://44524b486bd4c6b69eb1219605d0545eaa129605484dfbca87ca741ce71c2991\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://604acbb51bb890eb420955051c89aa5a8f2005ce63f1fe39c09c35dc3c8b4470\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d4addfb661bd4ba43817e6e5db29f30589839e5afe46a8f271e43c98920efdc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a68967b286ccc9c486e05b03201574bfa811ee68
b0cbcbb2b0b1a379f5c1bab\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T12:15:32Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:14Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:14 crc kubenswrapper[4689]: I1210 12:16:14.291210 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06f8726511d5ea564b457eeb6de5b75275d78ee946595d8fbe37e575f3a1b4ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://879d2e122b3f677f755d79b93cd0abed60360a7b58bd685fbc229157c3358bbf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41a
c2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:14Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:14 crc kubenswrapper[4689]: I1210 12:16:14.306089 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:57Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:14Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:14 crc kubenswrapper[4689]: I1210 12:16:14.318099 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81ec49dd709fb88fdd4a831be61a82471be37ad10b0dca1cb0e34468ef5e0e3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:16:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:14Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:14 crc kubenswrapper[4689]: I1210 12:16:14.321808 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:16:14 
crc kubenswrapper[4689]: I1210 12:16:14.321860 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:16:14 crc kubenswrapper[4689]: I1210 12:16:14.321875 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:16:14 crc kubenswrapper[4689]: I1210 12:16:14.321895 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:16:14 crc kubenswrapper[4689]: I1210 12:16:14.321908 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:14Z","lastTransitionTime":"2025-12-10T12:16:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 12:16:14 crc kubenswrapper[4689]: I1210 12:16:14.330006 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-77s7q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9f25cd73-d88f-4d52-93b6-483589dc4ac4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8ffa3e3c20704e59130d99f82bf7c73982dadfcd06b8b666265324aeb9115ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:16:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kwnnt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T12:16:01Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-77s7q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate 
has expired or is not yet valid: current time 2025-12-10T12:16:14Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:14 crc kubenswrapper[4689]: I1210 12:16:14.424123 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:16:14 crc kubenswrapper[4689]: I1210 12:16:14.424173 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:16:14 crc kubenswrapper[4689]: I1210 12:16:14.424187 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:16:14 crc kubenswrapper[4689]: I1210 12:16:14.424206 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:16:14 crc kubenswrapper[4689]: I1210 12:16:14.424221 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:14Z","lastTransitionTime":"2025-12-10T12:16:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 12:16:14 crc kubenswrapper[4689]: I1210 12:16:14.497461 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 10 12:16:14 crc kubenswrapper[4689]: E1210 12:16:14.497659 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 10 12:16:14 crc kubenswrapper[4689]: I1210 12:16:14.527322 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:16:14 crc kubenswrapper[4689]: I1210 12:16:14.527370 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:16:14 crc kubenswrapper[4689]: I1210 12:16:14.527381 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:16:14 crc kubenswrapper[4689]: I1210 12:16:14.527398 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:16:14 crc kubenswrapper[4689]: I1210 12:16:14.527411 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:14Z","lastTransitionTime":"2025-12-10T12:16:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 12:16:14 crc kubenswrapper[4689]: I1210 12:16:14.578807 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-6ffqh"] Dec 10 12:16:14 crc kubenswrapper[4689]: I1210 12:16:14.579619 4689 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-6ffqh" Dec 10 12:16:14 crc kubenswrapper[4689]: I1210 12:16:14.582060 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Dec 10 12:16:14 crc kubenswrapper[4689]: I1210 12:16:14.583089 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Dec 10 12:16:14 crc kubenswrapper[4689]: I1210 12:16:14.607222 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef7248d1db79267842514a13623e9e08f8e412e4af33a70b4fceafe7a9e43392\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:14Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:14 crc kubenswrapper[4689]: I1210 12:16:14.623086 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with 
unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:14Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:14 crc kubenswrapper[4689]: I1210 12:16:14.630482 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:16:14 crc kubenswrapper[4689]: I1210 12:16:14.630524 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:16:14 crc kubenswrapper[4689]: I1210 12:16:14.630535 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:16:14 crc kubenswrapper[4689]: I1210 12:16:14.630553 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:16:14 crc kubenswrapper[4689]: I1210 12:16:14.630566 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:14Z","lastTransitionTime":"2025-12-10T12:16:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 12:16:14 crc kubenswrapper[4689]: I1210 12:16:14.640340 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-r6wmt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3713b4f8-2ee3-4078-859a-dca17076f9a6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://41b7cefca05f1180d687337ba2a408d0db12aad394a7e9619e004758e5d79ca0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:16:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lmrn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126
.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T12:16:01Z\\\"}}\" for pod \"openshift-multus\"/\"multus-r6wmt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:14Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:14 crc kubenswrapper[4689]: I1210 12:16:14.655721 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-dk9hq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4aff4b91-a4af-46bd-93ba-a7c1356fc498\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e400561fd4aa4adc6f0733a71b55aff1a6163a71bf34147aeeb41f95001d8222\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:16:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c2rw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T12:16:04Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-dk9hq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:14Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:14 crc kubenswrapper[4689]: I1210 12:16:14.673937 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"18bf8be2-3022-4c8b-a062-e12ddac113a8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://44524b486bd4c6b69eb1219605d0545eaa129605484dfbca87ca741ce71c2991\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://604acbb51bb890eb420955051c89aa5a8f2005ce63f1fe39c09c35dc3c8b4470\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d4addfb661bd4ba43817e6e5db29f30589839e5afe46a8f271e43c98920efdc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a68967b286ccc9c486e05b03201574bfa811ee68b0cbcbb2b0b1a379f5c1bab\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T12:15:32Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:14Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:14 crc kubenswrapper[4689]: I1210 12:16:14.686785 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/7908b49b-e3d2-4d30-95e0-467f5542d445-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-6ffqh\" (UID: \"7908b49b-e3d2-4d30-95e0-467f5542d445\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-6ffqh" Dec 10 12:16:14 crc kubenswrapper[4689]: I1210 12:16:14.686858 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/7908b49b-e3d2-4d30-95e0-467f5542d445-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-6ffqh\" (UID: \"7908b49b-e3d2-4d30-95e0-467f5542d445\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-6ffqh" Dec 10 12:16:14 crc kubenswrapper[4689]: I1210 12:16:14.686895 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8t2fq\" (UniqueName: \"kubernetes.io/projected/7908b49b-e3d2-4d30-95e0-467f5542d445-kube-api-access-8t2fq\") pod \"ovnkube-control-plane-749d76644c-6ffqh\" (UID: \"7908b49b-e3d2-4d30-95e0-467f5542d445\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-6ffqh" Dec 10 12:16:14 crc kubenswrapper[4689]: I1210 12:16:14.686962 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/7908b49b-e3d2-4d30-95e0-467f5542d445-env-overrides\") pod \"ovnkube-control-plane-749d76644c-6ffqh\" (UID: \"7908b49b-e3d2-4d30-95e0-467f5542d445\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-6ffqh" Dec 10 12:16:14 crc kubenswrapper[4689]: I1210 12:16:14.692799 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:57Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:14Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:14 crc kubenswrapper[4689]: I1210 12:16:14.709667 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-db6zk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a41ebdcd-910f-4669-992d-296e1a92162b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://21eff13bf08e5631078b18a6e63eea07f7ae0881f4d340b92273d47e443fc71c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:16:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68btn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ffc5e2e1cb81aa530273fc5c44b4eb25b760dfffa203e312280e7a1c0150505\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:16:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68btn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T12:16:01Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-db6zk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:14Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:14 crc kubenswrapper[4689]: I1210 12:16:14.733117 4689 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:16:14 crc kubenswrapper[4689]: I1210 12:16:14.733181 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:16:14 crc kubenswrapper[4689]: I1210 12:16:14.733207 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:16:14 crc kubenswrapper[4689]: I1210 12:16:14.733238 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:16:14 crc kubenswrapper[4689]: I1210 12:16:14.733261 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:14Z","lastTransitionTime":"2025-12-10T12:16:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 12:16:14 crc kubenswrapper[4689]: I1210 12:16:14.735205 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-k7kbl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bf732b59-88ab-4673-9d0c-e479b9138b30\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6885c72d7b303bd0ad949b6b9f582f3c1a503d3ac4d1871ce164351cc5231233\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:16:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6jq2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b32c1925876e067185b141b3e107d582b5ae3b422965cded935fc67935907244\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2
c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b32c1925876e067185b141b3e107d582b5ae3b422965cded935fc67935907244\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T12:16:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T12:16:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6jq2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://56edfde6c25c67ed13c859224563601ca60ff86122c0ccfd0eafd009f56c2c6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://56edfde6c25c67ed13c859224563601ca60ff86122c0ccfd0eafd009f56c2c6a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T12:16:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T12:16:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6jq2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6972bdb1edc7d0216e984f43dd93319f5fbe91dbbf251fddcab234566c12694\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6972bdb1edc7d0216e984f43dd93319f5fbe91dbbf251fddcab234566c12694\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T12:16:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T12:16:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/
secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6jq2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://92ed0a4af61a7d20d96a8628e34b3b07cf93fb88bb8c67a2db58c140977d4b8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92ed0a4af61a7d20d96a8628e34b3b07cf93fb88bb8c67a2db58c140977d4b8f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T12:16:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T12:16:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6jq2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a00343d8419f3bd26ca5eebc37892fa34c893db822a472b2d16b79b9a886130\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3a00343d8419f3bd26ca5eebc37892fa34c893db822a472b2d16b79b9a886130\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T12:16:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T12:16:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6jq2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae1b0abfe7d118fce3906b0fb8609b89bb5a7d9a72e77c989248a37fbcaca589\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ae1b0abfe7d118fce3906b0fb8609b89bb5a7d9a72e77c989248a37fbcaca589\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T12:16:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T12:16:10Z\\\"}},\\\"volu
meMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6jq2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T12:16:01Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-k7kbl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:14Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:14 crc kubenswrapper[4689]: I1210 12:16:14.768471 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-d5s24" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b434fe8e-c4c2-4979-a9b6-8561523c2d9d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4dd21d5fefe31117dd9f13896400256a11840488886f8455dc177c0389e942a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:16:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgcnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://69f1e8960af0a51f61c14bd54dc3d3be7189911242699061b6a5e265bf23da21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:16:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgcnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6523d85b2b13f0025f328e66fc54d988a262d66575656c50d792bebc6a422301\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:16:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgcnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4faf578e1c2dcab798cd856befbd2dfc061f7e003ead2304d437041d5ee1dd07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:16:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgcnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eebe28cf16767ec011f1e571cdb5f6babf931f4444e92fccf2f1bb5aa5deeec9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:16:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgcnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://17d173f3388ef7471022163cbdf3e0a2454e42b1ed621b15512414504b2d9723\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:16:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgcnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d09cb58430de4c8d23c0dc3bbe3de52bdf4b79d
a14b87e8ac847bc47439a5a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8d09cb58430de4c8d23c0dc3bbe3de52bdf4b79da14b87e8ac847bc47439a5a9\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-10T12:16:12Z\\\",\\\"message\\\":\\\"2:16:12.092815 6107 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1210 12:16:12.092841 6107 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1210 12:16:12.092842 6107 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1210 12:16:12.092857 6107 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1210 12:16:12.092838 6107 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1210 12:16:12.092885 6107 handler.go:208] Removed *v1.Pod event handler 6\\\\nI1210 12:16:12.092884 6107 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1210 12:16:12.092870 6107 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1210 12:16:12.092906 6107 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1210 12:16:12.092930 6107 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1210 12:16:12.092937 6107 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1210 12:16:12.092944 6107 handler.go:208] Removed *v1.Node event handler 2\\\\nI1210 12:16:12.092954 6107 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1210 12:16:12.092964 6107 factory.go:656] Stopping watch factory\\\\nI1210 12:16:12.092962 6107 handler.go:208] Removed *v1.Node event handler 7\\\\nI1210 12:16:12.092991 6107 handler.go:208] Removed *v1.Namespace ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-10T12:16:11Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-d5s24_openshift-ovn-kubernetes(b434fe8e-c4c2-4979-a9b6-8561523c2d9d)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgcnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e28b779b8a8693084ab0997decc53b3a8eba61bef24ac90ff251d915f086568c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:16:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgcnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5b9a4f871b33ab8d7dbb8148e8a118858ae0d529510a8aebda0a3d9db97c137\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b5b9a4f871b33ab8d7dbb8148e8a118858ae0d529510a8aebda0a3d9db97c137\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T12:16:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T12:16:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgcnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T12:16:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-d5s24\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:14Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:14 crc kubenswrapper[4689]: I1210 12:16:14.787902 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/7908b49b-e3d2-4d30-95e0-467f5542d445-env-overrides\") pod \"ovnkube-control-plane-749d76644c-6ffqh\" (UID: \"7908b49b-e3d2-4d30-95e0-467f5542d445\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-6ffqh" Dec 10 12:16:14 crc kubenswrapper[4689]: I1210 12:16:14.788036 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/7908b49b-e3d2-4d30-95e0-467f5542d445-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-6ffqh\" (UID: \"7908b49b-e3d2-4d30-95e0-467f5542d445\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-6ffqh" Dec 10 12:16:14 crc kubenswrapper[4689]: I1210 12:16:14.788111 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/7908b49b-e3d2-4d30-95e0-467f5542d445-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-6ffqh\" (UID: \"7908b49b-e3d2-4d30-95e0-467f5542d445\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-6ffqh" Dec 10 12:16:14 crc kubenswrapper[4689]: I1210 12:16:14.788163 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8t2fq\" (UniqueName: \"kubernetes.io/projected/7908b49b-e3d2-4d30-95e0-467f5542d445-kube-api-access-8t2fq\") pod \"ovnkube-control-plane-749d76644c-6ffqh\" (UID: \"7908b49b-e3d2-4d30-95e0-467f5542d445\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-6ffqh" Dec 10 12:16:14 crc kubenswrapper[4689]: I1210 12:16:14.788871 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/7908b49b-e3d2-4d30-95e0-467f5542d445-env-overrides\") pod \"ovnkube-control-plane-749d76644c-6ffqh\" (UID: \"7908b49b-e3d2-4d30-95e0-467f5542d445\") 
" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-6ffqh" Dec 10 12:16:14 crc kubenswrapper[4689]: I1210 12:16:14.789378 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/7908b49b-e3d2-4d30-95e0-467f5542d445-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-6ffqh\" (UID: \"7908b49b-e3d2-4d30-95e0-467f5542d445\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-6ffqh" Dec 10 12:16:14 crc kubenswrapper[4689]: I1210 12:16:14.797947 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"948e2421-6bdf-45d9-b484-3d7cfcbeff5f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73fb5dc9c58a2a2027866853aecf404bcc2babdb9e5bbd3cf0efcb16f38ba351\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e167a00c3a4e40154ca58094cba41a896104fd81044158589ed4d7a06dba9e7d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://394f62bb6c4fa6d9b63f8404087ff96594e2e38240c6c7886ad71fbb7863681f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserve
r-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d3818f3ff6d23a4b5f19b82c3ce7c6a9a0275f981966b78d0f1d5977acfc5da\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cc1b41d65298f8986fa477bbe4462be91f61b46be4420fac49fe9b1cff91d4d3\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-10T12:15:57Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1210 12:15:46.775770 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1210 12:15:46.777515 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-772356655/tls.crt::/tmp/serving-cert-772356655/tls.key\\\\\\\"\\\\nI1210 12:15:55.300683 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1210 12:15:55.311834 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1210 12:15:55.311870 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1210 12:15:55.311907 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1210 12:15:55.311937 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1210 12:15:55.334823 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1210 12:15:55.334877 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1210 12:15:55.334889 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1210 12:15:55.334913 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1210 12:15:55.334923 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1210 12:15:55.334932 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1210 12:15:55.334948 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1210 12:15:55.335346 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1210 12:15:55.337241 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-10T12:15:36Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://63c6b92fa349dafbaaa4e2700676f8c1c53627fc703dba25853c7457bf7eb439\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:34Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9e2628add435f4ba7d9b92d1f4e9cf619c37e8dc954d6fce13ef72581ae775e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9e2628add435f4ba7d9b92d1f4e9cf619c37e8dc954d6fce13ef72581ae775e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T12:15:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T12:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T12:15:32Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:14Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:14 crc kubenswrapper[4689]: I1210 12:16:14.808839 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/7908b49b-e3d2-4d30-95e0-467f5542d445-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-6ffqh\" (UID: \"7908b49b-e3d2-4d30-95e0-467f5542d445\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-6ffqh" Dec 10 12:16:14 crc kubenswrapper[4689]: I1210 12:16:14.819507 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-6ffqh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7908b49b-e3d2-4d30-95e0-467f5542d445\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8t2fq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8t2fq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T12:16:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-6ffqh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:14Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:14 crc kubenswrapper[4689]: I1210 12:16:14.821133 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8t2fq\" (UniqueName: 
\"kubernetes.io/projected/7908b49b-e3d2-4d30-95e0-467f5542d445-kube-api-access-8t2fq\") pod \"ovnkube-control-plane-749d76644c-6ffqh\" (UID: \"7908b49b-e3d2-4d30-95e0-467f5542d445\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-6ffqh" Dec 10 12:16:14 crc kubenswrapper[4689]: I1210 12:16:14.836326 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:16:14 crc kubenswrapper[4689]: I1210 12:16:14.836374 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:16:14 crc kubenswrapper[4689]: I1210 12:16:14.836393 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:16:14 crc kubenswrapper[4689]: I1210 12:16:14.836417 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:16:14 crc kubenswrapper[4689]: I1210 12:16:14.836436 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:14Z","lastTransitionTime":"2025-12-10T12:16:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 12:16:14 crc kubenswrapper[4689]: I1210 12:16:14.840395 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06f8726511d5ea564b457eeb6de5b75275d78ee946595d8fbe37e575f3a1b4ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://879d2e122b3f677f755d79b93cd0abed60360a7b58bd685fbc229157c3358bbf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\"
:\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:14Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:14 crc kubenswrapper[4689]: I1210 12:16:14.859570 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:57Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:14Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:14 crc kubenswrapper[4689]: I1210 12:16:14.878404 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81ec49dd709fb88fdd4a831be61a82471be37ad10b0dca1cb0e34468ef5e0e3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:16:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:14Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:14 crc kubenswrapper[4689]: I1210 12:16:14.894775 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-77s7q" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9f25cd73-d88f-4d52-93b6-483589dc4ac4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8ffa3e3c20704e59130d99f82bf7c73982dadfcd06b8b666265324aeb9115ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:16:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kwnnt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T12:16:01Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-77s7q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:14Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:14 crc kubenswrapper[4689]: I1210 12:16:14.899919 4689 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-6ffqh" Dec 10 12:16:14 crc kubenswrapper[4689]: W1210 12:16:14.919583 4689 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7908b49b_e3d2_4d30_95e0_467f5542d445.slice/crio-c9905102441e1b1771477f359f2262dccda164f7e13723795965b387f8594110 WatchSource:0}: Error finding container c9905102441e1b1771477f359f2262dccda164f7e13723795965b387f8594110: Status 404 returned error can't find the container with id c9905102441e1b1771477f359f2262dccda164f7e13723795965b387f8594110 Dec 10 12:16:14 crc kubenswrapper[4689]: I1210 12:16:14.939004 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:16:14 crc kubenswrapper[4689]: I1210 12:16:14.939074 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:16:14 crc kubenswrapper[4689]: I1210 12:16:14.939099 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:16:14 crc kubenswrapper[4689]: I1210 12:16:14.939132 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:16:14 crc kubenswrapper[4689]: I1210 12:16:14.939158 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:14Z","lastTransitionTime":"2025-12-10T12:16:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 12:16:15 crc kubenswrapper[4689]: I1210 12:16:15.041451 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:16:15 crc kubenswrapper[4689]: I1210 12:16:15.041504 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:16:15 crc kubenswrapper[4689]: I1210 12:16:15.041521 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:16:15 crc kubenswrapper[4689]: I1210 12:16:15.041543 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:16:15 crc kubenswrapper[4689]: I1210 12:16:15.041559 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:15Z","lastTransitionTime":"2025-12-10T12:16:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 12:16:15 crc kubenswrapper[4689]: I1210 12:16:15.143798 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-d5s24_b434fe8e-c4c2-4979-a9b6-8561523c2d9d/ovnkube-controller/1.log" Dec 10 12:16:15 crc kubenswrapper[4689]: I1210 12:16:15.144465 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:16:15 crc kubenswrapper[4689]: I1210 12:16:15.144495 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:16:15 crc kubenswrapper[4689]: I1210 12:16:15.144504 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:16:15 crc kubenswrapper[4689]: I1210 12:16:15.144519 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:16:15 crc kubenswrapper[4689]: I1210 12:16:15.144528 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:15Z","lastTransitionTime":"2025-12-10T12:16:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 12:16:15 crc kubenswrapper[4689]: I1210 12:16:15.148466 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-6ffqh" event={"ID":"7908b49b-e3d2-4d30-95e0-467f5542d445","Type":"ContainerStarted","Data":"c9905102441e1b1771477f359f2262dccda164f7e13723795965b387f8594110"} Dec 10 12:16:15 crc kubenswrapper[4689]: I1210 12:16:15.247874 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:16:15 crc kubenswrapper[4689]: I1210 12:16:15.247914 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:16:15 crc kubenswrapper[4689]: I1210 12:16:15.247925 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:16:15 crc kubenswrapper[4689]: I1210 12:16:15.247939 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:16:15 crc kubenswrapper[4689]: I1210 12:16:15.247950 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:15Z","lastTransitionTime":"2025-12-10T12:16:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 12:16:15 crc kubenswrapper[4689]: I1210 12:16:15.350990 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:16:15 crc kubenswrapper[4689]: I1210 12:16:15.351019 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:16:15 crc kubenswrapper[4689]: I1210 12:16:15.351030 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:16:15 crc kubenswrapper[4689]: I1210 12:16:15.351050 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:16:15 crc kubenswrapper[4689]: I1210 12:16:15.351061 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:15Z","lastTransitionTime":"2025-12-10T12:16:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 12:16:15 crc kubenswrapper[4689]: I1210 12:16:15.453411 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:16:15 crc kubenswrapper[4689]: I1210 12:16:15.453455 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:16:15 crc kubenswrapper[4689]: I1210 12:16:15.453470 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:16:15 crc kubenswrapper[4689]: I1210 12:16:15.453489 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:16:15 crc kubenswrapper[4689]: I1210 12:16:15.453506 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:15Z","lastTransitionTime":"2025-12-10T12:16:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 12:16:15 crc kubenswrapper[4689]: I1210 12:16:15.498222 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 10 12:16:15 crc kubenswrapper[4689]: I1210 12:16:15.498234 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 10 12:16:15 crc kubenswrapper[4689]: E1210 12:16:15.498488 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 10 12:16:15 crc kubenswrapper[4689]: E1210 12:16:15.498534 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 10 12:16:15 crc kubenswrapper[4689]: I1210 12:16:15.556720 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:16:15 crc kubenswrapper[4689]: I1210 12:16:15.556791 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:16:15 crc kubenswrapper[4689]: I1210 12:16:15.556805 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:16:15 crc kubenswrapper[4689]: I1210 12:16:15.556831 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:16:15 crc kubenswrapper[4689]: I1210 12:16:15.556847 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:15Z","lastTransitionTime":"2025-12-10T12:16:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 12:16:15 crc kubenswrapper[4689]: I1210 12:16:15.660506 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:16:15 crc kubenswrapper[4689]: I1210 12:16:15.660577 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:16:15 crc kubenswrapper[4689]: I1210 12:16:15.660594 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:16:15 crc kubenswrapper[4689]: I1210 12:16:15.660623 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:16:15 crc kubenswrapper[4689]: I1210 12:16:15.660642 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:15Z","lastTransitionTime":"2025-12-10T12:16:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 12:16:15 crc kubenswrapper[4689]: I1210 12:16:15.733293 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/network-metrics-daemon-2h8hs"] Dec 10 12:16:15 crc kubenswrapper[4689]: I1210 12:16:15.733733 4689 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-2h8hs" Dec 10 12:16:15 crc kubenswrapper[4689]: E1210 12:16:15.733780 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2h8hs" podUID="3a7c472c-c2dd-4a9d-aefb-48ac5cc196f8" Dec 10 12:16:15 crc kubenswrapper[4689]: I1210 12:16:15.757283 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06f8726511d5ea564b457eeb6de5b75275d78ee946595d8fbe37e575f3a1b4ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://879d2e122b3f677f755d79b93cd0abed60360a7b58bd685fbc229157c3358bbf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:15Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:15 crc kubenswrapper[4689]: I1210 12:16:15.763392 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:16:15 crc kubenswrapper[4689]: I1210 12:16:15.763445 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:16:15 crc kubenswrapper[4689]: I1210 12:16:15.763460 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:16:15 crc kubenswrapper[4689]: I1210 12:16:15.763482 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:16:15 crc kubenswrapper[4689]: I1210 12:16:15.763497 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:15Z","lastTransitionTime":"2025-12-10T12:16:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 12:16:15 crc kubenswrapper[4689]: I1210 12:16:15.779582 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:57Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:15Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:15 crc kubenswrapper[4689]: I1210 12:16:15.796833 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-6ffqh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7908b49b-e3d2-4d30-95e0-467f5542d445\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8t2fq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8t2fq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T12:16:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-6ffqh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:15Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:15 crc kubenswrapper[4689]: I1210 12:16:15.813581 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81ec49dd709fb88fdd4a831be61a82471be37ad10b0dca1cb0e34468ef5e0e3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:16:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:15Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:15 crc kubenswrapper[4689]: I1210 12:16:15.829549 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-77s7q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9f25cd73-d88f-4d52-93b6-483589dc4ac4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8ffa3e3c20704e59130d99f82bf7c73982dadfcd06b8b666265324aeb9115ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:16:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kwnnt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T12:16:01Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-77s7q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:15Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:15 crc kubenswrapper[4689]: I1210 12:16:15.845551 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-dk9hq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4aff4b91-a4af-46bd-93ba-a7c1356fc498\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e400561fd4aa4adc6f0733a71b55aff1a6163a71bf34147aeeb41f95001d8222\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:16:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c2rw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T12:16:04Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-dk9hq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:15Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:15 crc kubenswrapper[4689]: I1210 12:16:15.864281 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-2h8hs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a7c472c-c2dd-4a9d-aefb-48ac5cc196f8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kcnt5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kcnt5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T12:16:15Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-2h8hs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:15Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:15 crc kubenswrapper[4689]: I1210 12:16:15.868239 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:16:15 crc kubenswrapper[4689]: I1210 12:16:15.868280 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:16:15 crc kubenswrapper[4689]: I1210 12:16:15.868293 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:16:15 crc kubenswrapper[4689]: I1210 12:16:15.868324 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:16:15 crc kubenswrapper[4689]: I1210 12:16:15.868339 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:15Z","lastTransitionTime":"2025-12-10T12:16:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 12:16:15 crc kubenswrapper[4689]: I1210 12:16:15.888072 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef7248d1db79267842514a13623e9e08f8e412e4af33a70b4fceafe7a9e43392\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:15Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:15 crc kubenswrapper[4689]: I1210 12:16:15.900487 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/3a7c472c-c2dd-4a9d-aefb-48ac5cc196f8-metrics-certs\") pod \"network-metrics-daemon-2h8hs\" (UID: \"3a7c472c-c2dd-4a9d-aefb-48ac5cc196f8\") " pod="openshift-multus/network-metrics-daemon-2h8hs" Dec 10 12:16:15 crc kubenswrapper[4689]: I1210 12:16:15.900599 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kcnt5\" (UniqueName: \"kubernetes.io/projected/3a7c472c-c2dd-4a9d-aefb-48ac5cc196f8-kube-api-access-kcnt5\") pod \"network-metrics-daemon-2h8hs\" (UID: \"3a7c472c-c2dd-4a9d-aefb-48ac5cc196f8\") " pod="openshift-multus/network-metrics-daemon-2h8hs" Dec 10 12:16:15 crc kubenswrapper[4689]: I1210 12:16:15.905140 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:15Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:15 crc kubenswrapper[4689]: I1210 12:16:15.927666 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-r6wmt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3713b4f8-2ee3-4078-859a-dca17076f9a6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://41b7cefca05f1180d687337ba2a408d0db12aad394a7e9619e004758e5d79ca0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:16:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lmrn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T12:16:01Z\\\"}}\" for pod \"openshift-multus\"/\"multus-r6wmt\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:15Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:15 crc kubenswrapper[4689]: I1210 12:16:15.956303 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-k7kbl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bf732b59-88ab-4673-9d0c-e479b9138b30\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6885c72d7b303bd0ad949b6b9f582f3c1a503d3ac4d1871ce164351cc5231233\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:16:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6jq2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b32c1925876e067185b141b3e107d582b5ae3b422965cded935fc67935907244\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b32c1925876e067185b141b3e107d582b5ae3b422965cded935fc67935907244\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T12:16:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T12:16:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.
io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6jq2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://56edfde6c25c67ed13c859224563601ca60ff86122c0ccfd0eafd009f56c2c6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://56edfde6c25c67ed13c859224563601ca60ff86122c0ccfd0eafd009f56c2c6a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T12:16:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T12:16:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6jq2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6972bdb1edc7d0216e984f43dd93319f5fbe91dbbf251fddcab234566c12694\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6972bdb1edc7d0216e984f43dd93319f5fbe91dbbf251fddcab234566c12694\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T12:16:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T12:16:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6jq2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://92ed0a4af61a7d20d96a8628e34b3b07cf93fb88bb8c67a2db58c140977d4b8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92ed0a4af61a7d20d96a8628e34b3b07cf93fb88bb8c67a2db58c140977d4b8f\\\",\\\"exitCode\\\":0,\\\
"finishedAt\\\":\\\"2025-12-10T12:16:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T12:16:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6jq2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a00343d8419f3bd26ca5eebc37892fa34c893db822a472b2d16b79b9a886130\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3a00343d8419f3bd26ca5eebc37892fa34c893db822a472b2d16b79b9a886130\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T12:16:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T12:16:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6jq2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae1b0abfe7d118fce3906b0fb8609b89bb5a7d9a72e77c989248a37fbcaca589\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ae1b0abfe7d118fce3906b0fb8609b89bb5a7d9a72e77c989248a37fbcaca589\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T12:16:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T12:16:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6jq2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T12:16:01Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-k7kbl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-10T12:16:15Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:15 crc kubenswrapper[4689]: I1210 12:16:15.971811 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:16:15 crc kubenswrapper[4689]: I1210 12:16:15.971856 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:16:15 crc kubenswrapper[4689]: I1210 12:16:15.971880 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:16:15 crc kubenswrapper[4689]: I1210 12:16:15.971902 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:16:15 crc kubenswrapper[4689]: I1210 12:16:15.971916 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:15Z","lastTransitionTime":"2025-12-10T12:16:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 12:16:15 crc kubenswrapper[4689]: I1210 12:16:15.989435 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-d5s24" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b434fe8e-c4c2-4979-a9b6-8561523c2d9d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4dd21d5fefe31117dd9f13896400256a11840488886f8455dc177c0389e942a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:16:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgcnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://69f1e8960af0a51f61c14bd54dc3d3be7189911242699061b6a5e265bf23da21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:16:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgcnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6523d85b2b13f0025f328e66fc54d988a262d66575656c50d792bebc6a422301\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:16:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgcnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4faf578e1c2dcab798cd856befbd2dfc061f7e003ead2304d437041d5ee1dd07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:16:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgcnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eebe28cf16767ec011f1e571cdb5f6babf931f4444e92fccf2f1bb5aa5deeec9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:16:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgcnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://17d173f3388ef7471022163cbdf3e0a2454e42b1ed621b15512414504b2d9723\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:16:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgcnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d09cb58430de4c8d23c0dc3bbe3de52bdf4b79d
a14b87e8ac847bc47439a5a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8d09cb58430de4c8d23c0dc3bbe3de52bdf4b79da14b87e8ac847bc47439a5a9\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-10T12:16:12Z\\\",\\\"message\\\":\\\"2:16:12.092815 6107 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1210 12:16:12.092841 6107 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1210 12:16:12.092842 6107 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1210 12:16:12.092857 6107 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1210 12:16:12.092838 6107 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1210 12:16:12.092885 6107 handler.go:208] Removed *v1.Pod event handler 6\\\\nI1210 12:16:12.092884 6107 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1210 12:16:12.092870 6107 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1210 12:16:12.092906 6107 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1210 12:16:12.092930 6107 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1210 12:16:12.092937 6107 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1210 12:16:12.092944 6107 handler.go:208] Removed *v1.Node event handler 2\\\\nI1210 12:16:12.092954 6107 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1210 12:16:12.092964 6107 factory.go:656] Stopping watch factory\\\\nI1210 12:16:12.092962 6107 handler.go:208] Removed *v1.Node event handler 7\\\\nI1210 12:16:12.092991 6107 handler.go:208] Removed *v1.Namespace ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-10T12:16:11Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-d5s24_openshift-ovn-kubernetes(b434fe8e-c4c2-4979-a9b6-8561523c2d9d)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgcnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e28b779b8a8693084ab0997decc53b3a8eba61bef24ac90ff251d915f086568c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:16:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgcnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5b9a4f871b33ab8d7dbb8148e8a118858ae0d529510a8aebda0a3d9db97c137\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b5b9a4f871b33ab8d7dbb8148e8a118858ae0d529510a8aebda0a3d9db97c137\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T12:16:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T12:16:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgcnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T12:16:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-d5s24\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:15Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:16 crc kubenswrapper[4689]: I1210 12:16:16.001284 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/3a7c472c-c2dd-4a9d-aefb-48ac5cc196f8-metrics-certs\") pod \"network-metrics-daemon-2h8hs\" (UID: \"3a7c472c-c2dd-4a9d-aefb-48ac5cc196f8\") " pod="openshift-multus/network-metrics-daemon-2h8hs" Dec 10 12:16:16 crc kubenswrapper[4689]: I1210 12:16:16.001425 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kcnt5\" (UniqueName: \"kubernetes.io/projected/3a7c472c-c2dd-4a9d-aefb-48ac5cc196f8-kube-api-access-kcnt5\") pod \"network-metrics-daemon-2h8hs\" (UID: \"3a7c472c-c2dd-4a9d-aefb-48ac5cc196f8\") " pod="openshift-multus/network-metrics-daemon-2h8hs" Dec 10 12:16:16 crc kubenswrapper[4689]: E1210 12:16:16.001480 4689 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 10 12:16:16 crc kubenswrapper[4689]: E1210 12:16:16.001565 4689 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3a7c472c-c2dd-4a9d-aefb-48ac5cc196f8-metrics-certs podName:3a7c472c-c2dd-4a9d-aefb-48ac5cc196f8 nodeName:}" failed. No retries permitted until 2025-12-10 12:16:16.501542866 +0000 UTC m=+44.289624014 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/3a7c472c-c2dd-4a9d-aefb-48ac5cc196f8-metrics-certs") pod "network-metrics-daemon-2h8hs" (UID: "3a7c472c-c2dd-4a9d-aefb-48ac5cc196f8") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 10 12:16:16 crc kubenswrapper[4689]: I1210 12:16:16.006311 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"948e2421-6bdf-45d9-b484-3d7cfcbeff5f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73fb5dc9c58a2a2027866853aecf404bcc2babdb9e5bbd3cf0efcb16f38ba351\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e167a00c3a4e40154ca58094cba41a896104fd81044158589ed4d7a06dba9e7d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://394f62bb6c4fa6d9b63f8404087ff96594e2e38240c6c7886ad71fbb7863681f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\
\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d3818f3ff6d23a4b5f19b82c3ce7c6a9a0275f981966b78d0f1d5977acfc5da\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cc1b41d65298f8986fa477bbe4462be91f61b46be4420fac49fe9b1cff91d4d3\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-10T12:15:57Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1210 12:15:46.775770 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1210 12:15:46.777515 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-772356655/tls.crt::/tmp/serving-cert-772356655/tls.key\\\\\\\"\\\\nI1210 12:15:55.300683 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1210 12:15:55.311834 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1210 12:15:55.311870 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1210 12:15:55.311907 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1210 12:15:55.311937 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1210 12:15:55.334823 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1210 12:15:55.334877 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1210 12:15:55.334889 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1210 12:15:55.334913 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1210 12:15:55.334923 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1210 12:15:55.334932 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1210 12:15:55.334948 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1210 12:15:55.335346 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1210 12:15:55.337241 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-10T12:15:36Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://63c6b92fa349dafbaaa4e2700676f8c1c53627fc703dba25853c7457bf7eb439\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:34Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9e2628add435f4ba7d9b92d1f4e9cf619c37e8dc954d6fce13ef72581ae775e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9e2628add435f4ba7d9b92d1f4e9cf619c37e8dc954d6fce13ef72581ae775e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T12:15:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T12:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T12:15:32Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:16Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:16 crc kubenswrapper[4689]: I1210 12:16:16.026142 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kcnt5\" (UniqueName: \"kubernetes.io/projected/3a7c472c-c2dd-4a9d-aefb-48ac5cc196f8-kube-api-access-kcnt5\") pod \"network-metrics-daemon-2h8hs\" (UID: \"3a7c472c-c2dd-4a9d-aefb-48ac5cc196f8\") " pod="openshift-multus/network-metrics-daemon-2h8hs" Dec 10 12:16:16 crc kubenswrapper[4689]: I1210 12:16:16.035845 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"18bf8be2-3022-4c8b-a062-e12ddac113a8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://44524b486bd4c6b69eb1219605d0545eaa129605484dfbca87ca741ce71c2991\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://604acbb51bb890eb420955051c89aa5a8f2005ce63f1fe39c09c35dc3c8b4470\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d4addfb661bd4ba43817e6e5db29f30589839e5afe46a8f271e43c98920efdc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a68967b286ccc9c486e05b03201574bfa811ee68b0cbcbb2b0b1a379f5c1bab\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T12:15:32Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:16Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:16 crc kubenswrapper[4689]: I1210 12:16:16.052414 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:57Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:16Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:16 crc kubenswrapper[4689]: I1210 12:16:16.066605 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-db6zk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a41ebdcd-910f-4669-992d-296e1a92162b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://21eff13bf08e5631078b18a6e63eea07f7ae0881f4d340b92273d47e443fc71c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:16:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68btn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ffc5e2e1cb81aa530273fc5c44b4eb25b760dfffa203e312280e7a1c0150505\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\
\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:16:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68btn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T12:16:01Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-db6zk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:16Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:16 crc kubenswrapper[4689]: I1210 12:16:16.074576 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:16:16 crc kubenswrapper[4689]: I1210 12:16:16.074623 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:16:16 crc kubenswrapper[4689]: I1210 12:16:16.074633 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:16:16 crc kubenswrapper[4689]: I1210 12:16:16.074651 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:16:16 crc kubenswrapper[4689]: I1210 12:16:16.074664 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:16Z","lastTransitionTime":"2025-12-10T12:16:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 12:16:16 crc kubenswrapper[4689]: I1210 12:16:16.154139 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-6ffqh" event={"ID":"7908b49b-e3d2-4d30-95e0-467f5542d445","Type":"ContainerStarted","Data":"e5081b88bdb2e39135bca6cf70283aeb3f9541d28ab7336c5fa9d17f28496f7e"} Dec 10 12:16:16 crc kubenswrapper[4689]: I1210 12:16:16.154258 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-6ffqh" event={"ID":"7908b49b-e3d2-4d30-95e0-467f5542d445","Type":"ContainerStarted","Data":"34ba576e7b07f1a070450f2a750ed7f9e1ed7b3023f13340ffadbdbbc2f564bc"} Dec 10 12:16:16 crc kubenswrapper[4689]: I1210 12:16:16.191093 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:16:16 crc kubenswrapper[4689]: I1210 12:16:16.191137 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:16:16 crc kubenswrapper[4689]: I1210 12:16:16.191153 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:16:16 crc kubenswrapper[4689]: I1210 12:16:16.191174 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:16:16 crc kubenswrapper[4689]: I1210 12:16:16.191191 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:16Z","lastTransitionTime":"2025-12-10T12:16:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 12:16:16 crc kubenswrapper[4689]: I1210 12:16:16.195705 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-k7kbl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bf732b59-88ab-4673-9d0c-e479b9138b30\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6885c72d7b303bd0ad949b6b9f582f3c1a503d3ac4d1871ce164351cc5231233\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:16:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6jq2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b32c1925876e067185b141b3e107d582b5ae3b422965cded935fc67935907244\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b32c1925876e067185b141b3e107d582b5ae3b422965cded935fc67935907244\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T12:16:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T12:16:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6jq2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://56edfde6c25c67ed13c859224563601ca60ff86122c0ccfd0eafd009f56c2c6a\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://56edfde6c25c67ed13c859224563601ca60ff86122c0ccfd0eafd009f56c2c6a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T12:16:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T12:16:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6jq2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6972bdb1edc7d0216e984f43dd93319f5fbe91dbbf251fddcab234566c12694\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6972bdb1edc7d0216e984f43dd93319f5fbe91dbbf251fddcab234566c12694\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T12:16:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T12:16:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6jq2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://92ed0a4af61a7d20d96a8628e34b3b07cf93fb88bb8c67a2db58c140977d4b8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92ed0a4af61a7d20d96a8628e34b3b07cf93fb88bb8c67a2db58c140977d4b8f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T12:16:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T12:16:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"
mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6jq2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a00343d8419f3bd26ca5eebc37892fa34c893db822a472b2d16b79b9a886130\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3a00343d8419f3bd26ca5eebc37892fa34c893db822a472b2d16b79b9a886130\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T12:16:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T12:16:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6jq2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae1b0abfe7d118fce3906b0fb8609b89bb5a7d9a72e77c989248a37fbcaca589\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ae1b0abfe7d118fce3906b0fb8609b89bb5a7d9a72e77c989248a37fbcaca589\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T12:16:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T12:16:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6jq2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T12:16:01Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-k7kbl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:16Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:16 crc kubenswrapper[4689]: I1210 12:16:16.221057 4689 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-ovn-kubernetes/ovnkube-node-d5s24" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b434fe8e-c4c2-4979-a9b6-8561523c2d9d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4dd21d5fefe31117dd9f13896400256a11840488886f8455dc177c0389e942a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:16:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgcnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://69f1e8960af0a51f61c14bd54dc3d3be7189911242699061b6a5e265bf23da21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:16:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgcnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6523d85b2b13f0025f328e66fc54d988a262d66575656c50d792bebc6a422301\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36c
dd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:16:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgcnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4faf578e1c2dcab798cd856befbd2dfc061f7e003ead2304d437041d5ee1dd07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:16:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgcnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eebe28cf16767ec011f1e571cdb5f6babf931f4444e92fccf2f1bb5aa5deeec9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:16:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgcnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://17d173f3388ef7471022163cbdf3e0a2454e42b1ed621b15512414504b2d9723\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-con
troller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:16:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgcnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d09cb58430de4c8d23c0dc3bbe3de52bdf4b79da14b87e8ac847bc47439a5a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8d09cb58430de4c8d23c0dc3bbe3de52bdf4b79da14b87e8ac847bc47439a5a9\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-10T12:16:12Z\\\",\\\"message\\\":\\\"2:16:12.092815 6107 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1210 12:16:12.092841 6107 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1210 12:16:12.092842 6107 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1210 12:16:12.092857 6107 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1210 12:16:12.092838 6107 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1210 12:16:12.092885 6107 handler.go:208] Removed *v1.Pod event handler 6\\\\nI1210 12:16:12.092884 6107 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1210 12:16:12.092870 6107 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1210 12:16:12.092906 6107 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1210 12:16:12.092930 6107 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1210 12:16:12.092937 6107 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1210 12:16:12.092944 6107 handler.go:208] Removed *v1.Node event handler 2\\\\nI1210 12:16:12.092954 6107 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1210 12:16:12.092964 6107 factory.go:656] Stopping watch factory\\\\nI1210 12:16:12.092962 6107 handler.go:208] Removed *v1.Node event handler 7\\\\nI1210 12:16:12.092991 6107 handler.go:208] Removed *v1.Namespace ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-10T12:16:11Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-d5s24_openshift-ovn-kubernetes(b434fe8e-c4c2-4979-a9b6-8561523c2d9d)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgcnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e28b779b8a8693084ab0997decc53b3a8eba61bef24ac90ff251d915f086568c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:16:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgcnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5b9a4f871b33ab8d7dbb8148e8a118858ae0d529510a8aebda0a3d9db97c137\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b5b9a4f871b33ab8d7dbb8148e8a118858ae0d529510a8aebda0a3d9db97c137\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T12:16:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T12:16:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgcnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T12:16:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-d5s24\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:16Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:16 crc kubenswrapper[4689]: I1210 12:16:16.234758 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"948e2421-6bdf-45d9-b484-3d7cfcbeff5f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73fb5dc9c58a2a2027866853aecf404bcc2babdb9e5bbd3cf0efcb16f38ba351\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e167a00c3a4e40154ca58094cba41a896104fd81044158589ed4d7a06dba9e7d\\\",\\\"i
mage\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://394f62bb6c4fa6d9b63f8404087ff96594e2e38240c6c7886ad71fbb7863681f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d3818f3ff6d23a4b5f19b82c3ce7c6a9a0275f981966b78d0f1d5977acfc5da\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cc1b41d65298f8986fa477bbe4462be91f61b46be4420fac49fe9b1cff91d4d3\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-10T12:15:57Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1210 12:15:46.775770 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1210 12:15:46.777515 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-772356655/tls.crt::/tmp/serving-cert-772356655/tls.key\\\\\\\"\\\\nI1210 12:15:55.300683 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1210 12:15:55.311834 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1210 12:15:55.311870 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1210 12:15:55.311907 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1210 12:15:55.311937 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1210 12:15:55.334823 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1210 12:15:55.334877 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1210 12:15:55.334889 1 secure_serving.go:69] Use of insecure cipher 
'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1210 12:15:55.334913 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1210 12:15:55.334923 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1210 12:15:55.334932 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1210 12:15:55.334948 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1210 12:15:55.335346 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1210 12:15:55.337241 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-10T12:15:36Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://63c6b92fa349dafbaaa4e2700676f8c1c53627fc703dba25853c7457bf7eb439\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:34Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9e2628add435f4ba7d9b92d1f4e9cf619c37e8dc954d6fce13ef72581ae775e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9e2628add435f4ba7d9b92d1f4e9cf619c37e8dc954d6fce13ef72581ae775e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T12:15:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T12:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T12:15:32Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:16Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:16 crc kubenswrapper[4689]: I1210 12:16:16.252264 4689 status_manager.go:875] "Failed to update status 
for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"18bf8be2-3022-4c8b-a062-e12ddac113a8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://44524b486bd4c6b69eb1219605d0545eaa129605484dfbca87ca741ce71c2991\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://604acbb51bb890eb420955051c89aa5a8f2005ce63f1fe39c09c35dc3c8b4470\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d4addfb661bd4ba43817e6e5db29f30589839e5afe46a8f271e43c98920efdc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a68967b286ccc9c486e05b03201574bfa811ee68
b0cbcbb2b0b1a379f5c1bab\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T12:15:32Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:16Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:16 crc kubenswrapper[4689]: I1210 12:16:16.268680 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:57Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:16Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:16 crc kubenswrapper[4689]: I1210 12:16:16.283911 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-db6zk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a41ebdcd-910f-4669-992d-296e1a92162b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://21eff13bf08e5631078b18a6e63eea07f7ae0881f4d340b92273d47e443fc71c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:16:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68btn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ffc5e2e1cb81aa530273fc5c44b4eb25b760dfffa203e312280e7a1c0150505\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\
\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:16:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68btn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T12:16:01Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-db6zk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:16Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:16 crc kubenswrapper[4689]: I1210 12:16:16.294150 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:16:16 crc kubenswrapper[4689]: I1210 12:16:16.294183 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:16:16 crc kubenswrapper[4689]: I1210 12:16:16.294191 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:16:16 crc kubenswrapper[4689]: I1210 12:16:16.294207 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:16:16 crc kubenswrapper[4689]: I1210 12:16:16.294218 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:16Z","lastTransitionTime":"2025-12-10T12:16:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 12:16:16 crc kubenswrapper[4689]: I1210 12:16:16.304248 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06f8726511d5ea564b457eeb6de5b75275d78ee946595d8fbe37e575f3a1b4ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://879d2e122b3f677f755d79b93cd0abed60360a7b58bd685fbc229157c3358bbf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:16Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:16 crc kubenswrapper[4689]: I1210 12:16:16.321907 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:57Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:16Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:16 crc kubenswrapper[4689]: I1210 12:16:16.337515 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-6ffqh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7908b49b-e3d2-4d30-95e0-467f5542d445\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34ba576e7b07f1a070450f2a750ed7f9e1ed7b3023f13340ffadbdbbc2f564bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:16:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8t2fq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5081b88bdb2e39135bca6cf70283aeb3f9541d28ab7336c5fa9d17f28496f7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:16:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8t2fq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T12:16:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-6ffqh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:16Z is after 2025-08-24T17:21:41Z" Dec 10 
12:16:16 crc kubenswrapper[4689]: I1210 12:16:16.352246 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81ec49dd709fb88fdd4a831be61a82471be37ad10b0dca1cb0e34468ef5e0e3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:16:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:16Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:16 crc kubenswrapper[4689]: I1210 12:16:16.366055 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-77s7q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9f25cd73-d88f-4d52-93b6-483589dc4ac4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8ffa3e3c20704e59130d99f82bf7c73982dadfcd06b8b666265324aeb9115ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:16:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kwnnt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T12:16:01Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-77s7q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:16Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:16 crc kubenswrapper[4689]: I1210 12:16:16.383510 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-dk9hq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4aff4b91-a4af-46bd-93ba-a7c1356fc498\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e400561fd4aa4adc6f0733a71b55aff1a6163a71bf34147aeeb41f95001d8222\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:16:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c2rw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T12:16:04Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-dk9hq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:16Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:16 crc kubenswrapper[4689]: I1210 12:16:16.397151 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:16:16 crc kubenswrapper[4689]: I1210 12:16:16.397236 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:16:16 crc kubenswrapper[4689]: I1210 12:16:16.397262 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:16:16 crc kubenswrapper[4689]: I1210 12:16:16.397294 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:16:16 crc kubenswrapper[4689]: I1210 12:16:16.397317 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:16Z","lastTransitionTime":"2025-12-10T12:16:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 12:16:16 crc kubenswrapper[4689]: I1210 12:16:16.401917 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-2h8hs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a7c472c-c2dd-4a9d-aefb-48ac5cc196f8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kcnt5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kcnt5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T12:16:15Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-2h8hs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:16Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:16 crc kubenswrapper[4689]: I1210 12:16:16.425030 4689 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef7248d1db79267842514a13623e9e08f8e412e4af33a70b4fceafe7a9e43392\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:16Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:16 crc kubenswrapper[4689]: I1210 12:16:16.446774 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:16Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:16 crc kubenswrapper[4689]: I1210 12:16:16.468050 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-r6wmt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3713b4f8-2ee3-4078-859a-dca17076f9a6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://41b7cefca05f1180d687337ba2a408d0db12aad394a7e9619e004758e5d79ca0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:16:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\
\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lmrn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T12:16:01Z\\\"}}\" for pod \"openshift-multus\"/\"multus-r6wmt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:16Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:16 crc kubenswrapper[4689]: I1210 12:16:16.497533 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 10 12:16:16 crc kubenswrapper[4689]: E1210 12:16:16.497735 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 10 12:16:16 crc kubenswrapper[4689]: I1210 12:16:16.500249 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:16:16 crc kubenswrapper[4689]: I1210 12:16:16.500314 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:16:16 crc kubenswrapper[4689]: I1210 12:16:16.500334 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:16:16 crc kubenswrapper[4689]: I1210 12:16:16.500359 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:16:16 crc kubenswrapper[4689]: I1210 12:16:16.500380 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:16Z","lastTransitionTime":"2025-12-10T12:16:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 12:16:16 crc kubenswrapper[4689]: I1210 12:16:16.504943 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/3a7c472c-c2dd-4a9d-aefb-48ac5cc196f8-metrics-certs\") pod \"network-metrics-daemon-2h8hs\" (UID: \"3a7c472c-c2dd-4a9d-aefb-48ac5cc196f8\") " pod="openshift-multus/network-metrics-daemon-2h8hs" Dec 10 12:16:16 crc kubenswrapper[4689]: E1210 12:16:16.505145 4689 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 10 12:16:16 crc kubenswrapper[4689]: E1210 12:16:16.505212 4689 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3a7c472c-c2dd-4a9d-aefb-48ac5cc196f8-metrics-certs podName:3a7c472c-c2dd-4a9d-aefb-48ac5cc196f8 nodeName:}" failed. No retries permitted until 2025-12-10 12:16:17.505190047 +0000 UTC m=+45.293271225 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/3a7c472c-c2dd-4a9d-aefb-48ac5cc196f8-metrics-certs") pod "network-metrics-daemon-2h8hs" (UID: "3a7c472c-c2dd-4a9d-aefb-48ac5cc196f8") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 10 12:16:16 crc kubenswrapper[4689]: I1210 12:16:16.604047 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:16:16 crc kubenswrapper[4689]: I1210 12:16:16.604113 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:16:16 crc kubenswrapper[4689]: I1210 12:16:16.604135 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:16:16 crc kubenswrapper[4689]: I1210 12:16:16.604165 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:16:16 crc kubenswrapper[4689]: I1210 12:16:16.604188 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:16Z","lastTransitionTime":"2025-12-10T12:16:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 12:16:16 crc kubenswrapper[4689]: I1210 12:16:16.706600 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:16:16 crc kubenswrapper[4689]: I1210 12:16:16.706644 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:16:16 crc kubenswrapper[4689]: I1210 12:16:16.706656 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:16:16 crc kubenswrapper[4689]: I1210 12:16:16.706707 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:16:16 crc kubenswrapper[4689]: I1210 12:16:16.706720 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:16Z","lastTransitionTime":"2025-12-10T12:16:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 12:16:17 crc kubenswrapper[4689]: I1210 12:16:17.324181 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:16:17 crc kubenswrapper[4689]: I1210 12:16:17.324265 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:16:17 crc kubenswrapper[4689]: I1210 12:16:17.324282 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:16:17 crc kubenswrapper[4689]: I1210 12:16:17.324306 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:16:17 crc kubenswrapper[4689]: I1210 12:16:17.324327 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:17Z","lastTransitionTime":"2025-12-10T12:16:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
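The status-patch failures recorded above all fail the same way: each PATCH is rejected with "failed calling webhook pod.network-node-identity.openshift.io" because the serving certificate presented at https://127.0.0.1:9743/pod expired on 2025-08-24T17:21:41Z, long before the logged clock time of 2025-12-10. A minimal sketch of how that expiry could be confirmed from the node follows; it assumes Python 3 with the third-party cryptography package and a reachable endpoint, and is illustrative only, not something taken from this log.

    # Sketch: print the notAfter date of the certificate served on 127.0.0.1:9743
    # (host and port come from the webhook URL in the log; nothing else does).
    import socket
    import ssl
    from datetime import datetime, timezone

    from cryptography import x509  # third-party: pip install cryptography

    HOST, PORT = "127.0.0.1", 9743

    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
    ctx.check_hostname = False       # we only want to read the certificate;
    ctx.verify_mode = ssl.CERT_NONE  # verification would fail, it is expired

    with socket.create_connection((HOST, PORT), timeout=5) as sock:
        with ctx.wrap_socket(sock, server_hostname=HOST) as tls:
            der = tls.getpeercert(binary_form=True)  # raw DER bytes

    cert = x509.load_der_x509_certificate(der)
    not_after = cert.not_valid_after.replace(tzinfo=timezone.utc)
    print("notAfter:", not_after.isoformat())  # log shows 2025-08-24T17:21:41Z
    print("expired: ", not_after < datetime.now(timezone.utc))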
Dec 10 12:16:17 crc kubenswrapper[4689]: I1210 12:16:17.427027 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:16:17 crc kubenswrapper[4689]: I1210 12:16:17.427088 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:16:17 crc kubenswrapper[4689]: I1210 12:16:17.427100 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:16:17 crc kubenswrapper[4689]: I1210 12:16:17.427122 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:16:17 crc kubenswrapper[4689]: I1210 12:16:17.427138 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:17Z","lastTransitionTime":"2025-12-10T12:16:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 12:16:17 crc kubenswrapper[4689]: I1210 12:16:17.498222 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2h8hs" Dec 10 12:16:17 crc kubenswrapper[4689]: I1210 12:16:17.498223 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 10 12:16:17 crc kubenswrapper[4689]: E1210 12:16:17.498385 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2h8hs" podUID="3a7c472c-c2dd-4a9d-aefb-48ac5cc196f8" Dec 10 12:16:17 crc kubenswrapper[4689]: E1210 12:16:17.498536 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 10 12:16:17 crc kubenswrapper[4689]: I1210 12:16:17.498240 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 10 12:16:17 crc kubenswrapper[4689]: E1210 12:16:17.498752 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 10 12:16:17 crc kubenswrapper[4689]: I1210 12:16:17.515708 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/3a7c472c-c2dd-4a9d-aefb-48ac5cc196f8-metrics-certs\") pod \"network-metrics-daemon-2h8hs\" (UID: \"3a7c472c-c2dd-4a9d-aefb-48ac5cc196f8\") " pod="openshift-multus/network-metrics-daemon-2h8hs" Dec 10 12:16:17 crc kubenswrapper[4689]: E1210 12:16:17.515893 4689 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 10 12:16:17 crc kubenswrapper[4689]: E1210 12:16:17.516036 4689 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3a7c472c-c2dd-4a9d-aefb-48ac5cc196f8-metrics-certs podName:3a7c472c-c2dd-4a9d-aefb-48ac5cc196f8 nodeName:}" failed. No retries permitted until 2025-12-10 12:16:19.515966448 +0000 UTC m=+47.304047626 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/3a7c472c-c2dd-4a9d-aefb-48ac5cc196f8-metrics-certs") pod "network-metrics-daemon-2h8hs" (UID: "3a7c472c-c2dd-4a9d-aefb-48ac5cc196f8") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 10 12:16:17 crc kubenswrapper[4689]: I1210 12:16:17.531113 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:16:17 crc kubenswrapper[4689]: I1210 12:16:17.531164 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:16:17 crc kubenswrapper[4689]: I1210 12:16:17.531182 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:16:17 crc kubenswrapper[4689]: I1210 12:16:17.531240 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:16:17 crc kubenswrapper[4689]: I1210 12:16:17.531259 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:17Z","lastTransitionTime":"2025-12-10T12:16:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Dec 10 12:16:18 crc kubenswrapper[4689]: I1210 12:16:18.458936 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:16:18 crc kubenswrapper[4689]: I1210 12:16:18.459041 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:16:18 crc kubenswrapper[4689]: I1210 12:16:18.459055 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:16:18 crc kubenswrapper[4689]: I1210 12:16:18.459075 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:16:18 crc kubenswrapper[4689]: I1210 12:16:18.459091 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:18Z","lastTransitionTime":"2025-12-10T12:16:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 12:16:18 crc kubenswrapper[4689]: I1210 12:16:18.497668 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 10 12:16:18 crc kubenswrapper[4689]: E1210 12:16:18.497887 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 10 12:16:18 crc kubenswrapper[4689]: I1210 12:16:18.561398 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:16:18 crc kubenswrapper[4689]: I1210 12:16:18.561453 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:16:18 crc kubenswrapper[4689]: I1210 12:16:18.561471 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:16:18 crc kubenswrapper[4689]: I1210 12:16:18.561494 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:16:18 crc kubenswrapper[4689]: I1210 12:16:18.561511 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:18Z","lastTransitionTime":"2025-12-10T12:16:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 12:16:18 crc kubenswrapper[4689]: I1210 12:16:18.664479 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:16:18 crc kubenswrapper[4689]: I1210 12:16:18.664723 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:16:18 crc kubenswrapper[4689]: I1210 12:16:18.664740 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:16:18 crc kubenswrapper[4689]: I1210 12:16:18.664766 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:16:18 crc kubenswrapper[4689]: I1210 12:16:18.664784 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:18Z","lastTransitionTime":"2025-12-10T12:16:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 12:16:19 crc kubenswrapper[4689]: I1210 12:16:19.284758 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:16:19 crc kubenswrapper[4689]: I1210 12:16:19.284813 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:16:19 crc kubenswrapper[4689]: I1210 12:16:19.284831 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:16:19 crc kubenswrapper[4689]: I1210 12:16:19.284850 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:16:19 crc kubenswrapper[4689]: I1210 12:16:19.284865 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:19Z","lastTransitionTime":"2025-12-10T12:16:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 12:16:19 crc kubenswrapper[4689]: I1210 12:16:19.391142 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:16:19 crc kubenswrapper[4689]: I1210 12:16:19.391222 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:16:19 crc kubenswrapper[4689]: I1210 12:16:19.391241 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:16:19 crc kubenswrapper[4689]: I1210 12:16:19.391268 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:16:19 crc kubenswrapper[4689]: I1210 12:16:19.391287 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:19Z","lastTransitionTime":"2025-12-10T12:16:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 12:16:19 crc kubenswrapper[4689]: I1210 12:16:19.493102 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:16:19 crc kubenswrapper[4689]: I1210 12:16:19.493135 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:16:19 crc kubenswrapper[4689]: I1210 12:16:19.493144 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:16:19 crc kubenswrapper[4689]: I1210 12:16:19.493158 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:16:19 crc kubenswrapper[4689]: I1210 12:16:19.493166 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:19Z","lastTransitionTime":"2025-12-10T12:16:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 12:16:19 crc kubenswrapper[4689]: I1210 12:16:19.497551 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 10 12:16:19 crc kubenswrapper[4689]: I1210 12:16:19.497580 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 10 12:16:19 crc kubenswrapper[4689]: E1210 12:16:19.497638 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 10 12:16:19 crc kubenswrapper[4689]: I1210 12:16:19.497659 4689 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-2h8hs" Dec 10 12:16:19 crc kubenswrapper[4689]: E1210 12:16:19.497831 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 10 12:16:19 crc kubenswrapper[4689]: E1210 12:16:19.497962 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2h8hs" podUID="3a7c472c-c2dd-4a9d-aefb-48ac5cc196f8" Dec 10 12:16:19 crc kubenswrapper[4689]: I1210 12:16:19.538252 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/3a7c472c-c2dd-4a9d-aefb-48ac5cc196f8-metrics-certs\") pod \"network-metrics-daemon-2h8hs\" (UID: \"3a7c472c-c2dd-4a9d-aefb-48ac5cc196f8\") " pod="openshift-multus/network-metrics-daemon-2h8hs" Dec 10 12:16:19 crc kubenswrapper[4689]: E1210 12:16:19.538399 4689 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 10 12:16:19 crc kubenswrapper[4689]: E1210 12:16:19.538460 4689 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3a7c472c-c2dd-4a9d-aefb-48ac5cc196f8-metrics-certs podName:3a7c472c-c2dd-4a9d-aefb-48ac5cc196f8 nodeName:}" failed. No retries permitted until 2025-12-10 12:16:23.53844432 +0000 UTC m=+51.326525478 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/3a7c472c-c2dd-4a9d-aefb-48ac5cc196f8-metrics-certs") pod "network-metrics-daemon-2h8hs" (UID: "3a7c472c-c2dd-4a9d-aefb-48ac5cc196f8") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 10 12:16:19 crc kubenswrapper[4689]: I1210 12:16:19.595249 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:16:19 crc kubenswrapper[4689]: I1210 12:16:19.595294 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:16:19 crc kubenswrapper[4689]: I1210 12:16:19.595306 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:16:19 crc kubenswrapper[4689]: I1210 12:16:19.595320 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:16:19 crc kubenswrapper[4689]: I1210 12:16:19.595330 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:19Z","lastTransitionTime":"2025-12-10T12:16:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 12:16:20 crc kubenswrapper[4689]: I1210 12:16:20.218804 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:16:20 crc kubenswrapper[4689]: I1210 12:16:20.218899 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:16:20 crc kubenswrapper[4689]: I1210 12:16:20.218924 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:16:20 crc kubenswrapper[4689]: I1210 12:16:20.218948 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:16:20 crc kubenswrapper[4689]: I1210 12:16:20.218996 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:20Z","lastTransitionTime":"2025-12-10T12:16:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Has your network provider started?"} Dec 10 12:16:20 crc kubenswrapper[4689]: I1210 12:16:20.323389 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:16:20 crc kubenswrapper[4689]: I1210 12:16:20.323477 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:16:20 crc kubenswrapper[4689]: I1210 12:16:20.323502 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:16:20 crc kubenswrapper[4689]: I1210 12:16:20.323530 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:16:20 crc kubenswrapper[4689]: I1210 12:16:20.323555 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:20Z","lastTransitionTime":"2025-12-10T12:16:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 12:16:20 crc kubenswrapper[4689]: I1210 12:16:20.427378 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:16:20 crc kubenswrapper[4689]: I1210 12:16:20.427462 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:16:20 crc kubenswrapper[4689]: I1210 12:16:20.427480 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:16:20 crc kubenswrapper[4689]: I1210 12:16:20.427506 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:16:20 crc kubenswrapper[4689]: I1210 12:16:20.427523 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:20Z","lastTransitionTime":"2025-12-10T12:16:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 12:16:20 crc kubenswrapper[4689]: I1210 12:16:20.498037 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 10 12:16:20 crc kubenswrapper[4689]: E1210 12:16:20.498221 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 10 12:16:20 crc kubenswrapper[4689]: I1210 12:16:20.530552 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:16:20 crc kubenswrapper[4689]: I1210 12:16:20.530619 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:16:20 crc kubenswrapper[4689]: I1210 12:16:20.530639 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:16:20 crc kubenswrapper[4689]: I1210 12:16:20.530672 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:16:20 crc kubenswrapper[4689]: I1210 12:16:20.530695 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:20Z","lastTransitionTime":"2025-12-10T12:16:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 12:16:20 crc kubenswrapper[4689]: I1210 12:16:20.634027 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:16:20 crc kubenswrapper[4689]: I1210 12:16:20.634100 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:16:20 crc kubenswrapper[4689]: I1210 12:16:20.634116 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:16:20 crc kubenswrapper[4689]: I1210 12:16:20.634142 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:16:20 crc kubenswrapper[4689]: I1210 12:16:20.634158 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:20Z","lastTransitionTime":"2025-12-10T12:16:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 12:16:20 crc kubenswrapper[4689]: I1210 12:16:20.737354 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:16:20 crc kubenswrapper[4689]: I1210 12:16:20.737428 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:16:20 crc kubenswrapper[4689]: I1210 12:16:20.737507 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:16:20 crc kubenswrapper[4689]: I1210 12:16:20.737535 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:16:20 crc kubenswrapper[4689]: I1210 12:16:20.737552 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:20Z","lastTransitionTime":"2025-12-10T12:16:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 12:16:20 crc kubenswrapper[4689]: I1210 12:16:20.840336 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:16:20 crc kubenswrapper[4689]: I1210 12:16:20.840426 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:16:20 crc kubenswrapper[4689]: I1210 12:16:20.840455 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:16:20 crc kubenswrapper[4689]: I1210 12:16:20.840488 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:16:20 crc kubenswrapper[4689]: I1210 12:16:20.840511 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:20Z","lastTransitionTime":"2025-12-10T12:16:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 12:16:20 crc kubenswrapper[4689]: I1210 12:16:20.943461 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:16:20 crc kubenswrapper[4689]: I1210 12:16:20.943575 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:16:20 crc kubenswrapper[4689]: I1210 12:16:20.943598 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:16:20 crc kubenswrapper[4689]: I1210 12:16:20.943627 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:16:20 crc kubenswrapper[4689]: I1210 12:16:20.943647 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:20Z","lastTransitionTime":"2025-12-10T12:16:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 12:16:21 crc kubenswrapper[4689]: I1210 12:16:21.046895 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:16:21 crc kubenswrapper[4689]: I1210 12:16:21.047017 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:16:21 crc kubenswrapper[4689]: I1210 12:16:21.047035 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:16:21 crc kubenswrapper[4689]: I1210 12:16:21.047060 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:16:21 crc kubenswrapper[4689]: I1210 12:16:21.047081 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:21Z","lastTransitionTime":"2025-12-10T12:16:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 12:16:21 crc kubenswrapper[4689]: I1210 12:16:21.150466 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:16:21 crc kubenswrapper[4689]: I1210 12:16:21.150612 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:16:21 crc kubenswrapper[4689]: I1210 12:16:21.150636 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:16:21 crc kubenswrapper[4689]: I1210 12:16:21.150667 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:16:21 crc kubenswrapper[4689]: I1210 12:16:21.150689 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:21Z","lastTransitionTime":"2025-12-10T12:16:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 12:16:21 crc kubenswrapper[4689]: I1210 12:16:21.254050 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:16:21 crc kubenswrapper[4689]: I1210 12:16:21.254142 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:16:21 crc kubenswrapper[4689]: I1210 12:16:21.254162 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:16:21 crc kubenswrapper[4689]: I1210 12:16:21.254191 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:16:21 crc kubenswrapper[4689]: I1210 12:16:21.254212 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:21Z","lastTransitionTime":"2025-12-10T12:16:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 12:16:21 crc kubenswrapper[4689]: I1210 12:16:21.358167 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:16:21 crc kubenswrapper[4689]: I1210 12:16:21.358239 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:16:21 crc kubenswrapper[4689]: I1210 12:16:21.358263 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:16:21 crc kubenswrapper[4689]: I1210 12:16:21.358292 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:16:21 crc kubenswrapper[4689]: I1210 12:16:21.358313 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:21Z","lastTransitionTime":"2025-12-10T12:16:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 12:16:21 crc kubenswrapper[4689]: I1210 12:16:21.462522 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:16:21 crc kubenswrapper[4689]: I1210 12:16:21.462589 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:16:21 crc kubenswrapper[4689]: I1210 12:16:21.462608 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:16:21 crc kubenswrapper[4689]: I1210 12:16:21.462633 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:16:21 crc kubenswrapper[4689]: I1210 12:16:21.462650 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:21Z","lastTransitionTime":"2025-12-10T12:16:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 12:16:21 crc kubenswrapper[4689]: I1210 12:16:21.497118 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 10 12:16:21 crc kubenswrapper[4689]: I1210 12:16:21.497173 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 10 12:16:21 crc kubenswrapper[4689]: I1210 12:16:21.497195 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2h8hs" Dec 10 12:16:21 crc kubenswrapper[4689]: E1210 12:16:21.497315 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 10 12:16:21 crc kubenswrapper[4689]: E1210 12:16:21.497433 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 10 12:16:21 crc kubenswrapper[4689]: E1210 12:16:21.497552 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2h8hs" podUID="3a7c472c-c2dd-4a9d-aefb-48ac5cc196f8" Dec 10 12:16:21 crc kubenswrapper[4689]: I1210 12:16:21.565576 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:16:21 crc kubenswrapper[4689]: I1210 12:16:21.565641 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:16:21 crc kubenswrapper[4689]: I1210 12:16:21.565660 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:16:21 crc kubenswrapper[4689]: I1210 12:16:21.565686 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:16:21 crc kubenswrapper[4689]: I1210 12:16:21.565702 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:21Z","lastTransitionTime":"2025-12-10T12:16:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 12:16:21 crc kubenswrapper[4689]: I1210 12:16:21.669957 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:16:21 crc kubenswrapper[4689]: I1210 12:16:21.670059 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:16:21 crc kubenswrapper[4689]: I1210 12:16:21.670120 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:16:21 crc kubenswrapper[4689]: I1210 12:16:21.670151 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:16:21 crc kubenswrapper[4689]: I1210 12:16:21.670172 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:21Z","lastTransitionTime":"2025-12-10T12:16:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 12:16:21 crc kubenswrapper[4689]: I1210 12:16:21.772894 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:16:21 crc kubenswrapper[4689]: I1210 12:16:21.772961 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:16:21 crc kubenswrapper[4689]: I1210 12:16:21.773036 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:16:21 crc kubenswrapper[4689]: I1210 12:16:21.773105 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:16:21 crc kubenswrapper[4689]: I1210 12:16:21.773130 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:21Z","lastTransitionTime":"2025-12-10T12:16:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 12:16:21 crc kubenswrapper[4689]: I1210 12:16:21.876134 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:16:21 crc kubenswrapper[4689]: I1210 12:16:21.876200 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:16:21 crc kubenswrapper[4689]: I1210 12:16:21.876242 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:16:21 crc kubenswrapper[4689]: I1210 12:16:21.876274 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:16:21 crc kubenswrapper[4689]: I1210 12:16:21.876297 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:21Z","lastTransitionTime":"2025-12-10T12:16:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 12:16:21 crc kubenswrapper[4689]: I1210 12:16:21.979249 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:16:21 crc kubenswrapper[4689]: I1210 12:16:21.979303 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:16:21 crc kubenswrapper[4689]: I1210 12:16:21.979318 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:16:21 crc kubenswrapper[4689]: I1210 12:16:21.979339 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:16:21 crc kubenswrapper[4689]: I1210 12:16:21.979354 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:21Z","lastTransitionTime":"2025-12-10T12:16:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 12:16:22 crc kubenswrapper[4689]: I1210 12:16:22.082894 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:16:22 crc kubenswrapper[4689]: I1210 12:16:22.082945 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:16:22 crc kubenswrapper[4689]: I1210 12:16:22.082963 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:16:22 crc kubenswrapper[4689]: I1210 12:16:22.083019 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:16:22 crc kubenswrapper[4689]: I1210 12:16:22.083041 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:22Z","lastTransitionTime":"2025-12-10T12:16:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 12:16:22 crc kubenswrapper[4689]: I1210 12:16:22.160255 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:16:22 crc kubenswrapper[4689]: I1210 12:16:22.160322 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:16:22 crc kubenswrapper[4689]: I1210 12:16:22.160350 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:16:22 crc kubenswrapper[4689]: I1210 12:16:22.160378 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:16:22 crc kubenswrapper[4689]: I1210 12:16:22.160430 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:22Z","lastTransitionTime":"2025-12-10T12:16:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 12:16:22 crc kubenswrapper[4689]: E1210 12:16:22.182247 4689 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T12:16:22Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T12:16:22Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:22Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T12:16:22Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T12:16:22Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:22Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"488f3058-98a0-4e39-b016-529d0c992401\\\",\\\"systemUUID\\\":\\\"41a6821f-a04f-4a5d-a8e5-790b0745d6fd\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:22Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:22 crc kubenswrapper[4689]: I1210 12:16:22.187274 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:16:22 crc kubenswrapper[4689]: I1210 12:16:22.187375 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 10 12:16:22 crc kubenswrapper[4689]: I1210 12:16:22.187448 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:16:22 crc kubenswrapper[4689]: I1210 12:16:22.187521 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:16:22 crc kubenswrapper[4689]: I1210 12:16:22.187550 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:22Z","lastTransitionTime":"2025-12-10T12:16:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 12:16:22 crc kubenswrapper[4689]: E1210 12:16:22.207706 4689 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T12:16:22Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T12:16:22Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:22Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T12:16:22Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T12:16:22Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:22Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"488f3058-98a0-4e39-b016-529d0c992401\\\",\\\"systemUUID\\\":\\\"41a6821f-a04f-4a5d-a8e5-790b0745d6fd\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:22Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:22 crc kubenswrapper[4689]: I1210 12:16:22.213704 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:16:22 crc kubenswrapper[4689]: I1210 12:16:22.213809 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 10 12:16:22 crc kubenswrapper[4689]: I1210 12:16:22.213829 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:16:22 crc kubenswrapper[4689]: I1210 12:16:22.213854 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:16:22 crc kubenswrapper[4689]: I1210 12:16:22.213871 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:22Z","lastTransitionTime":"2025-12-10T12:16:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 12:16:22 crc kubenswrapper[4689]: E1210 12:16:22.233752 4689 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T12:16:22Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T12:16:22Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:22Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T12:16:22Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T12:16:22Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:22Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"488f3058-98a0-4e39-b016-529d0c992401\\\",\\\"systemUUID\\\":\\\"41a6821f-a04f-4a5d-a8e5-790b0745d6fd\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:22Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:22 crc kubenswrapper[4689]: I1210 12:16:22.238568 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:16:22 crc kubenswrapper[4689]: I1210 12:16:22.238621 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 10 12:16:22 crc kubenswrapper[4689]: I1210 12:16:22.238640 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:16:22 crc kubenswrapper[4689]: I1210 12:16:22.238662 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:16:22 crc kubenswrapper[4689]: I1210 12:16:22.238678 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:22Z","lastTransitionTime":"2025-12-10T12:16:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 12:16:22 crc kubenswrapper[4689]: E1210 12:16:22.257908 4689 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T12:16:22Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T12:16:22Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:22Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T12:16:22Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T12:16:22Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:22Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"488f3058-98a0-4e39-b016-529d0c992401\\\",\\\"systemUUID\\\":\\\"41a6821f-a04f-4a5d-a8e5-790b0745d6fd\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:22Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:22 crc kubenswrapper[4689]: I1210 12:16:22.263070 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:16:22 crc kubenswrapper[4689]: I1210 12:16:22.263125 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 10 12:16:22 crc kubenswrapper[4689]: I1210 12:16:22.263142 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:16:22 crc kubenswrapper[4689]: I1210 12:16:22.263165 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:16:22 crc kubenswrapper[4689]: I1210 12:16:22.263182 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:22Z","lastTransitionTime":"2025-12-10T12:16:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 12:16:22 crc kubenswrapper[4689]: E1210 12:16:22.282751 4689 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T12:16:22Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T12:16:22Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:22Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T12:16:22Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T12:16:22Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:22Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"488f3058-98a0-4e39-b016-529d0c992401\\\",\\\"systemUUID\\\":\\\"41a6821f-a04f-4a5d-a8e5-790b0745d6fd\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:22Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:22 crc kubenswrapper[4689]: E1210 12:16:22.283055 4689 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Dec 10 12:16:22 crc kubenswrapper[4689]: I1210 12:16:22.285592 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
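The root cause of the repeated "Error updating node status, will retry" failures above is stated in the error text itself: the serving certificate for the node.network-node-identity.openshift.io webhook expired on 2025-08-24T17:21:41Z, months before the node's clock reached 2025-12-10T12:16:22Z, so every TLS handshake to https://127.0.0.1:9743 fails verification and the patch is rejected before it reaches the API server. The retried patch payloads, byte-identical to the first attempt, are omitted here. A minimal Go sketch of the same validity-window check that x509 verification performs, assuming the webhook's serving certificate is readable at the /etc/webhook-cert/ mount that appears later in this log (the tls.crt file name is an assumption):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func main() {
	// Assumed location: the webhook container mounts its serving cert under
	// /etc/webhook-cert/ (see the volumeMounts later in this log); the file
	// name tls.crt is a guess.
	pemBytes, err := os.ReadFile("/etc/webhook-cert/tls.crt")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	block, _ := pem.Decode(pemBytes)
	if block == nil {
		fmt.Fprintln(os.Stderr, "no PEM block found")
		os.Exit(1)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	now := time.Now().UTC()
	fmt.Printf("NotBefore=%s NotAfter=%s now=%s\n",
		cert.NotBefore.Format(time.RFC3339),
		cert.NotAfter.Format(time.RFC3339),
		now.Format(time.RFC3339))
	// Mirrors the "certificate has expired or is not yet valid" branch of verification.
	if now.After(cert.NotAfter) {
		fmt.Printf("expired: current time %s is after %s\n",
			now.Format(time.RFC3339), cert.NotAfter.Format(time.RFC3339))
	}
}

Run on the node, this would print a NotAfter matching the 2025-08-24T17:21:41Z in the log, which points at rotating the webhook's serving certificate rather than at anything on the kubelet side.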
event="NodeHasSufficientMemory" Dec 10 12:16:22 crc kubenswrapper[4689]: I1210 12:16:22.285650 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:16:22 crc kubenswrapper[4689]: I1210 12:16:22.285668 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:16:22 crc kubenswrapper[4689]: I1210 12:16:22.285693 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:16:22 crc kubenswrapper[4689]: I1210 12:16:22.285712 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:22Z","lastTransitionTime":"2025-12-10T12:16:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 12:16:22 crc kubenswrapper[4689]: I1210 12:16:22.388859 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:16:22 crc kubenswrapper[4689]: I1210 12:16:22.388906 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:16:22 crc kubenswrapper[4689]: I1210 12:16:22.388922 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:16:22 crc kubenswrapper[4689]: I1210 12:16:22.388944 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:16:22 crc kubenswrapper[4689]: I1210 12:16:22.388961 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:22Z","lastTransitionTime":"2025-12-10T12:16:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 12:16:22 crc kubenswrapper[4689]: I1210 12:16:22.492723 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:16:22 crc kubenswrapper[4689]: I1210 12:16:22.492803 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:16:22 crc kubenswrapper[4689]: I1210 12:16:22.492827 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:16:22 crc kubenswrapper[4689]: I1210 12:16:22.492859 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:16:22 crc kubenswrapper[4689]: I1210 12:16:22.492879 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:22Z","lastTransitionTime":"2025-12-10T12:16:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 12:16:22 crc kubenswrapper[4689]: I1210 12:16:22.498090 4689 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 10 12:16:22 crc kubenswrapper[4689]: E1210 12:16:22.498260 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 10 12:16:22 crc kubenswrapper[4689]: I1210 12:16:22.515909 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06f8726511d5ea564b457eeb6de5b75275d78ee946595d8fbe37e575f3a1b4ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://879d2e122b3f677f755d79b93cd0abed60360a7b58bd685fbc229157c3358bbf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:22Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:22 crc kubenswrapper[4689]: I1210 12:16:22.531728 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:57Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:22Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:22 crc kubenswrapper[4689]: I1210 12:16:22.548716 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-6ffqh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7908b49b-e3d2-4d30-95e0-467f5542d445\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34ba576e7b07f1a070450f2a750ed7f9e1ed7b3023f13340ffadbdbbc2f564bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:16:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8t2fq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5081b88bdb2e39135bca6cf70283aeb3f9541d28ab7336c5fa9d17f28496f7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2
099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:16:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8t2fq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T12:16:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-6ffqh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:22Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:22 crc kubenswrapper[4689]: I1210 12:16:22.561701 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81ec49dd709fb88fdd4a831be61a82471be37ad10b0dca1cb0e34468ef5e0e3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:16:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:22Z is after 
2025-08-24T17:21:41Z" Dec 10 12:16:22 crc kubenswrapper[4689]: I1210 12:16:22.573459 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-77s7q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9f25cd73-d88f-4d52-93b6-483589dc4ac4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8ffa3e3c20704e59130d99f82bf7c73982dadfcd06b8b666265324aeb9115ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:16:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kwnnt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T12:16:01Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-77s7q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:22Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:22 crc kubenswrapper[4689]: I1210 12:16:22.586638 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef7248d1db79267842514a13623e9e08f8e412e4af33a70b4fceafe7a9e43392\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:22Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:22 crc kubenswrapper[4689]: I1210 12:16:22.595561 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:16:22 crc kubenswrapper[4689]: I1210 12:16:22.595699 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:16:22 crc kubenswrapper[4689]: I1210 12:16:22.595722 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:16:22 crc kubenswrapper[4689]: I1210 12:16:22.595745 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:16:22 crc kubenswrapper[4689]: I1210 12:16:22.595800 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:22Z","lastTransitionTime":"2025-12-10T12:16:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 12:16:22 crc kubenswrapper[4689]: I1210 12:16:22.599262 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:22Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:22 crc kubenswrapper[4689]: I1210 12:16:22.616849 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-r6wmt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3713b4f8-2ee3-4078-859a-dca17076f9a6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://41b7cefca05f1180d687337ba2a408d0db12aad394a7e9619e004758e5d79ca0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:16:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lmrn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T12:16:01Z\\\"}}\" for pod \"openshift-multus\"/\"multus-r6wmt\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:22Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:22 crc kubenswrapper[4689]: I1210 12:16:22.624241 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 10 12:16:22 crc kubenswrapper[4689]: I1210 12:16:22.633862 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-dk9hq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4aff4b91-a4af-46bd-93ba-a7c1356fc498\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e400561fd4aa4adc6f0733a71b55aff1a6163a71bf34147aeeb41f95001d8222\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:16:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c2rw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T12:16:04Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-dk9hq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:22Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:22 crc kubenswrapper[4689]: I1210 12:16:22.639201 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"] Dec 10 12:16:22 crc kubenswrapper[4689]: I1210 12:16:22.653259 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-2h8hs" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a7c472c-c2dd-4a9d-aefb-48ac5cc196f8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kcnt5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kcnt5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T12:16:15Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-2h8hs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:22Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:22 crc kubenswrapper[4689]: I1210 12:16:22.668343 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"948e2421-6bdf-45d9-b484-3d7cfcbeff5f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73fb5dc9c58a2a2027866853aecf404bcc2babdb9e5bbd3cf0efcb16f38ba351\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e167a00c3a4e40154ca58094cba41a896104fd81044158589ed4d7a06dba9e7d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://394f62bb6c4fa6d9b63f8404087ff96594e2e38240c6c7886ad71fbb7863681f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d3818f3ff6d23a4b5f19b82c3ce7c6a9a0275f981966b78d0f1d5977acfc5da\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cc1b41d65298f8986fa477bbe4462be91f61b46be4420fac49fe9b1cff91d4d3\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-10T12:15:57Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1210 12:15:46.775770 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1210 12:15:46.777515 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-772356655/tls.crt::/tmp/serving-cert-772356655/tls.key\\\\\\\"\\\\nI1210 12:15:55.300683 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1210 12:15:55.311834 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1210 12:15:55.311870 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1210 12:15:55.311907 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1210 12:15:55.311937 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1210 12:15:55.334823 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1210 12:15:55.334877 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1210 12:15:55.334889 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1210 12:15:55.334913 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1210 12:15:55.334923 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1210 12:15:55.334932 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1210 12:15:55.334948 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1210 12:15:55.335346 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1210 12:15:55.337241 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-10T12:15:36Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://63c6b92fa349dafbaaa4e2700676f8c1c53627fc703dba25853c7457bf7eb439\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:34Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9e2628add435f4ba7d9b92d1f4e9cf619c37e8dc954d6fce13ef72581ae775e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9e2628add435f4ba7d9b92d1f4e9cf619c37e8dc954d6fce13ef72581ae775e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T12:15:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T12:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T12:15:32Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:22Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:22 crc kubenswrapper[4689]: I1210 12:16:22.685846 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"18bf8be2-3022-4c8b-a062-e12ddac113a8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://44524b486bd4c6b69eb1219605d0545eaa129605484dfbca87ca741ce71c2991\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://604acbb51bb890eb420955051c89aa5a8f2005ce63f1fe39c09c35dc3c8b4470\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d4addfb661bd4ba43817e6e5db29f30589839e5afe46a8f271e43c98920efdc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a68967b286ccc9c486e05b03201574bfa811ee68b0cbcbb2b0b1a379f5c1bab\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T12:15:32Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:22Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:22 crc kubenswrapper[4689]: I1210 12:16:22.698170 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:16:22 crc kubenswrapper[4689]: I1210 12:16:22.698240 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:16:22 crc kubenswrapper[4689]: I1210 12:16:22.698264 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:16:22 crc kubenswrapper[4689]: I1210 12:16:22.698296 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:16:22 crc kubenswrapper[4689]: I1210 12:16:22.698322 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:22Z","lastTransitionTime":"2025-12-10T12:16:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 12:16:22 crc kubenswrapper[4689]: I1210 12:16:22.701110 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:57Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:22Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:22 crc kubenswrapper[4689]: I1210 12:16:22.713223 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-db6zk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a41ebdcd-910f-4669-992d-296e1a92162b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://21eff13bf08e5631078b18a6e63eea07f7ae0881f4d340b92273d47e443fc71c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:16:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68btn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ffc5e2e1cb81aa530273fc5c44b4eb25b760dfffa203e312280e7a1c0150505\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:16:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68btn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T12:16:01Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-db6zk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:22Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:22 crc kubenswrapper[4689]: I1210 12:16:22.732256 4689 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-additional-cni-plugins-k7kbl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bf732b59-88ab-4673-9d0c-e479b9138b30\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6885c72d7b303bd0ad949b6b9f582f3c1a503d3ac4d1871ce164351cc5231233\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:16:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6jq2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b32c1925876e067185b141b3e107d582b5ae3b422965cded935fc67935907244\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b32c1925876e067185b141b3e107d582b5ae3b422965cded935fc67935907244\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T12:16:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T12:16:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6jq2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://56edfde6c25c67ed13c859224563601ca60ff86122c0ccfd0eafd009f56c2c6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2c
c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://56edfde6c25c67ed13c859224563601ca60ff86122c0ccfd0eafd009f56c2c6a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T12:16:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T12:16:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6jq2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6972bdb1edc7d0216e984f43dd93319f5fbe91dbbf251fddcab234566c12694\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6972bdb1edc7d0216e984f43dd93319f5fbe91dbbf251fddcab234566c12694\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T12:16:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T12:16:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6jq2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://92ed0a4af61a7d20d96a8628e34b3b07cf93fb88bb8c67a2db58c140977d4b8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92ed0a4af61a7d20d96a8628e34b3b07cf93fb88bb8c67a2db58c140977d4b8f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T12:16:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T12:16:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-re
lease\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6jq2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a00343d8419f3bd26ca5eebc37892fa34c893db822a472b2d16b79b9a886130\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3a00343d8419f3bd26ca5eebc37892fa34c893db822a472b2d16b79b9a886130\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T12:16:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T12:16:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6jq2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae1b0abfe7d118fce3906b0fb8609b89bb5a7d9a72e77c989248a37fbcaca589\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ae1b0abfe7d118fce3906b0fb8609b89bb5a7d9a72e77c989248a37fbcaca589\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T12:16:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T12:16:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6jq2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T12:16:01Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-k7kbl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:22Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:22 crc kubenswrapper[4689]: I1210 12:16:22.754623 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-d5s24" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b434fe8e-c4c2-4979-a9b6-8561523c2d9d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4dd21d5fefe31117dd9f13896400256a11840488886f8455dc177c0389e942a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:16:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgcnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://69f1e8960af0a51f61c14bd54dc3d3be7189911242699061b6a5e265bf23da21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:16:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgcnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6523d85b2b13f0025f328e66fc54d988a262d66575656c50d792bebc6a422301\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:16:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgcnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4faf578e1c2dcab798cd856befbd2dfc061f7e003ead2304d437041d5ee1dd07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:16:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgcnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eebe28cf16767ec011f1e571cdb5f6babf931f4444e92fccf2f1bb5aa5deeec9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:16:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgcnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://17d173f3388ef7471022163cbdf3e0a2454e42b1ed621b15512414504b2d9723\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:16:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgcnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d09cb58430de4c8d23c0dc3bbe3de52bdf4b79da14b87e8ac847bc47439a5a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8d09cb58430de4c8d23c0dc3bbe3de52bdf4b79da14b87e8ac847bc47439a5a9\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-10T12:16:12Z\\\",\\\"message\\\":\\\"2:16:12.092815 6107 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1210 12:16:12.092841 6107 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1210 12:16:12.092842 6107 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1210 12:16:12.092857 6107 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1210 12:16:12.092838 6107 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1210 12:16:12.092885 6107 handler.go:208] Removed *v1.Pod event handler 6\\\\nI1210 12:16:12.092884 6107 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1210 12:16:12.092870 6107 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1210 12:16:12.092906 6107 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1210 12:16:12.092930 6107 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1210 12:16:12.092937 6107 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1210 12:16:12.092944 6107 handler.go:208] Removed *v1.Node event handler 2\\\\nI1210 12:16:12.092954 6107 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1210 12:16:12.092964 6107 factory.go:656] Stopping watch factory\\\\nI1210 12:16:12.092962 6107 handler.go:208] Removed *v1.Node event handler 7\\\\nI1210 12:16:12.092991 6107 handler.go:208] Removed *v1.Namespace ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-10T12:16:11Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-d5s24_openshift-ovn-kubernetes(b434fe8e-c4c2-4979-a9b6-8561523c2d9d)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgcnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e28b779b8a8693084ab0997decc53b3a8eba61bef24ac90ff251d915f086568c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:16:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgcnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5b9a4f871b33ab8d7dbb8148e8a118858ae0d529510a8aebda0a3d9db97c137\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b5b9a4f871b33ab8d7dbb8148e8a118858ae0d529510a8aebda0a3d9db97c137\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T12:16:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T12:16:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgcnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T12:16:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-d5s24\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:22Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:22 crc kubenswrapper[4689]: I1210 12:16:22.766635 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:57Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:22Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:22 crc kubenswrapper[4689]: I1210 12:16:22.778429 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-db6zk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a41ebdcd-910f-4669-992d-296e1a92162b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://21eff13bf08e5631078b18a6e63eea07f7ae0881f4d340b92273d47e443fc71c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:16:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68btn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ffc5e2e1cb81aa530273fc5c44b4eb25b760dfffa203e312280e7a1c0150505\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\
\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:16:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68btn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T12:16:01Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-db6zk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:22Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:22 crc kubenswrapper[4689]: I1210 12:16:22.794264 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-k7kbl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bf732b59-88ab-4673-9d0c-e479b9138b30\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6885c72d7b303bd0ad949b6b9f582f3c1a503d3ac4d1871ce164351cc5231233\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:16:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6jq2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b32c1925876e067185b141b3e107d582b5ae3b422965cded935fc67935907244\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\
"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b32c1925876e067185b141b3e107d582b5ae3b422965cded935fc67935907244\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T12:16:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T12:16:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6jq2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://56edfde6c25c67ed13c859224563601ca60ff86122c0ccfd0eafd009f56c2c6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://56edfde6c25c67ed13c859224563601ca60ff86122c0ccfd0eafd009f56c2c6a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T12:16:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T12:16:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6jq2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6972bdb1edc7d0216e984f43dd93319f5fbe91dbbf251fddcab234566c12694\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6972bdb1edc7d0216e984f43dd93319f5fbe91dbbf251fddcab234566c12694\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T12:16:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T12:16:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount
\\\",\\\"name\\\":\\\"kube-api-access-6jq2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://92ed0a4af61a7d20d96a8628e34b3b07cf93fb88bb8c67a2db58c140977d4b8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92ed0a4af61a7d20d96a8628e34b3b07cf93fb88bb8c67a2db58c140977d4b8f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T12:16:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T12:16:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6jq2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a00343d8419f3bd26ca5eebc37892fa34c893db822a472b2d16b79b9a886130\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3a00343d8419f3bd26ca5eebc37892fa34c893db822a472b2d16b79b9a886130\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T12:16:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T12:16:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6jq2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae1b0abfe7d118fce3906b0fb8609b89bb5a7d9a72e77c989248a37fbcaca589\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ae1b0abfe7d118fce3906b0fb8609b89bb5a7d9a72e77c989248a37fbcaca589\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T12:16:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T12:16:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\
"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6jq2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T12:16:01Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-k7kbl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:22Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:22 crc kubenswrapper[4689]: I1210 12:16:22.800791 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:16:22 crc kubenswrapper[4689]: I1210 12:16:22.800827 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:16:22 crc kubenswrapper[4689]: I1210 12:16:22.800836 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:16:22 crc kubenswrapper[4689]: I1210 12:16:22.800850 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:16:22 crc kubenswrapper[4689]: I1210 12:16:22.800860 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:22Z","lastTransitionTime":"2025-12-10T12:16:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 12:16:22 crc kubenswrapper[4689]: I1210 12:16:22.815522 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-d5s24" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b434fe8e-c4c2-4979-a9b6-8561523c2d9d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4dd21d5fefe31117dd9f13896400256a11840488886f8455dc177c0389e942a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:16:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgcnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://69f1e8960af0a51f61c14bd54dc3d3be7189911242699061b6a5e265bf23da21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:16:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgcnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\
":\\\"cri-o://6523d85b2b13f0025f328e66fc54d988a262d66575656c50d792bebc6a422301\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:16:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgcnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4faf578e1c2dcab798cd856befbd2dfc061f7e003ead2304d437041d5ee1dd07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:16:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgcnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eebe28cf16767ec011f1e571cdb5f6babf931f4444e92fccf2f1bb5aa5deeec9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:16:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgcnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://17d173f3388ef7471022163cbdf3e0a2454e42b1ed621b15512414504b2d9723\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:16:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgcnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d09cb58430de4c8d23c0dc3bbe3de52bdf4b79da14b87e8ac847bc47439a5a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8d09cb58430de4c8d23c0dc3bbe3de52bdf4b79da14b87e8ac847bc47439a5a9\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-10T12:16:12Z\\\",\\\"message\\\":\\\"2:16:12.092815 6107 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1210 12:16:12.092841 6107 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1210 12:16:12.092842 6107 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1210 12:16:12.092857 6107 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1210 12:16:12.092838 6107 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1210 12:16:12.092885 6107 handler.go:208] Removed *v1.Pod event handler 6\\\\nI1210 12:16:12.092884 6107 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1210 12:16:12.092870 6107 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1210 12:16:12.092906 6107 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1210 12:16:12.092930 6107 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1210 12:16:12.092937 6107 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1210 12:16:12.092944 6107 handler.go:208] Removed *v1.Node event handler 2\\\\nI1210 12:16:12.092954 6107 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1210 12:16:12.092964 6107 factory.go:656] Stopping watch factory\\\\nI1210 12:16:12.092962 6107 handler.go:208] Removed *v1.Node event handler 7\\\\nI1210 12:16:12.092991 6107 handler.go:208] Removed *v1.Namespace ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-10T12:16:11Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-d5s24_openshift-ovn-kubernetes(b434fe8e-c4c2-4979-a9b6-8561523c2d9d)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgcnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e28b779b8a8693084ab0997decc53b3a8eba61bef24ac90ff251d915f086568c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:16:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgcnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5b9a4f871b33ab8d7dbb8148e8a118858ae0d529510a8aebda0a3d9db97c137\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b5b9a4f871b33ab8d7dbb8148e8a118858ae0d529510a8aebda0a3d9db97c137\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T12:16:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T12:16:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgcnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T12:16:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-d5s24\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:22Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:22 crc kubenswrapper[4689]: I1210 12:16:22.829558 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"948e2421-6bdf-45d9-b484-3d7cfcbeff5f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73fb5dc9c58a2a2027866853aecf404bcc2babdb9e5bbd3cf0efcb16f38ba351\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e167a00c3a4e40154ca58094cba41a896104fd81044158589ed4d7a06dba9e7d\\\",\\\"i
mage\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://394f62bb6c4fa6d9b63f8404087ff96594e2e38240c6c7886ad71fbb7863681f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d3818f3ff6d23a4b5f19b82c3ce7c6a9a0275f981966b78d0f1d5977acfc5da\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cc1b41d65298f8986fa477bbe4462be91f61b46be4420fac49fe9b1cff91d4d3\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-10T12:15:57Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1210 12:15:46.775770 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1210 12:15:46.777515 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-772356655/tls.crt::/tmp/serving-cert-772356655/tls.key\\\\\\\"\\\\nI1210 12:15:55.300683 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1210 12:15:55.311834 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1210 12:15:55.311870 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1210 12:15:55.311907 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1210 12:15:55.311937 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1210 12:15:55.334823 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1210 12:15:55.334877 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1210 12:15:55.334889 1 secure_serving.go:69] Use of insecure cipher 
'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1210 12:15:55.334913 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1210 12:15:55.334923 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1210 12:15:55.334932 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1210 12:15:55.334948 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1210 12:15:55.335346 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1210 12:15:55.337241 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-10T12:15:36Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://63c6b92fa349dafbaaa4e2700676f8c1c53627fc703dba25853c7457bf7eb439\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:34Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9e2628add435f4ba7d9b92d1f4e9cf619c37e8dc954d6fce13ef72581ae775e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9e2628add435f4ba7d9b92d1f4e9cf619c37e8dc954d6fce13ef72581ae775e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T12:15:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T12:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T12:15:32Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:22Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:22 crc kubenswrapper[4689]: I1210 12:16:22.840762 4689 status_manager.go:875] "Failed to update status 
for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"18bf8be2-3022-4c8b-a062-e12ddac113a8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://44524b486bd4c6b69eb1219605d0545eaa129605484dfbca87ca741ce71c2991\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://604acbb51bb890eb420955051c89aa5a8f2005ce63f1fe39c09c35dc3c8b4470\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d4addfb661bd4ba43817e6e5db29f30589839e5afe46a8f271e43c98920efdc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a68967b286ccc9c486e05b03201574bfa811ee68
b0cbcbb2b0b1a379f5c1bab\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T12:15:32Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:22Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:22 crc kubenswrapper[4689]: I1210 12:16:22.853926 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06f8726511d5ea564b457eeb6de5b75275d78ee946595d8fbe37e575f3a1b4ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://879d2e122b3f677f755d79b93cd0abed60360a7b58bd685fbc229157c3358bbf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41a
c2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:22Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:22 crc kubenswrapper[4689]: I1210 12:16:22.865688 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:57Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:22Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:22 crc kubenswrapper[4689]: I1210 12:16:22.878768 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-6ffqh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7908b49b-e3d2-4d30-95e0-467f5542d445\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34ba576e7b07f1a070450f2a750ed7f9e1ed7b3023f13340ffadbdbbc2f564bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:16:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8t2fq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5081b88bdb2e39135bca6cf70283aeb3f9541d28ab7336c5fa9d17f28496f7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2
099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:16:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8t2fq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T12:16:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-6ffqh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:22Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:22 crc kubenswrapper[4689]: I1210 12:16:22.890258 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81ec49dd709fb88fdd4a831be61a82471be37ad10b0dca1cb0e34468ef5e0e3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:16:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:22Z is after 
2025-08-24T17:21:41Z" Dec 10 12:16:22 crc kubenswrapper[4689]: I1210 12:16:22.899241 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-77s7q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9f25cd73-d88f-4d52-93b6-483589dc4ac4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8ffa3e3c20704e59130d99f82bf7c73982dadfcd06b8b666265324aeb9115ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:16:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kwnnt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T12:16:01Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-77s7q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:22Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:22 crc kubenswrapper[4689]: I1210 12:16:22.902916 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:16:22 crc kubenswrapper[4689]: I1210 12:16:22.902960 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:16:22 crc kubenswrapper[4689]: I1210 12:16:22.902999 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:16:22 crc kubenswrapper[4689]: I1210 12:16:22.903017 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:16:22 crc kubenswrapper[4689]: I1210 12:16:22.903029 4689 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:22Z","lastTransitionTime":"2025-12-10T12:16:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 12:16:22 crc kubenswrapper[4689]: I1210 12:16:22.910800 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:22Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:22 crc kubenswrapper[4689]: I1210 12:16:22.924897 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-r6wmt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3713b4f8-2ee3-4078-859a-dca17076f9a6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://41b7cefca05f1180d687337ba2a408d0db12aad394a7e9619e004758e5d79ca0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:16:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mo
untPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lmrn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T12:16:01Z\\\"}}\" for pod \"openshift-multus\"/\"multus-r6wmt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:22Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:22 crc kubenswrapper[4689]: I1210 12:16:22.936685 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-dk9hq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4aff4b91-a4af-46bd-93ba-a7c1356fc498\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e400561fd4aa4adc6f0733a71b55aff1a6163a71bf34147aeeb41f95001d8222\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:16:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c2rw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.
126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T12:16:04Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-dk9hq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:22Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:22 crc kubenswrapper[4689]: I1210 12:16:22.948151 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-2h8hs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a7c472c-c2dd-4a9d-aefb-48ac5cc196f8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kcnt5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kcnt5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T12:16:15Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-2h8hs\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:22Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:22 crc kubenswrapper[4689]: I1210 12:16:22.964857 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f310ccc9-5093-44fe-8271-32ffed1e4596\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be1e74c974ed4454098265113e97e5d5d27b2ddb6ff5d89cb891c1686290f8f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://14c34a55854008d99289ec6ff7df4ab7b8c60658569094a88e6740ceb28714d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://66af8611123a29e82941af3a1facb45f06c358ffb277bd037af034ae97364d56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\
":\\\"2025-12-10T12:15:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b2e1ebfdbacde06d36ebe61a04c2977a980def50e80715de5f8a9a292e661915\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b2e1ebfdbacde06d36ebe61a04c2977a980def50e80715de5f8a9a292e661915\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T12:15:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T12:15:33Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T12:15:32Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:22Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:22 crc kubenswrapper[4689]: I1210 12:16:22.980255 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef7248d1db79267842514a13623e9e08f8e412e4af33a70b4fceafe7a9e43392\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:22Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:23 crc kubenswrapper[4689]: I1210 12:16:23.006171 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:16:23 crc kubenswrapper[4689]: I1210 12:16:23.006225 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:16:23 crc kubenswrapper[4689]: I1210 12:16:23.006241 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:16:23 crc kubenswrapper[4689]: I1210 12:16:23.006262 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:16:23 crc kubenswrapper[4689]: I1210 12:16:23.006276 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:23Z","lastTransitionTime":"2025-12-10T12:16:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Dec 10 12:16:23 crc kubenswrapper[4689]: I1210 12:16:23.108667 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:16:23 crc kubenswrapper[4689]: I1210 12:16:23.108709 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:16:23 crc kubenswrapper[4689]: I1210 12:16:23.108921 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:16:23 crc kubenswrapper[4689]: I1210 12:16:23.109069 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:16:23 crc kubenswrapper[4689]: I1210 12:16:23.109136 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:23Z","lastTransitionTime":"2025-12-10T12:16:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 12:16:23 crc kubenswrapper[4689]: I1210 12:16:23.212113 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:16:23 crc kubenswrapper[4689]: I1210 12:16:23.212174 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:16:23 crc kubenswrapper[4689]: I1210 12:16:23.212187 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:16:23 crc kubenswrapper[4689]: I1210 12:16:23.212212 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:16:23 crc kubenswrapper[4689]: I1210 12:16:23.212227 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:23Z","lastTransitionTime":"2025-12-10T12:16:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 12:16:23 crc kubenswrapper[4689]: I1210 12:16:23.315124 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:16:23 crc kubenswrapper[4689]: I1210 12:16:23.315193 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:16:23 crc kubenswrapper[4689]: I1210 12:16:23.315213 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:16:23 crc kubenswrapper[4689]: I1210 12:16:23.315240 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:16:23 crc kubenswrapper[4689]: I1210 12:16:23.315257 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:23Z","lastTransitionTime":"2025-12-10T12:16:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 12:16:23 crc kubenswrapper[4689]: I1210 12:16:23.418358 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:16:23 crc kubenswrapper[4689]: I1210 12:16:23.418431 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:16:23 crc kubenswrapper[4689]: I1210 12:16:23.418455 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:16:23 crc kubenswrapper[4689]: I1210 12:16:23.418479 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:16:23 crc kubenswrapper[4689]: I1210 12:16:23.418497 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:23Z","lastTransitionTime":"2025-12-10T12:16:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 12:16:23 crc kubenswrapper[4689]: I1210 12:16:23.497439 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 10 12:16:23 crc kubenswrapper[4689]: I1210 12:16:23.497447 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2h8hs" Dec 10 12:16:23 crc kubenswrapper[4689]: E1210 12:16:23.497660 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 10 12:16:23 crc kubenswrapper[4689]: I1210 12:16:23.497461 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 10 12:16:23 crc kubenswrapper[4689]: E1210 12:16:23.497835 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2h8hs" podUID="3a7c472c-c2dd-4a9d-aefb-48ac5cc196f8" Dec 10 12:16:23 crc kubenswrapper[4689]: E1210 12:16:23.497912 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 10 12:16:23 crc kubenswrapper[4689]: I1210 12:16:23.521497 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:16:23 crc kubenswrapper[4689]: I1210 12:16:23.521570 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:16:23 crc kubenswrapper[4689]: I1210 12:16:23.521595 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:16:23 crc kubenswrapper[4689]: I1210 12:16:23.521627 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:16:23 crc kubenswrapper[4689]: I1210 12:16:23.521652 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:23Z","lastTransitionTime":"2025-12-10T12:16:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 12:16:23 crc kubenswrapper[4689]: I1210 12:16:23.586870 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/3a7c472c-c2dd-4a9d-aefb-48ac5cc196f8-metrics-certs\") pod \"network-metrics-daemon-2h8hs\" (UID: \"3a7c472c-c2dd-4a9d-aefb-48ac5cc196f8\") " pod="openshift-multus/network-metrics-daemon-2h8hs" Dec 10 12:16:23 crc kubenswrapper[4689]: E1210 12:16:23.587189 4689 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 10 12:16:23 crc kubenswrapper[4689]: E1210 12:16:23.587336 4689 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3a7c472c-c2dd-4a9d-aefb-48ac5cc196f8-metrics-certs podName:3a7c472c-c2dd-4a9d-aefb-48ac5cc196f8 nodeName:}" failed. No retries permitted until 2025-12-10 12:16:31.587300183 +0000 UTC m=+59.375381361 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/3a7c472c-c2dd-4a9d-aefb-48ac5cc196f8-metrics-certs") pod "network-metrics-daemon-2h8hs" (UID: "3a7c472c-c2dd-4a9d-aefb-48ac5cc196f8") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 10 12:16:23 crc kubenswrapper[4689]: I1210 12:16:23.625667 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:16:23 crc kubenswrapper[4689]: I1210 12:16:23.625742 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:16:23 crc kubenswrapper[4689]: I1210 12:16:23.625765 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:16:23 crc kubenswrapper[4689]: I1210 12:16:23.625793 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:16:23 crc kubenswrapper[4689]: I1210 12:16:23.625814 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:23Z","lastTransitionTime":"2025-12-10T12:16:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 12:16:23 crc kubenswrapper[4689]: I1210 12:16:23.728334 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:16:23 crc kubenswrapper[4689]: I1210 12:16:23.728420 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:16:23 crc kubenswrapper[4689]: I1210 12:16:23.728431 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:16:23 crc kubenswrapper[4689]: I1210 12:16:23.728450 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:16:23 crc kubenswrapper[4689]: I1210 12:16:23.728462 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:23Z","lastTransitionTime":"2025-12-10T12:16:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
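Has your network provider started?"}

The MountVolume.SetUp failure above is not fatal by itself: the secret openshift-multus/metrics-daemon-secret is simply not in the kubelet's object cache yet ("not registered"), so nestedpendingoperations parks the mount and schedules a retry 8 seconds out. That 8s is one visible step of a doubling backoff. The sketch below reproduces the schedule under the assumption of a 500 ms initial delay and a cap of roughly two minutes; those constants are assumptions for illustration, not values confirmed by this log, but they do place the logged 8s at the fifth consecutive failure.

    // backoff: model the doubling retry delay behind "durationBeforeRetry 8s".
    package main

    import (
        "fmt"
        "time"
    )

    type backoff struct {
        delay time.Duration
    }

    func (b *backoff) next() time.Duration {
        const initial = 500 * time.Millisecond       // assumed starting delay
        const maxDelay = 2*time.Minute + 2*time.Second // assumed cap
        if b.delay == 0 {
            b.delay = initial
            return b.delay
        }
        b.delay *= 2
        if b.delay > maxDelay {
            b.delay = maxDelay
        }
        return b.delay
    }

    func main() {
        var b backoff
        for i := 1; i <= 6; i++ {
            fmt.Printf("failure %d: retry in %s\n", i, b.next())
        }
        // failure 5 prints "retry in 8s", matching durationBeforeRetry above.
    }

Once the secret lands in the cache, the next scheduled retry should mount it and the delay resets.
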
Dec 10 12:16:23 crc kubenswrapper[4689]: I1210 12:16:23.831479 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:16:23 crc kubenswrapper[4689]: I1210 12:16:23.831579 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:16:23 crc kubenswrapper[4689]: I1210 12:16:23.831602 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:16:23 crc kubenswrapper[4689]: I1210 12:16:23.831636 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:16:23 crc kubenswrapper[4689]: I1210 12:16:23.831662 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:23Z","lastTransitionTime":"2025-12-10T12:16:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 12:16:23 crc kubenswrapper[4689]: I1210 12:16:23.935517 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:16:23 crc kubenswrapper[4689]: I1210 12:16:23.935607 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:16:23 crc kubenswrapper[4689]: I1210 12:16:23.935632 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:16:23 crc kubenswrapper[4689]: I1210 12:16:23.935663 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:16:23 crc kubenswrapper[4689]: I1210 12:16:23.935684 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:23Z","lastTransitionTime":"2025-12-10T12:16:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 12:16:24 crc kubenswrapper[4689]: I1210 12:16:24.039624 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:16:24 crc kubenswrapper[4689]: I1210 12:16:24.039705 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:16:24 crc kubenswrapper[4689]: I1210 12:16:24.039790 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:16:24 crc kubenswrapper[4689]: I1210 12:16:24.039822 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:16:24 crc kubenswrapper[4689]: I1210 12:16:24.039845 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:24Z","lastTransitionTime":"2025-12-10T12:16:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 12:16:24 crc kubenswrapper[4689]: I1210 12:16:24.142834 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:16:24 crc kubenswrapper[4689]: I1210 12:16:24.142915 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:16:24 crc kubenswrapper[4689]: I1210 12:16:24.142933 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:16:24 crc kubenswrapper[4689]: I1210 12:16:24.142965 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:16:24 crc kubenswrapper[4689]: I1210 12:16:24.143024 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:24Z","lastTransitionTime":"2025-12-10T12:16:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 12:16:24 crc kubenswrapper[4689]: I1210 12:16:24.246113 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:16:24 crc kubenswrapper[4689]: I1210 12:16:24.246381 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:16:24 crc kubenswrapper[4689]: I1210 12:16:24.246393 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:16:24 crc kubenswrapper[4689]: I1210 12:16:24.246411 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:16:24 crc kubenswrapper[4689]: I1210 12:16:24.246426 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:24Z","lastTransitionTime":"2025-12-10T12:16:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 12:16:24 crc kubenswrapper[4689]: I1210 12:16:24.349117 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:16:24 crc kubenswrapper[4689]: I1210 12:16:24.349201 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:16:24 crc kubenswrapper[4689]: I1210 12:16:24.349229 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:16:24 crc kubenswrapper[4689]: I1210 12:16:24.349258 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:16:24 crc kubenswrapper[4689]: I1210 12:16:24.349276 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:24Z","lastTransitionTime":"2025-12-10T12:16:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
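Has your network provider started?"}

The NotReady heartbeats repeating every ~100 ms above all trace back to the one message the runtime keeps returning: NetworkReady=false because no CNI configuration file exists in /etc/kubernetes/cni/net.d/. Until the network provider writes a config there, sandbox creation is refused, which is also why the "No sandbox for pod can be found" entries recur for the same pods. The sketch below is a rough stand-in for that readiness check, assuming the conventional .conf/.conflist/.json extensions; it is not the kubelet's actual libcni code.

    // cniwatch: poll the CNI config directory named in the log message until
    // a network configuration appears.
    package main

    import (
        "fmt"
        "os"
        "path/filepath"
        "time"
    )

    const confDir = "/etc/kubernetes/cni/net.d" // path taken from the log message

    func hasCNIConfig() bool {
        for _, pat := range []string{"*.conf", "*.conflist", "*.json"} {
            if matches, _ := filepath.Glob(filepath.Join(confDir, pat)); len(matches) > 0 {
                return true
            }
        }
        return false
    }

    func main() {
        for {
            if hasCNIConfig() {
                fmt.Println("CNI configuration present; network provider has started")
                return
            }
            fmt.Fprintln(os.Stderr, "no CNI configuration file yet; retrying in 5s")
            time.Sleep(5 * time.Second)
        }
    }
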
Dec 10 12:16:24 crc kubenswrapper[4689]: I1210 12:16:24.451786 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:16:24 crc kubenswrapper[4689]: I1210 12:16:24.451864 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:16:24 crc kubenswrapper[4689]: I1210 12:16:24.451890 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:16:24 crc kubenswrapper[4689]: I1210 12:16:24.451920 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:16:24 crc kubenswrapper[4689]: I1210 12:16:24.451939 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:24Z","lastTransitionTime":"2025-12-10T12:16:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 12:16:24 crc kubenswrapper[4689]: I1210 12:16:24.498070 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 10 12:16:24 crc kubenswrapper[4689]: E1210 12:16:24.498684 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 10 12:16:24 crc kubenswrapper[4689]: I1210 12:16:24.555232 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:16:24 crc kubenswrapper[4689]: I1210 12:16:24.555272 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:16:24 crc kubenswrapper[4689]: I1210 12:16:24.555293 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:16:24 crc kubenswrapper[4689]: I1210 12:16:24.555316 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:16:24 crc kubenswrapper[4689]: I1210 12:16:24.555333 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:24Z","lastTransitionTime":"2025-12-10T12:16:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 12:16:24 crc kubenswrapper[4689]: I1210 12:16:24.659276 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:16:24 crc kubenswrapper[4689]: I1210 12:16:24.659756 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:16:24 crc kubenswrapper[4689]: I1210 12:16:24.659921 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:16:24 crc kubenswrapper[4689]: I1210 12:16:24.660091 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:16:24 crc kubenswrapper[4689]: I1210 12:16:24.660220 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:24Z","lastTransitionTime":"2025-12-10T12:16:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 12:16:24 crc kubenswrapper[4689]: I1210 12:16:24.765570 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:16:24 crc kubenswrapper[4689]: I1210 12:16:24.765902 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:16:24 crc kubenswrapper[4689]: I1210 12:16:24.766056 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:16:24 crc kubenswrapper[4689]: I1210 12:16:24.766202 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:16:24 crc kubenswrapper[4689]: I1210 12:16:24.766440 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:24Z","lastTransitionTime":"2025-12-10T12:16:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 12:16:24 crc kubenswrapper[4689]: I1210 12:16:24.870006 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:16:24 crc kubenswrapper[4689]: I1210 12:16:24.870250 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:16:24 crc kubenswrapper[4689]: I1210 12:16:24.870367 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:16:24 crc kubenswrapper[4689]: I1210 12:16:24.870459 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:16:24 crc kubenswrapper[4689]: I1210 12:16:24.870534 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:24Z","lastTransitionTime":"2025-12-10T12:16:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 12:16:24 crc kubenswrapper[4689]: I1210 12:16:24.973531 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:16:24 crc kubenswrapper[4689]: I1210 12:16:24.973571 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:16:24 crc kubenswrapper[4689]: I1210 12:16:24.973585 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:16:24 crc kubenswrapper[4689]: I1210 12:16:24.973611 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:16:24 crc kubenswrapper[4689]: I1210 12:16:24.973625 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:24Z","lastTransitionTime":"2025-12-10T12:16:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 12:16:25 crc kubenswrapper[4689]: I1210 12:16:25.076094 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:16:25 crc kubenswrapper[4689]: I1210 12:16:25.076142 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:16:25 crc kubenswrapper[4689]: I1210 12:16:25.076152 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:16:25 crc kubenswrapper[4689]: I1210 12:16:25.076170 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:16:25 crc kubenswrapper[4689]: I1210 12:16:25.076215 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:25Z","lastTransitionTime":"2025-12-10T12:16:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 12:16:25 crc kubenswrapper[4689]: I1210 12:16:25.180210 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:16:25 crc kubenswrapper[4689]: I1210 12:16:25.180274 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:16:25 crc kubenswrapper[4689]: I1210 12:16:25.180286 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:16:25 crc kubenswrapper[4689]: I1210 12:16:25.180312 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:16:25 crc kubenswrapper[4689]: I1210 12:16:25.180324 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:25Z","lastTransitionTime":"2025-12-10T12:16:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
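Has your network provider started?"}

To reduce this flood to something countable, note that each kubenswrapper payload carries a standard klog header: severity letter, MMDD date, wall-clock time, PID, then source file:line, followed by the structured message. The sketch below splits that header with a regular expression; the pattern reflects the usual klog layout and is an assumption to adjust if your lines differ.

    // klogparse: slice the klog header on a kubenswrapper payload line.
    package main

    import (
        "fmt"
        "regexp"
    )

    var klogHeader = regexp.MustCompile(`^([IWEF])(\d{4}) (\d{2}:\d{2}:\d{2}\.\d+)\s+(\d+)\s+([\w.]+:\d+)\] (.*)$`)

    func main() {
        line := `I1210 12:16:23.109136 4689 setters.go:603] "Node became not ready" node="crc"`
        m := klogHeader.FindStringSubmatch(line)
        if m == nil {
            fmt.Println("not a klog line")
            return
        }
        fmt.Printf("severity=%s date=%s time=%s pid=%s source=%s msg=%s\n",
            m[1], m[2], m[3], m[4], m[5], m[6])
    }

Grouping on the source field (setters.go:603, kubelet_node_status.go:724, pod_workers.go:1301) collapses the hundreds of repeats above into a handful of distinct messages.
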
Has your network provider started?"} Dec 10 12:16:25 crc kubenswrapper[4689]: I1210 12:16:25.284237 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:16:25 crc kubenswrapper[4689]: I1210 12:16:25.284332 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:16:25 crc kubenswrapper[4689]: I1210 12:16:25.284355 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:16:25 crc kubenswrapper[4689]: I1210 12:16:25.284381 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:16:25 crc kubenswrapper[4689]: I1210 12:16:25.284398 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:25Z","lastTransitionTime":"2025-12-10T12:16:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 12:16:25 crc kubenswrapper[4689]: I1210 12:16:25.388006 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:16:25 crc kubenswrapper[4689]: I1210 12:16:25.388064 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:16:25 crc kubenswrapper[4689]: I1210 12:16:25.388080 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:16:25 crc kubenswrapper[4689]: I1210 12:16:25.388107 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:16:25 crc kubenswrapper[4689]: I1210 12:16:25.388124 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:25Z","lastTransitionTime":"2025-12-10T12:16:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 12:16:25 crc kubenswrapper[4689]: I1210 12:16:25.491157 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:16:25 crc kubenswrapper[4689]: I1210 12:16:25.491246 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:16:25 crc kubenswrapper[4689]: I1210 12:16:25.491271 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:16:25 crc kubenswrapper[4689]: I1210 12:16:25.491309 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:16:25 crc kubenswrapper[4689]: I1210 12:16:25.491331 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:25Z","lastTransitionTime":"2025-12-10T12:16:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 12:16:25 crc kubenswrapper[4689]: I1210 12:16:25.498151 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 10 12:16:25 crc kubenswrapper[4689]: I1210 12:16:25.498173 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 10 12:16:25 crc kubenswrapper[4689]: E1210 12:16:25.498328 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 10 12:16:25 crc kubenswrapper[4689]: I1210 12:16:25.498395 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2h8hs" Dec 10 12:16:25 crc kubenswrapper[4689]: E1210 12:16:25.499048 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2h8hs" podUID="3a7c472c-c2dd-4a9d-aefb-48ac5cc196f8" Dec 10 12:16:25 crc kubenswrapper[4689]: E1210 12:16:25.499254 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 10 12:16:25 crc kubenswrapper[4689]: I1210 12:16:25.500894 4689 scope.go:117] "RemoveContainer" containerID="8d09cb58430de4c8d23c0dc3bbe3de52bdf4b79da14b87e8ac847bc47439a5a9" Dec 10 12:16:25 crc kubenswrapper[4689]: I1210 12:16:25.594196 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:16:25 crc kubenswrapper[4689]: I1210 12:16:25.594263 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:16:25 crc kubenswrapper[4689]: I1210 12:16:25.594282 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:16:25 crc kubenswrapper[4689]: I1210 12:16:25.594314 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:16:25 crc kubenswrapper[4689]: I1210 12:16:25.594343 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:25Z","lastTransitionTime":"2025-12-10T12:16:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 12:16:25 crc kubenswrapper[4689]: I1210 12:16:25.698133 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:16:25 crc kubenswrapper[4689]: I1210 12:16:25.698176 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:16:25 crc kubenswrapper[4689]: I1210 12:16:25.698186 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:16:25 crc kubenswrapper[4689]: I1210 12:16:25.698208 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:16:25 crc kubenswrapper[4689]: I1210 12:16:25.698223 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:25Z","lastTransitionTime":"2025-12-10T12:16:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 12:16:25 crc kubenswrapper[4689]: I1210 12:16:25.801641 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:16:25 crc kubenswrapper[4689]: I1210 12:16:25.801704 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:16:25 crc kubenswrapper[4689]: I1210 12:16:25.801725 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:16:25 crc kubenswrapper[4689]: I1210 12:16:25.801750 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:16:25 crc kubenswrapper[4689]: I1210 12:16:25.801767 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:25Z","lastTransitionTime":"2025-12-10T12:16:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 12:16:25 crc kubenswrapper[4689]: I1210 12:16:25.905540 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:16:25 crc kubenswrapper[4689]: I1210 12:16:25.905593 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:16:25 crc kubenswrapper[4689]: I1210 12:16:25.905605 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:16:25 crc kubenswrapper[4689]: I1210 12:16:25.905627 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:16:25 crc kubenswrapper[4689]: I1210 12:16:25.905640 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:25Z","lastTransitionTime":"2025-12-10T12:16:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 12:16:26 crc kubenswrapper[4689]: I1210 12:16:26.033121 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:16:26 crc kubenswrapper[4689]: I1210 12:16:26.033169 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:16:26 crc kubenswrapper[4689]: I1210 12:16:26.033179 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:16:26 crc kubenswrapper[4689]: I1210 12:16:26.033197 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:16:26 crc kubenswrapper[4689]: I1210 12:16:26.033210 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:26Z","lastTransitionTime":"2025-12-10T12:16:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 12:16:26 crc kubenswrapper[4689]: I1210 12:16:26.135439 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:16:26 crc kubenswrapper[4689]: I1210 12:16:26.135497 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:16:26 crc kubenswrapper[4689]: I1210 12:16:26.135510 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:16:26 crc kubenswrapper[4689]: I1210 12:16:26.135533 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:16:26 crc kubenswrapper[4689]: I1210 12:16:26.135546 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:26Z","lastTransitionTime":"2025-12-10T12:16:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 12:16:26 crc kubenswrapper[4689]: I1210 12:16:26.193226 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-d5s24_b434fe8e-c4c2-4979-a9b6-8561523c2d9d/ovnkube-controller/1.log" Dec 10 12:16:26 crc kubenswrapper[4689]: I1210 12:16:26.195912 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-d5s24" event={"ID":"b434fe8e-c4c2-4979-a9b6-8561523c2d9d","Type":"ContainerStarted","Data":"4f88a12855fe2301c439492cf5773c7088e33a7d6bcca5354ebcfd8d7a64bcf9"} Dec 10 12:16:26 crc kubenswrapper[4689]: I1210 12:16:26.197192 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-d5s24" Dec 10 12:16:26 crc kubenswrapper[4689]: I1210 12:16:26.217863 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f310ccc9-5093-44fe-8271-32ffed1e4596\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be1e74c974ed4454098265113e97e5d5d27b2ddb6ff5d89cb891c1686290f8f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://14c34a55854008d99289ec6ff7df4ab7b8c60658569094a88e6740ceb28714d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://66af8611123a29e82941af3a1facb45f06c3
58ffb277bd037af034ae97364d56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b2e1ebfdbacde06d36ebe61a04c2977a980def50e80715de5f8a9a292e661915\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b2e1ebfdbacde06d36ebe61a04c2977a980def50e80715de5f8a9a292e661915\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T12:15:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T12:15:33Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T12:15:32Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:26Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:26 crc kubenswrapper[4689]: I1210 12:16:26.238130 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:16:26 crc kubenswrapper[4689]: I1210 12:16:26.238195 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:16:26 crc kubenswrapper[4689]: I1210 12:16:26.238215 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:16:26 crc kubenswrapper[4689]: I1210 12:16:26.238239 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:16:26 crc kubenswrapper[4689]: I1210 12:16:26.238257 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:26Z","lastTransitionTime":"2025-12-10T12:16:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 12:16:26 crc kubenswrapper[4689]: I1210 12:16:26.240028 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef7248d1db79267842514a13623e9e08f8e412e4af33a70b4fceafe7a9e43392\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:26Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:26 crc kubenswrapper[4689]: I1210 12:16:26.254924 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:26Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:26 crc kubenswrapper[4689]: I1210 12:16:26.272253 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-r6wmt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3713b4f8-2ee3-4078-859a-dca17076f9a6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://41b7cefca05f1180d687337ba2a408d0db12aad394a7e9619e004758e5d79ca0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:16:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\
\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lmrn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T12:16:01Z\\\"}}\" for pod \"openshift-multus\"/\"multus-r6wmt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:26Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:26 crc kubenswrapper[4689]: I1210 12:16:26.288109 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-dk9hq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4aff4b91-a4af-46bd-93ba-a7c1356fc498\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e400561fd4aa4adc6f0733a71b55aff1a6163a71bf34147aeeb41f95001d8222\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:16:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c2rw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T12:16:04Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-dk9hq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:26Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:26 crc kubenswrapper[4689]: I1210 12:16:26.303264 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-2h8hs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a7c472c-c2dd-4a9d-aefb-48ac5cc196f8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kcnt5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kcnt5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T12:16:15Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-2h8hs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:26Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:26 crc kubenswrapper[4689]: I1210 12:16:26.321304 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"948e2421-6bdf-45d9-b484-3d7cfcbeff5f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73fb5dc9c58a2a2027866853aecf404bcc2babdb9e5bbd3cf0efcb16f38ba351\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e167a00c3a4e40154ca58094cba41a896104fd81044158589ed4d7a06dba9e7d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://394f62bb6c4fa6d9b63f8404087ff96594e2e38240c6c7886ad71fbb7863681f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d3818f3ff6d23a4b5f19b82c3ce7c6a9a0275f981966b78d0f1d5977acfc5da\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cc1b41d65298f8986fa477bbe4462be91f61b46be4420fac49fe9b1cff91d4d3\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-10T12:15:57Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1210 12:15:46.775770 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1210 12:15:46.777515 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-772356655/tls.crt::/tmp/serving-cert-772356655/tls.key\\\\\\\"\\\\nI1210 12:15:55.300683 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1210 12:15:55.311834 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1210 12:15:55.311870 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1210 12:15:55.311907 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1210 12:15:55.311937 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1210 12:15:55.334823 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1210 12:15:55.334877 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1210 12:15:55.334889 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1210 12:15:55.334913 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1210 12:15:55.334923 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1210 12:15:55.334932 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1210 12:15:55.334948 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1210 12:15:55.335346 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1210 12:15:55.337241 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-10T12:15:36Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://63c6b92fa349dafbaaa4e2700676f8c1c53627fc703dba25853c7457bf7eb439\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:34Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9e2628add435f4ba7d9b92d1f4e9cf619c37e8dc954d6fce13ef72581ae775e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9e2628add435f4ba7d9b92d1f4e9cf619c37e8dc954d6fce13ef72581ae775e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T12:15:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T12:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T12:15:32Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:26Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:26 crc kubenswrapper[4689]: I1210 12:16:26.341243 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:16:26 crc kubenswrapper[4689]: I1210 12:16:26.341296 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:16:26 crc kubenswrapper[4689]: I1210 12:16:26.341309 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:16:26 crc kubenswrapper[4689]: I1210 12:16:26.341328 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:16:26 crc kubenswrapper[4689]: I1210 12:16:26.341342 4689 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:26Z","lastTransitionTime":"2025-12-10T12:16:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 12:16:26 crc kubenswrapper[4689]: I1210 12:16:26.342073 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"18bf8be2-3022-4c8b-a062-e12ddac113a8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://44524b486bd4c6b69eb1219605d0545eaa129605484dfbca87ca741ce71c2991\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://604acbb51bb890eb420955051c89aa5a8f2005ce63f1fe39c09c35dc3c8b4470\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d4addfb661bd4ba43817e6e5db29f30589839e5afe46a8f271e43c98920efdc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastS
tate\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a68967b286ccc9c486e05b03201574bfa811ee68b0cbcbb2b0b1a379f5c1bab\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T12:15:32Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:26Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:26 crc kubenswrapper[4689]: I1210 12:16:26.359304 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:57Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:26Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:26 crc kubenswrapper[4689]: I1210 12:16:26.378757 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-db6zk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a41ebdcd-910f-4669-992d-296e1a92162b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://21eff13bf08e5631078b18a6e63eea07f7ae0881f4d340b92273d47e443fc71c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:16:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68btn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ffc5e2e1cb81aa530273fc5c44b4eb25b760dfffa203e312280e7a1c0150505\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\
\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:16:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68btn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T12:16:01Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-db6zk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:26Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:26 crc kubenswrapper[4689]: I1210 12:16:26.402048 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-k7kbl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bf732b59-88ab-4673-9d0c-e479b9138b30\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6885c72d7b303bd0ad949b6b9f582f3c1a503d3ac4d1871ce164351cc5231233\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:16:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6jq2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b32c1925876e067185b141b3e107d582b5ae3b422965cded935fc67935907244\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\
"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b32c1925876e067185b141b3e107d582b5ae3b422965cded935fc67935907244\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T12:16:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T12:16:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6jq2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://56edfde6c25c67ed13c859224563601ca60ff86122c0ccfd0eafd009f56c2c6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://56edfde6c25c67ed13c859224563601ca60ff86122c0ccfd0eafd009f56c2c6a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T12:16:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T12:16:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6jq2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6972bdb1edc7d0216e984f43dd93319f5fbe91dbbf251fddcab234566c12694\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6972bdb1edc7d0216e984f43dd93319f5fbe91dbbf251fddcab234566c12694\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T12:16:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T12:16:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount
\\\",\\\"name\\\":\\\"kube-api-access-6jq2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://92ed0a4af61a7d20d96a8628e34b3b07cf93fb88bb8c67a2db58c140977d4b8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92ed0a4af61a7d20d96a8628e34b3b07cf93fb88bb8c67a2db58c140977d4b8f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T12:16:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T12:16:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6jq2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a00343d8419f3bd26ca5eebc37892fa34c893db822a472b2d16b79b9a886130\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3a00343d8419f3bd26ca5eebc37892fa34c893db822a472b2d16b79b9a886130\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T12:16:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T12:16:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6jq2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae1b0abfe7d118fce3906b0fb8609b89bb5a7d9a72e77c989248a37fbcaca589\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ae1b0abfe7d118fce3906b0fb8609b89bb5a7d9a72e77c989248a37fbcaca589\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T12:16:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T12:16:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\
"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6jq2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T12:16:01Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-k7kbl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:26Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:26 crc kubenswrapper[4689]: I1210 12:16:26.432684 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-d5s24" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b434fe8e-c4c2-4979-a9b6-8561523c2d9d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4dd21d5fefe31117dd9f13896400256a11840488886f8455dc177c0389e942a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:16:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgcnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://69f1e8960af0a51f61c14bd54dc3d3be7189911242699061b6a5e265bf23da21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:16:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgcnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6523d85b2b13f0025f328e66fc54d988a262d66575656c50d792bebc6a422301\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:16:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgcnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4faf578e1c2dcab798cd856befbd2dfc061f7e003ead2304d437041d5ee1dd07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:16:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgcnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eebe28cf16767ec011f1e571cdb5f6babf931f4444e92fccf2f1bb5aa5deeec9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:16:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgcnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://17d173f3388ef7471022163cbdf3e0a2454e42b1ed621b15512414504b2d9723\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:16:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgcnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f88a12855fe2301c439492cf5773c7088e33a7d
6bcca5354ebcfd8d7a64bcf9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8d09cb58430de4c8d23c0dc3bbe3de52bdf4b79da14b87e8ac847bc47439a5a9\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-10T12:16:12Z\\\",\\\"message\\\":\\\"2:16:12.092815 6107 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1210 12:16:12.092841 6107 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1210 12:16:12.092842 6107 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1210 12:16:12.092857 6107 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1210 12:16:12.092838 6107 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1210 12:16:12.092885 6107 handler.go:208] Removed *v1.Pod event handler 6\\\\nI1210 12:16:12.092884 6107 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1210 12:16:12.092870 6107 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1210 12:16:12.092906 6107 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1210 12:16:12.092930 6107 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1210 12:16:12.092937 6107 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1210 12:16:12.092944 6107 handler.go:208] Removed *v1.Node event handler 2\\\\nI1210 12:16:12.092954 6107 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1210 12:16:12.092964 6107 factory.go:656] Stopping watch factory\\\\nI1210 12:16:12.092962 6107 handler.go:208] Removed *v1.Node event handler 7\\\\nI1210 12:16:12.092991 6107 handler.go:208] Removed *v1.Namespace 
ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-10T12:16:11Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:16:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgcnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e28b779b8a8693084ab0997decc53b3a8eba61bef24ac90ff251d915f086568c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:16:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgcnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"con
tainerID\\\":\\\"cri-o://b5b9a4f871b33ab8d7dbb8148e8a118858ae0d529510a8aebda0a3d9db97c137\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b5b9a4f871b33ab8d7dbb8148e8a118858ae0d529510a8aebda0a3d9db97c137\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T12:16:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T12:16:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgcnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T12:16:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-d5s24\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:26Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:26 crc kubenswrapper[4689]: I1210 12:16:26.443794 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:16:26 crc kubenswrapper[4689]: I1210 12:16:26.443830 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:16:26 crc kubenswrapper[4689]: I1210 12:16:26.443838 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:16:26 crc kubenswrapper[4689]: I1210 12:16:26.443856 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:16:26 crc kubenswrapper[4689]: I1210 12:16:26.443868 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:26Z","lastTransitionTime":"2025-12-10T12:16:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 12:16:26 crc kubenswrapper[4689]: I1210 12:16:26.450241 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:57Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:26Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:26 crc kubenswrapper[4689]: I1210 12:16:26.467242 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-6ffqh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7908b49b-e3d2-4d30-95e0-467f5542d445\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34ba576e7b07f1a070450f2a750ed7f9e1ed7b3023f13340ffadbdbbc2f564bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:16:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8t2fq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5081b88bdb2e39135bca6cf70283aeb3f9541d28ab7336c5fa9d17f28496f7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:16:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8t2fq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T12:16:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-6ffqh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:26Z is after 2025-08-24T17:21:41Z" Dec 10 
12:16:26 crc kubenswrapper[4689]: I1210 12:16:26.489295 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06f8726511d5ea564b457eeb6de5b75275d78ee946595d8fbe37e575f3a1b4ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://879d2e122b3f677f755d79b93cd0abed60360a7b58bd685fbc229157c3358bbf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:26Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:26 crc kubenswrapper[4689]: I1210 12:16:26.497836 4689 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 10 12:16:26 crc kubenswrapper[4689]: E1210 12:16:26.498066 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 10 12:16:26 crc kubenswrapper[4689]: I1210 12:16:26.506081 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-77s7q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9f25cd73-d88f-4d52-93b6-483589dc4ac4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8ffa3e3c20704e59130d99f82bf7c73982dadfcd06b8b666265324aeb9115ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:16:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kwnnt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T12:16:01Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-77s7q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:26Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:26 crc kubenswrapper[4689]: I1210 12:16:26.522591 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81ec49dd709fb88fdd4a831be61a82471be37ad10b0dca1cb0e34468ef5e0e3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:16:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:26Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:26 crc kubenswrapper[4689]: I1210 12:16:26.546497 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:16:26 crc kubenswrapper[4689]: I1210 12:16:26.546565 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:16:26 crc kubenswrapper[4689]: I1210 12:16:26.546584 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:16:26 crc kubenswrapper[4689]: I1210 12:16:26.546612 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:16:26 crc kubenswrapper[4689]: I1210 12:16:26.546632 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:26Z","lastTransitionTime":"2025-12-10T12:16:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 12:16:26 crc kubenswrapper[4689]: I1210 12:16:26.649417 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:16:26 crc kubenswrapper[4689]: I1210 12:16:26.649489 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:16:26 crc kubenswrapper[4689]: I1210 12:16:26.649507 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:16:26 crc kubenswrapper[4689]: I1210 12:16:26.649533 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:16:26 crc kubenswrapper[4689]: I1210 12:16:26.649550 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:26Z","lastTransitionTime":"2025-12-10T12:16:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 12:16:26 crc kubenswrapper[4689]: I1210 12:16:26.752639 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:16:26 crc kubenswrapper[4689]: I1210 12:16:26.752737 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:16:26 crc kubenswrapper[4689]: I1210 12:16:26.752758 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:16:26 crc kubenswrapper[4689]: I1210 12:16:26.752818 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:16:26 crc kubenswrapper[4689]: I1210 12:16:26.752837 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:26Z","lastTransitionTime":"2025-12-10T12:16:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 12:16:26 crc kubenswrapper[4689]: I1210 12:16:26.856391 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:16:26 crc kubenswrapper[4689]: I1210 12:16:26.856451 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:16:26 crc kubenswrapper[4689]: I1210 12:16:26.856470 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:16:26 crc kubenswrapper[4689]: I1210 12:16:26.856497 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:16:26 crc kubenswrapper[4689]: I1210 12:16:26.856516 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:26Z","lastTransitionTime":"2025-12-10T12:16:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 12:16:26 crc kubenswrapper[4689]: I1210 12:16:26.959611 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:16:26 crc kubenswrapper[4689]: I1210 12:16:26.959674 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:16:26 crc kubenswrapper[4689]: I1210 12:16:26.959693 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:16:26 crc kubenswrapper[4689]: I1210 12:16:26.959718 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:16:26 crc kubenswrapper[4689]: I1210 12:16:26.959738 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:26Z","lastTransitionTime":"2025-12-10T12:16:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 12:16:27 crc kubenswrapper[4689]: I1210 12:16:27.063222 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:16:27 crc kubenswrapper[4689]: I1210 12:16:27.063429 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:16:27 crc kubenswrapper[4689]: I1210 12:16:27.063450 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:16:27 crc kubenswrapper[4689]: I1210 12:16:27.063472 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:16:27 crc kubenswrapper[4689]: I1210 12:16:27.063490 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:27Z","lastTransitionTime":"2025-12-10T12:16:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 12:16:27 crc kubenswrapper[4689]: I1210 12:16:27.167532 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:16:27 crc kubenswrapper[4689]: I1210 12:16:27.167612 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:16:27 crc kubenswrapper[4689]: I1210 12:16:27.167630 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:16:27 crc kubenswrapper[4689]: I1210 12:16:27.167680 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:16:27 crc kubenswrapper[4689]: I1210 12:16:27.167702 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:27Z","lastTransitionTime":"2025-12-10T12:16:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 12:16:27 crc kubenswrapper[4689]: I1210 12:16:27.204384 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-d5s24_b434fe8e-c4c2-4979-a9b6-8561523c2d9d/ovnkube-controller/2.log" Dec 10 12:16:27 crc kubenswrapper[4689]: I1210 12:16:27.205526 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-d5s24_b434fe8e-c4c2-4979-a9b6-8561523c2d9d/ovnkube-controller/1.log" Dec 10 12:16:27 crc kubenswrapper[4689]: I1210 12:16:27.224438 4689 generic.go:334] "Generic (PLEG): container finished" podID="b434fe8e-c4c2-4979-a9b6-8561523c2d9d" containerID="4f88a12855fe2301c439492cf5773c7088e33a7d6bcca5354ebcfd8d7a64bcf9" exitCode=1 Dec 10 12:16:27 crc kubenswrapper[4689]: I1210 12:16:27.224596 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-d5s24" event={"ID":"b434fe8e-c4c2-4979-a9b6-8561523c2d9d","Type":"ContainerDied","Data":"4f88a12855fe2301c439492cf5773c7088e33a7d6bcca5354ebcfd8d7a64bcf9"} Dec 10 12:16:27 crc kubenswrapper[4689]: I1210 12:16:27.224685 4689 scope.go:117] "RemoveContainer" containerID="8d09cb58430de4c8d23c0dc3bbe3de52bdf4b79da14b87e8ac847bc47439a5a9" Dec 10 12:16:27 crc kubenswrapper[4689]: I1210 12:16:27.227003 4689 scope.go:117] "RemoveContainer" containerID="4f88a12855fe2301c439492cf5773c7088e33a7d6bcca5354ebcfd8d7a64bcf9" Dec 10 12:16:27 crc kubenswrapper[4689]: E1210 12:16:27.227423 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-d5s24_openshift-ovn-kubernetes(b434fe8e-c4c2-4979-a9b6-8561523c2d9d)\"" pod="openshift-ovn-kubernetes/ovnkube-node-d5s24" podUID="b434fe8e-c4c2-4979-a9b6-8561523c2d9d" Dec 10 12:16:27 crc kubenswrapper[4689]: I1210 12:16:27.248716 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81ec49dd709fb88fdd4a831be61a82471be37ad10b0dca1cb0e34468ef5e0e3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:16:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:27Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:27 crc kubenswrapper[4689]: I1210 12:16:27.268494 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-77s7q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9f25cd73-d88f-4d52-93b6-483589dc4ac4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8ffa3e3c20704e59130d99f82bf7c73982dadfcd06b8b666265324aeb9115ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:16:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kwnnt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T12:16:01Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-77s7q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:27Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:27 crc kubenswrapper[4689]: I1210 12:16:27.271164 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:16:27 crc kubenswrapper[4689]: I1210 12:16:27.271229 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:16:27 crc kubenswrapper[4689]: I1210 12:16:27.271242 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:16:27 crc kubenswrapper[4689]: I1210 12:16:27.271268 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:16:27 crc kubenswrapper[4689]: I1210 12:16:27.271283 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:27Z","lastTransitionTime":"2025-12-10T12:16:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 12:16:27 crc kubenswrapper[4689]: I1210 12:16:27.286441 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-dk9hq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4aff4b91-a4af-46bd-93ba-a7c1356fc498\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e400561fd4aa4adc6f0733a71b55aff1a6163a71bf34147aeeb41f95001d8222\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:16:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c2rw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T12:16:04Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-dk9hq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:27Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:27 crc kubenswrapper[4689]: I1210 12:16:27.302216 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-2h8hs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a7c472c-c2dd-4a9d-aefb-48ac5cc196f8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kcnt5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kcnt5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T12:16:15Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-2h8hs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:27Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:27 crc kubenswrapper[4689]: I1210 12:16:27.320441 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f310ccc9-5093-44fe-8271-32ffed1e4596\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be1e74c974ed4454098265113e97e5d5d27b2ddb6ff5d89cb891c1686290f8f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://14c34a55854008d99289ec6ff7df4ab7b8c60658569094a88e6740ceb28714d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://66af8611123a29e82941af3a1facb45f06c358ffb277bd037af034ae97364d56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b2e1ebfdbacde06d36ebe61a04c2977a980def50e80715de5f8a9a292e661915\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b2e1ebfdbacde06d36ebe61a04c2977a980def50e80715de5f8a9a292e661915\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T12:15:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T12:15:33Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T12:15:32Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:27Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:27 crc kubenswrapper[4689]: I1210 12:16:27.339562 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef7248d1db79267842514a13623e9e08f8e412e4af33a70b4fceafe7a9e43392\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:27Z is after 
2025-08-24T17:21:41Z" Dec 10 12:16:27 crc kubenswrapper[4689]: I1210 12:16:27.358389 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:27Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:27 crc kubenswrapper[4689]: I1210 12:16:27.375827 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:16:27 crc kubenswrapper[4689]: I1210 12:16:27.375901 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:16:27 crc kubenswrapper[4689]: I1210 12:16:27.375919 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:16:27 crc kubenswrapper[4689]: I1210 12:16:27.375946 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:16:27 crc kubenswrapper[4689]: I1210 12:16:27.376005 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:27Z","lastTransitionTime":"2025-12-10T12:16:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 12:16:27 crc kubenswrapper[4689]: I1210 12:16:27.381016 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-r6wmt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3713b4f8-2ee3-4078-859a-dca17076f9a6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://41b7cefca05f1180d687337ba2a408d0db12aad394a7e9619e004758e5d79ca0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:16:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lmrn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\
\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T12:16:01Z\\\"}}\" for pod \"openshift-multus\"/\"multus-r6wmt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:27Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:27 crc kubenswrapper[4689]: I1210 12:16:27.404928 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-k7kbl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bf732b59-88ab-4673-9d0c-e479b9138b30\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6885c72d7b303bd0ad949b6b9f582f3c1a503d3ac4d1871ce164351cc5231233\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:16:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6jq2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b32c1925876e067185b141b3e107d582b5ae3b422965cded935fc67935907244\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b32c1925876e067185b141b3e107d582b5ae3b422965cded935fc67935907244\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T12:16:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T12:16:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni
/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6jq2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://56edfde6c25c67ed13c859224563601ca60ff86122c0ccfd0eafd009f56c2c6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://56edfde6c25c67ed13c859224563601ca60ff86122c0ccfd0eafd009f56c2c6a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T12:16:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T12:16:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6jq2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6972bdb1edc7d0216e984f43dd93319f5fbe91dbbf251fddcab234566c12694\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6972bdb1edc7d0216e984f43dd93319f5fbe91dbbf251fddcab234566c12694\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T12:16:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T12:16:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6jq2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://92ed0a4af61a7d20d96a8628e34b3b07cf93fb88bb8c67a2db58c140977d4b8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeover
ride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92ed0a4af61a7d20d96a8628e34b3b07cf93fb88bb8c67a2db58c140977d4b8f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T12:16:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T12:16:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6jq2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a00343d8419f3bd26ca5eebc37892fa34c893db822a472b2d16b79b9a886130\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3a00343d8419f3bd26ca5eebc37892fa34c893db822a472b2d16b79b9a886130\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T12:16:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T12:16:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6jq2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae1b0abfe7d118fce3906b0fb8609b89bb5a7d9a72e77c989248a37fbcaca589\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ae1b0abfe7d118fce3906b0fb8609b89bb5a7d9a72e77c989248a37fbcaca589\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T12:16:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T12:16:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6jq2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T12:16:01Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-additional-cni-plugins-k7kbl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:27Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:27 crc kubenswrapper[4689]: I1210 12:16:27.429523 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-d5s24" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b434fe8e-c4c2-4979-a9b6-8561523c2d9d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4dd21d5fefe31117dd9f13896400256a11840488886f8455dc177c0389e942a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:16:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgcnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://69f1e8960af0a51f61c14bd54dc3d3be7189911242699061b6a5e265bf23da21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:16:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/
etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgcnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6523d85b2b13f0025f328e66fc54d988a262d66575656c50d792bebc6a422301\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:16:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgcnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4faf578e1c2dcab798cd856befbd2dfc061f7e003ead2304d437041d5ee1dd07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:16:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgcnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eebe28cf16767ec011f1e571cdb5f6babf931f4444e92fccf2f1bb5aa5deeec9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:16:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgcnl\
\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://17d173f3388ef7471022163cbdf3e0a2454e42b1ed621b15512414504b2d9723\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:16:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgcnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f88a12855fe2301c439492cf5773c7088e33a7d6bcca5354ebcfd8d7a64bcf9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8d09cb58430de4c8d23c0dc3bbe3de52bdf4b79da14b87e8ac847bc47439a5a9\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-10T12:16:12Z\\\",\\\"message\\\":\\\"2:16:12.092815 6107 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1210 12:16:12.092841 6107 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1210 12:16:12.092842 6107 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1210 12:16:12.092857 6107 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1210 12:16:12.092838 6107 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1210 12:16:12.092885 6107 handler.go:208] Removed *v1.Pod event handler 6\\\\nI1210 12:16:12.092884 6107 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1210 12:16:12.092870 6107 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1210 12:16:12.092906 6107 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1210 12:16:12.092930 6107 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1210 12:16:12.092937 6107 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1210 12:16:12.092944 6107 handler.go:208] Removed *v1.Node event handler 2\\\\nI1210 12:16:12.092954 6107 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1210 12:16:12.092964 6107 factory.go:656] Stopping watch factory\\\\nI1210 12:16:12.092962 6107 handler.go:208] Removed *v1.Node event handler 7\\\\nI1210 12:16:12.092991 6107 handler.go:208] Removed *v1.Namespace 
ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-10T12:16:11Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f88a12855fe2301c439492cf5773c7088e33a7d6bcca5354ebcfd8d7a64bcf9\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-10T12:16:26Z\\\",\\\"message\\\":\\\"all/v1/apis/informers/externalversions/factory.go:140\\\\nI1210 12:16:26.477038 6326 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1210 12:16:26.477070 6326 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1210 12:16:26.477096 6326 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1210 12:16:26.477127 6326 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1210 12:16:26.477136 6326 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1210 12:16:26.477156 6326 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1210 12:16:26.477157 6326 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1210 12:16:26.477180 6326 handler.go:208] Removed *v1.Node event handler 2\\\\nI1210 12:16:26.477212 6326 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1210 12:16:26.477246 6326 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1210 12:16:26.477297 6326 handler.go:208] Removed *v1.Node event handler 7\\\\nI1210 12:16:26.477299 6326 factory.go:656] Stopping watch factory\\\\nI1210 12:16:26.477310 6326 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1210 12:16:26.477326 6326 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1210 12:16:26.477355 6326 handler.go:208] Removed *v1.Namespace 
ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-10T12:16:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgcnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e28b779b8a8693084ab0997decc53b3a8eba61bef24ac90ff251d915f086568c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:16:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgcnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5b9a4f871b33ab8d7dbb8148e8a118858ae0d529510a8aebda0a3d9db97c137\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20
99482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b5b9a4f871b33ab8d7dbb8148e8a118858ae0d529510a8aebda0a3d9db97c137\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T12:16:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T12:16:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgcnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T12:16:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-d5s24\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:27Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:27 crc kubenswrapper[4689]: I1210 12:16:27.451662 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"948e2421-6bdf-45d9-b484-3d7cfcbeff5f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73fb5dc9c58a2a2027866853aecf404bcc2babdb9e5bbd3cf0efcb16f38ba351\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e167a00c3a4e40154ca58094cba41a896104fd81044158589ed4d7a06dba9e7d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-clust
er-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://394f62bb6c4fa6d9b63f8404087ff96594e2e38240c6c7886ad71fbb7863681f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d3818f3ff6d23a4b5f19b82c3ce7c6a9a0275f981966b78d0f1d5977acfc5da\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cc1b41d65298f8986fa477bbe4462be91f61b46be4420fac49fe9b1cff91d4d3\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-10T12:15:57Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1210 12:15:46.775770 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1210 12:15:46.777515 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-772356655/tls.crt::/tmp/serving-cert-772356655/tls.key\\\\\\\"\\\\nI1210 12:15:55.300683 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1210 12:15:55.311834 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1210 12:15:55.311870 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1210 12:15:55.311907 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1210 12:15:55.311937 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1210 12:15:55.334823 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1210 12:15:55.334877 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1210 12:15:55.334889 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1210 12:15:55.334913 
1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1210 12:15:55.334923 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1210 12:15:55.334932 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1210 12:15:55.334948 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1210 12:15:55.335346 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1210 12:15:55.337241 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-10T12:15:36Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://63c6b92fa349dafbaaa4e2700676f8c1c53627fc703dba25853c7457bf7eb439\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:34Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9e2628add435f4ba7d9b92d1f4e9cf619c37e8dc954d6fce13ef72581ae775e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9e2628add435f4ba7d9b92d1f4e9cf619c37e8dc954d6fce13ef72581ae775e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T12:15:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T12:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T12:15:32Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:27Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:27 crc kubenswrapper[4689]: I1210 12:16:27.473210 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"18bf8be2-3022-4c8b-a062-e12ddac113a8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://44524b486bd4c6b69eb1219605d0545eaa129605484dfbca87ca741ce71c2991\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://604acbb51bb890eb420955051c89aa5a8f2005ce63f1fe39c09c35dc3c8b4470\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d4addfb661bd4ba43817e6e5db29f30589839e5afe46a8f271e43c98920efdc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a68967b286ccc9c486e05b03201574bfa811ee68b0cbcbb2b0b1a379f5c1bab\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cl
uster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T12:15:32Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:27Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:27 crc kubenswrapper[4689]: I1210 12:16:27.483293 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:16:27 crc kubenswrapper[4689]: I1210 12:16:27.483487 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:16:27 crc kubenswrapper[4689]: I1210 12:16:27.483517 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:16:27 crc kubenswrapper[4689]: I1210 12:16:27.483714 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:16:27 crc kubenswrapper[4689]: I1210 12:16:27.483817 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:27Z","lastTransitionTime":"2025-12-10T12:16:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 12:16:27 crc kubenswrapper[4689]: I1210 12:16:27.493043 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:57Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:27Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:27 crc kubenswrapper[4689]: I1210 12:16:27.497664 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 10 12:16:27 crc kubenswrapper[4689]: I1210 12:16:27.497707 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2h8hs" Dec 10 12:16:27 crc kubenswrapper[4689]: I1210 12:16:27.497742 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 10 12:16:27 crc kubenswrapper[4689]: E1210 12:16:27.497809 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 10 12:16:27 crc kubenswrapper[4689]: E1210 12:16:27.498041 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 10 12:16:27 crc kubenswrapper[4689]: E1210 12:16:27.498123 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2h8hs" podUID="3a7c472c-c2dd-4a9d-aefb-48ac5cc196f8" Dec 10 12:16:27 crc kubenswrapper[4689]: I1210 12:16:27.510805 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-db6zk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a41ebdcd-910f-4669-992d-296e1a92162b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://21eff13bf08e5631078b18a6e63eea07f7ae0881f4d340b92273d47e443fc71c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:16:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68btn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ffc5e2e1cb81aa530273fc5c44b4eb25b760dfffa203e312280e7a1c0150505\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1
bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:16:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68btn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T12:16:01Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-db6zk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:27Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:27 crc kubenswrapper[4689]: I1210 12:16:27.529791 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06f8726511d5ea564b457eeb6de5b75275d78ee946595d8fbe37e575f3a1b4ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://879d2e122b3f677f755d79b93cd0abed60360a7b58bd685fbc229157c3358bbf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\"
:{\\\"startedAt\\\":\\\"2025-12-10T12:15:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:27Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:27 crc kubenswrapper[4689]: I1210 12:16:27.550751 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:57Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:27Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:27 crc kubenswrapper[4689]: I1210 12:16:27.568010 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-6ffqh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7908b49b-e3d2-4d30-95e0-467f5542d445\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34ba576e7b07f1a070450f2a750ed7f9e1ed7b3023f13340ffadbdbbc2f564bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:16:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8t2fq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5081b88bdb2e39135bca6cf70283aeb3f9541d28ab7336c5fa9d17f28496f7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2
099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:16:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8t2fq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T12:16:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-6ffqh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:27Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:27 crc kubenswrapper[4689]: I1210 12:16:27.588212 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:16:27 crc kubenswrapper[4689]: I1210 12:16:27.588292 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:16:27 crc kubenswrapper[4689]: I1210 12:16:27.588316 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:16:27 crc kubenswrapper[4689]: I1210 12:16:27.588348 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:16:27 crc kubenswrapper[4689]: I1210 12:16:27.588375 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:27Z","lastTransitionTime":"2025-12-10T12:16:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 12:16:27 crc kubenswrapper[4689]: I1210 12:16:27.691906 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:16:27 crc kubenswrapper[4689]: I1210 12:16:27.691963 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:16:27 crc kubenswrapper[4689]: I1210 12:16:27.692015 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:16:27 crc kubenswrapper[4689]: I1210 12:16:27.692043 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:16:27 crc kubenswrapper[4689]: I1210 12:16:27.692067 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:27Z","lastTransitionTime":"2025-12-10T12:16:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 12:16:27 crc kubenswrapper[4689]: I1210 12:16:27.794265 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:16:27 crc kubenswrapper[4689]: I1210 12:16:27.794301 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:16:27 crc kubenswrapper[4689]: I1210 12:16:27.794309 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:16:27 crc kubenswrapper[4689]: I1210 12:16:27.794321 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:16:27 crc kubenswrapper[4689]: I1210 12:16:27.794356 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:27Z","lastTransitionTime":"2025-12-10T12:16:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 12:16:27 crc kubenswrapper[4689]: I1210 12:16:27.898512 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:16:27 crc kubenswrapper[4689]: I1210 12:16:27.898551 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:16:27 crc kubenswrapper[4689]: I1210 12:16:27.898563 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:16:27 crc kubenswrapper[4689]: I1210 12:16:27.898578 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:16:27 crc kubenswrapper[4689]: I1210 12:16:27.898591 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:27Z","lastTransitionTime":"2025-12-10T12:16:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 12:16:28 crc kubenswrapper[4689]: I1210 12:16:28.001062 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:16:28 crc kubenswrapper[4689]: I1210 12:16:28.001117 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:16:28 crc kubenswrapper[4689]: I1210 12:16:28.001128 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:16:28 crc kubenswrapper[4689]: I1210 12:16:28.001147 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:16:28 crc kubenswrapper[4689]: I1210 12:16:28.001162 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:28Z","lastTransitionTime":"2025-12-10T12:16:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 12:16:28 crc kubenswrapper[4689]: I1210 12:16:28.103928 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:16:28 crc kubenswrapper[4689]: I1210 12:16:28.104261 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:16:28 crc kubenswrapper[4689]: I1210 12:16:28.104288 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:16:28 crc kubenswrapper[4689]: I1210 12:16:28.104320 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:16:28 crc kubenswrapper[4689]: I1210 12:16:28.104345 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:28Z","lastTransitionTime":"2025-12-10T12:16:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 12:16:28 crc kubenswrapper[4689]: I1210 12:16:28.207858 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:16:28 crc kubenswrapper[4689]: I1210 12:16:28.207915 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:16:28 crc kubenswrapper[4689]: I1210 12:16:28.207929 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:16:28 crc kubenswrapper[4689]: I1210 12:16:28.207946 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:16:28 crc kubenswrapper[4689]: I1210 12:16:28.207957 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:28Z","lastTransitionTime":"2025-12-10T12:16:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 12:16:28 crc kubenswrapper[4689]: I1210 12:16:28.230348 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-d5s24_b434fe8e-c4c2-4979-a9b6-8561523c2d9d/ovnkube-controller/2.log" Dec 10 12:16:28 crc kubenswrapper[4689]: I1210 12:16:28.235319 4689 scope.go:117] "RemoveContainer" containerID="4f88a12855fe2301c439492cf5773c7088e33a7d6bcca5354ebcfd8d7a64bcf9" Dec 10 12:16:28 crc kubenswrapper[4689]: E1210 12:16:28.235595 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-d5s24_openshift-ovn-kubernetes(b434fe8e-c4c2-4979-a9b6-8561523c2d9d)\"" pod="openshift-ovn-kubernetes/ovnkube-node-d5s24" podUID="b434fe8e-c4c2-4979-a9b6-8561523c2d9d" Dec 10 12:16:28 crc kubenswrapper[4689]: I1210 12:16:28.255288 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:57Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:28Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:28 crc kubenswrapper[4689]: I1210 12:16:28.274387 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-db6zk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a41ebdcd-910f-4669-992d-296e1a92162b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://21eff13bf08e5631078b18a6e63eea07f7ae0881f4d340b92273d47e443fc71c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:16:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68btn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ffc5e2e1cb81aa530273fc5c44b4eb25b760dfffa203e312280e7a1c0150505\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\
\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:16:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68btn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T12:16:01Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-db6zk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:28Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:28 crc kubenswrapper[4689]: I1210 12:16:28.299561 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-k7kbl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bf732b59-88ab-4673-9d0c-e479b9138b30\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6885c72d7b303bd0ad949b6b9f582f3c1a503d3ac4d1871ce164351cc5231233\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:16:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6jq2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b32c1925876e067185b141b3e107d582b5ae3b422965cded935fc67935907244\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\
"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b32c1925876e067185b141b3e107d582b5ae3b422965cded935fc67935907244\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T12:16:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T12:16:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6jq2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://56edfde6c25c67ed13c859224563601ca60ff86122c0ccfd0eafd009f56c2c6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://56edfde6c25c67ed13c859224563601ca60ff86122c0ccfd0eafd009f56c2c6a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T12:16:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T12:16:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6jq2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6972bdb1edc7d0216e984f43dd93319f5fbe91dbbf251fddcab234566c12694\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6972bdb1edc7d0216e984f43dd93319f5fbe91dbbf251fddcab234566c12694\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T12:16:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T12:16:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount
\\\",\\\"name\\\":\\\"kube-api-access-6jq2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://92ed0a4af61a7d20d96a8628e34b3b07cf93fb88bb8c67a2db58c140977d4b8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92ed0a4af61a7d20d96a8628e34b3b07cf93fb88bb8c67a2db58c140977d4b8f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T12:16:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T12:16:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6jq2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a00343d8419f3bd26ca5eebc37892fa34c893db822a472b2d16b79b9a886130\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3a00343d8419f3bd26ca5eebc37892fa34c893db822a472b2d16b79b9a886130\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T12:16:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T12:16:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6jq2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae1b0abfe7d118fce3906b0fb8609b89bb5a7d9a72e77c989248a37fbcaca589\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ae1b0abfe7d118fce3906b0fb8609b89bb5a7d9a72e77c989248a37fbcaca589\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T12:16:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T12:16:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\
"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6jq2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T12:16:01Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-k7kbl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:28Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:28 crc kubenswrapper[4689]: I1210 12:16:28.311298 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:16:28 crc kubenswrapper[4689]: I1210 12:16:28.311363 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:16:28 crc kubenswrapper[4689]: I1210 12:16:28.311400 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:16:28 crc kubenswrapper[4689]: I1210 12:16:28.311434 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:16:28 crc kubenswrapper[4689]: I1210 12:16:28.311458 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:28Z","lastTransitionTime":"2025-12-10T12:16:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 12:16:28 crc kubenswrapper[4689]: I1210 12:16:28.337264 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-d5s24" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b434fe8e-c4c2-4979-a9b6-8561523c2d9d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4dd21d5fefe31117dd9f13896400256a11840488886f8455dc177c0389e942a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:16:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgcnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://69f1e8960af0a51f61c14bd54dc3d3be7189911242699061b6a5e265bf23da21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:16:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgcnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\
":\\\"cri-o://6523d85b2b13f0025f328e66fc54d988a262d66575656c50d792bebc6a422301\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:16:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgcnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4faf578e1c2dcab798cd856befbd2dfc061f7e003ead2304d437041d5ee1dd07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:16:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgcnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eebe28cf16767ec011f1e571cdb5f6babf931f4444e92fccf2f1bb5aa5deeec9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:16:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgcnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://17d173f3388ef7471022163cbdf3e0a2454e42b1ed621b15512414504b2d9723\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:16:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgcnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f88a12855fe2301c439492cf5773c7088e33a7d6bcca5354ebcfd8d7a64bcf9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f88a12855fe2301c439492cf5773c7088e33a7d6bcca5354ebcfd8d7a64bcf9\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-10T12:16:26Z\\\",\\\"message\\\":\\\"all/v1/apis/informers/externalversions/factory.go:140\\\\nI1210 12:16:26.477038 6326 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1210 12:16:26.477070 6326 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1210 12:16:26.477096 6326 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1210 12:16:26.477127 6326 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1210 12:16:26.477136 6326 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1210 12:16:26.477156 6326 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1210 12:16:26.477157 6326 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1210 12:16:26.477180 6326 handler.go:208] Removed *v1.Node event handler 2\\\\nI1210 12:16:26.477212 6326 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1210 12:16:26.477246 6326 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1210 12:16:26.477297 6326 handler.go:208] Removed *v1.Node event handler 7\\\\nI1210 12:16:26.477299 6326 factory.go:656] Stopping watch factory\\\\nI1210 12:16:26.477310 6326 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1210 12:16:26.477326 6326 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1210 12:16:26.477355 6326 handler.go:208] Removed *v1.Namespace ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-10T12:16:25Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-d5s24_openshift-ovn-kubernetes(b434fe8e-c4c2-4979-a9b6-8561523c2d9d)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgcnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e28b779b8a8693084ab0997decc53b3a8eba61bef24ac90ff251d915f086568c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:16:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgcnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5b9a4f871b33ab8d7dbb8148e8a118858ae0d529510a8aebda0a3d9db97c137\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b5b9a4f871b33ab8d7dbb8148e8a118858ae0d529510a8aebda0a3d9db97c137\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T12:16:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T12:16:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgcnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T12:16:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-d5s24\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:28Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:28 crc kubenswrapper[4689]: I1210 12:16:28.359080 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"948e2421-6bdf-45d9-b484-3d7cfcbeff5f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73fb5dc9c58a2a2027866853aecf404bcc2babdb9e5bbd3cf0efcb16f38ba351\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e167a00c3a4e40154ca58094cba41a896104fd81044158589ed4d7a06dba9e7d\\\",\\\"i
mage\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://394f62bb6c4fa6d9b63f8404087ff96594e2e38240c6c7886ad71fbb7863681f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d3818f3ff6d23a4b5f19b82c3ce7c6a9a0275f981966b78d0f1d5977acfc5da\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cc1b41d65298f8986fa477bbe4462be91f61b46be4420fac49fe9b1cff91d4d3\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-10T12:15:57Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1210 12:15:46.775770 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1210 12:15:46.777515 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-772356655/tls.crt::/tmp/serving-cert-772356655/tls.key\\\\\\\"\\\\nI1210 12:15:55.300683 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1210 12:15:55.311834 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1210 12:15:55.311870 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1210 12:15:55.311907 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1210 12:15:55.311937 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1210 12:15:55.334823 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1210 12:15:55.334877 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1210 12:15:55.334889 1 secure_serving.go:69] Use of insecure cipher 
'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1210 12:15:55.334913 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1210 12:15:55.334923 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1210 12:15:55.334932 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1210 12:15:55.334948 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1210 12:15:55.335346 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1210 12:15:55.337241 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-10T12:15:36Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://63c6b92fa349dafbaaa4e2700676f8c1c53627fc703dba25853c7457bf7eb439\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:34Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9e2628add435f4ba7d9b92d1f4e9cf619c37e8dc954d6fce13ef72581ae775e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9e2628add435f4ba7d9b92d1f4e9cf619c37e8dc954d6fce13ef72581ae775e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T12:15:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T12:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T12:15:32Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:28Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:28 crc kubenswrapper[4689]: I1210 12:16:28.379642 4689 status_manager.go:875] "Failed to update status 
for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"18bf8be2-3022-4c8b-a062-e12ddac113a8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://44524b486bd4c6b69eb1219605d0545eaa129605484dfbca87ca741ce71c2991\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://604acbb51bb890eb420955051c89aa5a8f2005ce63f1fe39c09c35dc3c8b4470\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d4addfb661bd4ba43817e6e5db29f30589839e5afe46a8f271e43c98920efdc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a68967b286ccc9c486e05b03201574bfa811ee68
b0cbcbb2b0b1a379f5c1bab\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T12:15:32Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:28Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:28 crc kubenswrapper[4689]: I1210 12:16:28.399796 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06f8726511d5ea564b457eeb6de5b75275d78ee946595d8fbe37e575f3a1b4ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://879d2e122b3f677f755d79b93cd0abed60360a7b58bd685fbc229157c3358bbf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41a
c2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:28Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:28 crc kubenswrapper[4689]: I1210 12:16:28.414552 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:16:28 crc kubenswrapper[4689]: I1210 12:16:28.414621 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:16:28 crc kubenswrapper[4689]: I1210 12:16:28.414648 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:16:28 crc kubenswrapper[4689]: I1210 12:16:28.414679 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:16:28 crc kubenswrapper[4689]: I1210 12:16:28.414702 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:28Z","lastTransitionTime":"2025-12-10T12:16:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 12:16:28 crc kubenswrapper[4689]: I1210 12:16:28.419633 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:57Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:28Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:28 crc kubenswrapper[4689]: I1210 12:16:28.437270 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-6ffqh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7908b49b-e3d2-4d30-95e0-467f5542d445\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34ba576e7b07f1a070450f2a750ed7f9e1ed7b3023f13340ffadbdbbc2f564bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:16:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8t2fq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5081b88bdb2e39135bca6cf70283aeb3f9541d28ab7336c5fa9d17f28496f7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:16:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8t2fq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T12:16:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-6ffqh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:28Z is after 2025-08-24T17:21:41Z" Dec 10 
12:16:28 crc kubenswrapper[4689]: I1210 12:16:28.456097 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81ec49dd709fb88fdd4a831be61a82471be37ad10b0dca1cb0e34468ef5e0e3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:16:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:28Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:28 crc kubenswrapper[4689]: I1210 12:16:28.473563 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-77s7q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9f25cd73-d88f-4d52-93b6-483589dc4ac4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8ffa3e3c20704e59130d99f82bf7c73982dadfcd06b8b666265324aeb9115ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:16:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kwnnt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T12:16:01Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-77s7q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:28Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:28 crc kubenswrapper[4689]: I1210 12:16:28.496629 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:28Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:28 crc kubenswrapper[4689]: I1210 12:16:28.497956 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 10 12:16:28 crc kubenswrapper[4689]: E1210 12:16:28.498225 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 10 12:16:28 crc kubenswrapper[4689]: I1210 12:16:28.516344 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:16:28 crc kubenswrapper[4689]: I1210 12:16:28.516378 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:16:28 crc kubenswrapper[4689]: I1210 12:16:28.516386 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:16:28 crc kubenswrapper[4689]: I1210 12:16:28.516400 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:16:28 crc kubenswrapper[4689]: I1210 12:16:28.516410 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:28Z","lastTransitionTime":"2025-12-10T12:16:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 12:16:28 crc kubenswrapper[4689]: I1210 12:16:28.519178 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-r6wmt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3713b4f8-2ee3-4078-859a-dca17076f9a6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://41b7cefca05f1180d687337ba2a408d0db12aad394a7e9619e004758e5d79ca0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:16:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lmrn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126
.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T12:16:01Z\\\"}}\" for pod \"openshift-multus\"/\"multus-r6wmt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:28Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:28 crc kubenswrapper[4689]: I1210 12:16:28.537034 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-dk9hq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4aff4b91-a4af-46bd-93ba-a7c1356fc498\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e400561fd4aa4adc6f0733a71b55aff1a6163a71bf34147aeeb41f95001d8222\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:16:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c2rw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T12:16:04Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-dk9hq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:28Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:28 crc kubenswrapper[4689]: I1210 12:16:28.552937 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-2h8hs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a7c472c-c2dd-4a9d-aefb-48ac5cc196f8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kcnt5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kcnt5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T12:16:15Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-2h8hs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:28Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:28 crc kubenswrapper[4689]: I1210 12:16:28.570779 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f310ccc9-5093-44fe-8271-32ffed1e4596\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be1e74c974ed4454098265113e97e5d5d27b2ddb6ff5d89cb891c1686290f8f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://14c34a55854008d99289ec6ff7df4ab7b8c60658569094a88e6740ceb28714d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://66af8611123a29e82941af3a1facb45f06c358ffb277bd037af034ae97364d56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b2e1ebfdbacde06d36ebe61a04c2977a980def50e80715de5f8a9a292e661915\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b2e1ebfdbacde06d36ebe61a04c2977a980def50e80715de5f8a9a292e661915\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T12:15:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T12:15:33Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T12:15:32Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:28Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:28 crc kubenswrapper[4689]: I1210 12:16:28.591422 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef7248d1db79267842514a13623e9e08f8e412e4af33a70b4fceafe7a9e43392\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:28Z is after 
2025-08-24T17:21:41Z" Dec 10 12:16:28 crc kubenswrapper[4689]: I1210 12:16:28.619578 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:16:28 crc kubenswrapper[4689]: I1210 12:16:28.619616 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:16:28 crc kubenswrapper[4689]: I1210 12:16:28.619629 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:16:28 crc kubenswrapper[4689]: I1210 12:16:28.619645 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:16:28 crc kubenswrapper[4689]: I1210 12:16:28.619687 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:28Z","lastTransitionTime":"2025-12-10T12:16:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 12:16:28 crc kubenswrapper[4689]: I1210 12:16:28.723015 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:16:28 crc kubenswrapper[4689]: I1210 12:16:28.723117 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:16:28 crc kubenswrapper[4689]: I1210 12:16:28.723176 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:16:28 crc kubenswrapper[4689]: I1210 12:16:28.723201 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:16:28 crc kubenswrapper[4689]: I1210 12:16:28.723258 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:28Z","lastTransitionTime":"2025-12-10T12:16:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 12:16:28 crc kubenswrapper[4689]: I1210 12:16:28.826443 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:16:28 crc kubenswrapper[4689]: I1210 12:16:28.826524 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:16:28 crc kubenswrapper[4689]: I1210 12:16:28.826541 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:16:28 crc kubenswrapper[4689]: I1210 12:16:28.826588 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:16:28 crc kubenswrapper[4689]: I1210 12:16:28.826602 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:28Z","lastTransitionTime":"2025-12-10T12:16:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 12:16:28 crc kubenswrapper[4689]: I1210 12:16:28.929881 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:16:28 crc kubenswrapper[4689]: I1210 12:16:28.930044 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:16:28 crc kubenswrapper[4689]: I1210 12:16:28.930073 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:16:28 crc kubenswrapper[4689]: I1210 12:16:28.930103 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:16:28 crc kubenswrapper[4689]: I1210 12:16:28.930125 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:28Z","lastTransitionTime":"2025-12-10T12:16:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 12:16:29 crc kubenswrapper[4689]: I1210 12:16:29.032712 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:16:29 crc kubenswrapper[4689]: I1210 12:16:29.032788 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:16:29 crc kubenswrapper[4689]: I1210 12:16:29.032809 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:16:29 crc kubenswrapper[4689]: I1210 12:16:29.032833 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:16:29 crc kubenswrapper[4689]: I1210 12:16:29.032851 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:29Z","lastTransitionTime":"2025-12-10T12:16:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 12:16:29 crc kubenswrapper[4689]: I1210 12:16:29.136436 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:16:29 crc kubenswrapper[4689]: I1210 12:16:29.136489 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:16:29 crc kubenswrapper[4689]: I1210 12:16:29.136500 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:16:29 crc kubenswrapper[4689]: I1210 12:16:29.136518 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:16:29 crc kubenswrapper[4689]: I1210 12:16:29.136533 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:29Z","lastTransitionTime":"2025-12-10T12:16:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 12:16:29 crc kubenswrapper[4689]: I1210 12:16:29.238447 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:16:29 crc kubenswrapper[4689]: I1210 12:16:29.238485 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:16:29 crc kubenswrapper[4689]: I1210 12:16:29.238496 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:16:29 crc kubenswrapper[4689]: I1210 12:16:29.238512 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:16:29 crc kubenswrapper[4689]: I1210 12:16:29.238523 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:29Z","lastTransitionTime":"2025-12-10T12:16:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 12:16:29 crc kubenswrapper[4689]: I1210 12:16:29.275459 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 10 12:16:29 crc kubenswrapper[4689]: E1210 12:16:29.275632 4689 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-10 12:17:01.275595358 +0000 UTC m=+89.063676536 (durationBeforeRetry 32s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 12:16:29 crc kubenswrapper[4689]: I1210 12:16:29.275700 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 10 12:16:29 crc kubenswrapper[4689]: I1210 12:16:29.275772 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 10 12:16:29 crc kubenswrapper[4689]: E1210 12:16:29.275953 4689 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 10 12:16:29 crc kubenswrapper[4689]: E1210 12:16:29.276016 4689 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 10 12:16:29 crc kubenswrapper[4689]: E1210 12:16:29.276105 4689 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-10 12:17:01.276079408 +0000 UTC m=+89.064160586 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 10 12:16:29 crc kubenswrapper[4689]: E1210 12:16:29.276145 4689 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-10 12:17:01.276128989 +0000 UTC m=+89.064210167 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 10 12:16:29 crc kubenswrapper[4689]: I1210 12:16:29.341749 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:16:29 crc kubenswrapper[4689]: I1210 12:16:29.341826 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:16:29 crc kubenswrapper[4689]: I1210 12:16:29.341844 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:16:29 crc kubenswrapper[4689]: I1210 12:16:29.341871 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:16:29 crc kubenswrapper[4689]: I1210 12:16:29.341889 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:29Z","lastTransitionTime":"2025-12-10T12:16:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 12:16:29 crc kubenswrapper[4689]: I1210 12:16:29.377303 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 10 12:16:29 crc kubenswrapper[4689]: I1210 12:16:29.377392 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 10 12:16:29 crc kubenswrapper[4689]: E1210 12:16:29.377585 4689 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 10 12:16:29 crc kubenswrapper[4689]: E1210 12:16:29.377600 4689 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 10 12:16:29 crc kubenswrapper[4689]: E1210 12:16:29.377657 4689 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 10 12:16:29 crc kubenswrapper[4689]: E1210 12:16:29.377612 4689 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 10 12:16:29 crc kubenswrapper[4689]: E1210 12:16:29.377681 4689 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object 
"openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 10 12:16:29 crc kubenswrapper[4689]: E1210 12:16:29.377693 4689 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 10 12:16:29 crc kubenswrapper[4689]: E1210 12:16:29.377856 4689 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-12-10 12:17:01.377819202 +0000 UTC m=+89.165900400 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 10 12:16:29 crc kubenswrapper[4689]: E1210 12:16:29.377961 4689 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-12-10 12:17:01.377935835 +0000 UTC m=+89.166017003 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 10 12:16:29 crc kubenswrapper[4689]: I1210 12:16:29.445375 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:16:29 crc kubenswrapper[4689]: I1210 12:16:29.445482 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:16:29 crc kubenswrapper[4689]: I1210 12:16:29.445500 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:16:29 crc kubenswrapper[4689]: I1210 12:16:29.445524 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:16:29 crc kubenswrapper[4689]: I1210 12:16:29.445543 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:29Z","lastTransitionTime":"2025-12-10T12:16:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 12:16:29 crc kubenswrapper[4689]: I1210 12:16:29.497201 4689 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 10 12:16:29 crc kubenswrapper[4689]: I1210 12:16:29.497304 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2h8hs" Dec 10 12:16:29 crc kubenswrapper[4689]: I1210 12:16:29.497317 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 10 12:16:29 crc kubenswrapper[4689]: E1210 12:16:29.497437 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 10 12:16:29 crc kubenswrapper[4689]: E1210 12:16:29.497602 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2h8hs" podUID="3a7c472c-c2dd-4a9d-aefb-48ac5cc196f8" Dec 10 12:16:29 crc kubenswrapper[4689]: E1210 12:16:29.497704 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 10 12:16:29 crc kubenswrapper[4689]: I1210 12:16:29.548863 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:16:29 crc kubenswrapper[4689]: I1210 12:16:29.549011 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:16:29 crc kubenswrapper[4689]: I1210 12:16:29.549031 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:16:29 crc kubenswrapper[4689]: I1210 12:16:29.549062 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:16:29 crc kubenswrapper[4689]: I1210 12:16:29.549080 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:29Z","lastTransitionTime":"2025-12-10T12:16:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 12:16:29 crc kubenswrapper[4689]: I1210 12:16:29.652422 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:16:29 crc kubenswrapper[4689]: I1210 12:16:29.652488 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:16:29 crc kubenswrapper[4689]: I1210 12:16:29.652512 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:16:29 crc kubenswrapper[4689]: I1210 12:16:29.652542 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:16:29 crc kubenswrapper[4689]: I1210 12:16:29.652566 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:29Z","lastTransitionTime":"2025-12-10T12:16:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 12:16:29 crc kubenswrapper[4689]: I1210 12:16:29.755879 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:16:29 crc kubenswrapper[4689]: I1210 12:16:29.755952 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:16:29 crc kubenswrapper[4689]: I1210 12:16:29.755963 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:16:29 crc kubenswrapper[4689]: I1210 12:16:29.755993 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:16:29 crc kubenswrapper[4689]: I1210 12:16:29.756007 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:29Z","lastTransitionTime":"2025-12-10T12:16:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 12:16:29 crc kubenswrapper[4689]: I1210 12:16:29.859257 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:16:29 crc kubenswrapper[4689]: I1210 12:16:29.859321 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:16:29 crc kubenswrapper[4689]: I1210 12:16:29.859343 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:16:29 crc kubenswrapper[4689]: I1210 12:16:29.859392 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:16:29 crc kubenswrapper[4689]: I1210 12:16:29.859415 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:29Z","lastTransitionTime":"2025-12-10T12:16:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 12:16:29 crc kubenswrapper[4689]: I1210 12:16:29.961912 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:16:29 crc kubenswrapper[4689]: I1210 12:16:29.962193 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:16:29 crc kubenswrapper[4689]: I1210 12:16:29.962212 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:16:29 crc kubenswrapper[4689]: I1210 12:16:29.962234 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:16:29 crc kubenswrapper[4689]: I1210 12:16:29.962250 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:29Z","lastTransitionTime":"2025-12-10T12:16:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 12:16:30 crc kubenswrapper[4689]: I1210 12:16:30.065618 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:16:30 crc kubenswrapper[4689]: I1210 12:16:30.065694 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:16:30 crc kubenswrapper[4689]: I1210 12:16:30.065712 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:16:30 crc kubenswrapper[4689]: I1210 12:16:30.065739 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:16:30 crc kubenswrapper[4689]: I1210 12:16:30.065761 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:30Z","lastTransitionTime":"2025-12-10T12:16:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 12:16:30 crc kubenswrapper[4689]: I1210 12:16:30.168576 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:16:30 crc kubenswrapper[4689]: I1210 12:16:30.168650 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:16:30 crc kubenswrapper[4689]: I1210 12:16:30.168672 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:16:30 crc kubenswrapper[4689]: I1210 12:16:30.168699 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:16:30 crc kubenswrapper[4689]: I1210 12:16:30.168722 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:30Z","lastTransitionTime":"2025-12-10T12:16:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 12:16:30 crc kubenswrapper[4689]: I1210 12:16:30.272159 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:16:30 crc kubenswrapper[4689]: I1210 12:16:30.272247 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:16:30 crc kubenswrapper[4689]: I1210 12:16:30.272279 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:16:30 crc kubenswrapper[4689]: I1210 12:16:30.272310 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:16:30 crc kubenswrapper[4689]: I1210 12:16:30.272332 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:30Z","lastTransitionTime":"2025-12-10T12:16:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 12:16:30 crc kubenswrapper[4689]: I1210 12:16:30.376117 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:16:30 crc kubenswrapper[4689]: I1210 12:16:30.376176 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:16:30 crc kubenswrapper[4689]: I1210 12:16:30.376196 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:16:30 crc kubenswrapper[4689]: I1210 12:16:30.376218 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:16:30 crc kubenswrapper[4689]: I1210 12:16:30.376274 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:30Z","lastTransitionTime":"2025-12-10T12:16:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 12:16:30 crc kubenswrapper[4689]: I1210 12:16:30.479596 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:16:30 crc kubenswrapper[4689]: I1210 12:16:30.479671 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:16:30 crc kubenswrapper[4689]: I1210 12:16:30.479687 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:16:30 crc kubenswrapper[4689]: I1210 12:16:30.479709 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:16:30 crc kubenswrapper[4689]: I1210 12:16:30.479725 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:30Z","lastTransitionTime":"2025-12-10T12:16:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 12:16:30 crc kubenswrapper[4689]: I1210 12:16:30.497242 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 10 12:16:30 crc kubenswrapper[4689]: E1210 12:16:30.497404 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 10 12:16:30 crc kubenswrapper[4689]: I1210 12:16:30.583097 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:16:30 crc kubenswrapper[4689]: I1210 12:16:30.583160 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:16:30 crc kubenswrapper[4689]: I1210 12:16:30.583178 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:16:30 crc kubenswrapper[4689]: I1210 12:16:30.583203 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:16:30 crc kubenswrapper[4689]: I1210 12:16:30.583221 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:30Z","lastTransitionTime":"2025-12-10T12:16:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 12:16:30 crc kubenswrapper[4689]: I1210 12:16:30.686583 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:16:30 crc kubenswrapper[4689]: I1210 12:16:30.686650 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:16:30 crc kubenswrapper[4689]: I1210 12:16:30.686668 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:16:30 crc kubenswrapper[4689]: I1210 12:16:30.686692 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:16:30 crc kubenswrapper[4689]: I1210 12:16:30.686709 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:30Z","lastTransitionTime":"2025-12-10T12:16:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 12:16:30 crc kubenswrapper[4689]: I1210 12:16:30.790412 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:16:30 crc kubenswrapper[4689]: I1210 12:16:30.790499 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:16:30 crc kubenswrapper[4689]: I1210 12:16:30.790522 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:16:30 crc kubenswrapper[4689]: I1210 12:16:30.790552 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:16:30 crc kubenswrapper[4689]: I1210 12:16:30.790573 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:30Z","lastTransitionTime":"2025-12-10T12:16:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 12:16:30 crc kubenswrapper[4689]: I1210 12:16:30.893844 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:16:30 crc kubenswrapper[4689]: I1210 12:16:30.893906 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:16:30 crc kubenswrapper[4689]: I1210 12:16:30.893925 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:16:30 crc kubenswrapper[4689]: I1210 12:16:30.893947 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:16:30 crc kubenswrapper[4689]: I1210 12:16:30.893964 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:30Z","lastTransitionTime":"2025-12-10T12:16:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 12:16:31 crc kubenswrapper[4689]: I1210 12:16:30.998653 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:16:31 crc kubenswrapper[4689]: I1210 12:16:31.001363 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:16:31 crc kubenswrapper[4689]: I1210 12:16:31.001394 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:16:31 crc kubenswrapper[4689]: I1210 12:16:31.001427 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:16:31 crc kubenswrapper[4689]: I1210 12:16:31.001450 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:31Z","lastTransitionTime":"2025-12-10T12:16:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 12:16:31 crc kubenswrapper[4689]: I1210 12:16:31.104885 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:16:31 crc kubenswrapper[4689]: I1210 12:16:31.105036 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:16:31 crc kubenswrapper[4689]: I1210 12:16:31.105066 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:16:31 crc kubenswrapper[4689]: I1210 12:16:31.105089 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:16:31 crc kubenswrapper[4689]: I1210 12:16:31.105106 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:31Z","lastTransitionTime":"2025-12-10T12:16:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 12:16:31 crc kubenswrapper[4689]: I1210 12:16:31.208312 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:16:31 crc kubenswrapper[4689]: I1210 12:16:31.208386 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:16:31 crc kubenswrapper[4689]: I1210 12:16:31.208409 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:16:31 crc kubenswrapper[4689]: I1210 12:16:31.208438 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:16:31 crc kubenswrapper[4689]: I1210 12:16:31.208461 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:31Z","lastTransitionTime":"2025-12-10T12:16:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 12:16:31 crc kubenswrapper[4689]: I1210 12:16:31.310792 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:16:31 crc kubenswrapper[4689]: I1210 12:16:31.310847 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:16:31 crc kubenswrapper[4689]: I1210 12:16:31.310862 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:16:31 crc kubenswrapper[4689]: I1210 12:16:31.310878 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:16:31 crc kubenswrapper[4689]: I1210 12:16:31.310887 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:31Z","lastTransitionTime":"2025-12-10T12:16:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 12:16:31 crc kubenswrapper[4689]: I1210 12:16:31.413584 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:16:31 crc kubenswrapper[4689]: I1210 12:16:31.413663 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:16:31 crc kubenswrapper[4689]: I1210 12:16:31.413678 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:16:31 crc kubenswrapper[4689]: I1210 12:16:31.413704 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:16:31 crc kubenswrapper[4689]: I1210 12:16:31.413719 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:31Z","lastTransitionTime":"2025-12-10T12:16:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 12:16:31 crc kubenswrapper[4689]: I1210 12:16:31.497351 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 10 12:16:31 crc kubenswrapper[4689]: E1210 12:16:31.497596 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 10 12:16:31 crc kubenswrapper[4689]: I1210 12:16:31.497931 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 10 12:16:31 crc kubenswrapper[4689]: E1210 12:16:31.498076 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 10 12:16:31 crc kubenswrapper[4689]: I1210 12:16:31.498195 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2h8hs" Dec 10 12:16:31 crc kubenswrapper[4689]: E1210 12:16:31.498268 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-2h8hs" podUID="3a7c472c-c2dd-4a9d-aefb-48ac5cc196f8" Dec 10 12:16:31 crc kubenswrapper[4689]: I1210 12:16:31.517280 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:16:31 crc kubenswrapper[4689]: I1210 12:16:31.517332 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:16:31 crc kubenswrapper[4689]: I1210 12:16:31.517344 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:16:31 crc kubenswrapper[4689]: I1210 12:16:31.517364 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:16:31 crc kubenswrapper[4689]: I1210 12:16:31.517376 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:31Z","lastTransitionTime":"2025-12-10T12:16:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 12:16:31 crc kubenswrapper[4689]: I1210 12:16:31.607445 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/3a7c472c-c2dd-4a9d-aefb-48ac5cc196f8-metrics-certs\") pod \"network-metrics-daemon-2h8hs\" (UID: \"3a7c472c-c2dd-4a9d-aefb-48ac5cc196f8\") " pod="openshift-multus/network-metrics-daemon-2h8hs" Dec 10 12:16:31 crc kubenswrapper[4689]: E1210 12:16:31.607660 4689 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 10 12:16:31 crc kubenswrapper[4689]: E1210 12:16:31.607783 4689 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3a7c472c-c2dd-4a9d-aefb-48ac5cc196f8-metrics-certs podName:3a7c472c-c2dd-4a9d-aefb-48ac5cc196f8 nodeName:}" failed. No retries permitted until 2025-12-10 12:16:47.607754475 +0000 UTC m=+75.395835653 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/3a7c472c-c2dd-4a9d-aefb-48ac5cc196f8-metrics-certs") pod "network-metrics-daemon-2h8hs" (UID: "3a7c472c-c2dd-4a9d-aefb-48ac5cc196f8") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 10 12:16:31 crc kubenswrapper[4689]: I1210 12:16:31.620136 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:16:31 crc kubenswrapper[4689]: I1210 12:16:31.620207 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:16:31 crc kubenswrapper[4689]: I1210 12:16:31.620219 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:16:31 crc kubenswrapper[4689]: I1210 12:16:31.620235 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:16:31 crc kubenswrapper[4689]: I1210 12:16:31.620246 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:31Z","lastTransitionTime":"2025-12-10T12:16:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 12:16:31 crc kubenswrapper[4689]: I1210 12:16:31.722625 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:16:31 crc kubenswrapper[4689]: I1210 12:16:31.722717 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:16:31 crc kubenswrapper[4689]: I1210 12:16:31.722730 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:16:31 crc kubenswrapper[4689]: I1210 12:16:31.722747 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:16:31 crc kubenswrapper[4689]: I1210 12:16:31.722763 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:31Z","lastTransitionTime":"2025-12-10T12:16:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 12:16:31 crc kubenswrapper[4689]: I1210 12:16:31.826325 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:16:31 crc kubenswrapper[4689]: I1210 12:16:31.826364 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:16:31 crc kubenswrapper[4689]: I1210 12:16:31.826374 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:16:31 crc kubenswrapper[4689]: I1210 12:16:31.826393 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:16:31 crc kubenswrapper[4689]: I1210 12:16:31.826405 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:31Z","lastTransitionTime":"2025-12-10T12:16:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 12:16:31 crc kubenswrapper[4689]: I1210 12:16:31.929600 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:16:31 crc kubenswrapper[4689]: I1210 12:16:31.929658 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:16:31 crc kubenswrapper[4689]: I1210 12:16:31.929677 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:16:31 crc kubenswrapper[4689]: I1210 12:16:31.929707 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:16:31 crc kubenswrapper[4689]: I1210 12:16:31.929725 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:31Z","lastTransitionTime":"2025-12-10T12:16:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 12:16:32 crc kubenswrapper[4689]: I1210 12:16:32.033144 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:16:32 crc kubenswrapper[4689]: I1210 12:16:32.033196 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:16:32 crc kubenswrapper[4689]: I1210 12:16:32.033208 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:16:32 crc kubenswrapper[4689]: I1210 12:16:32.033227 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:16:32 crc kubenswrapper[4689]: I1210 12:16:32.033243 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:32Z","lastTransitionTime":"2025-12-10T12:16:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 12:16:32 crc kubenswrapper[4689]: I1210 12:16:32.136662 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:16:32 crc kubenswrapper[4689]: I1210 12:16:32.136729 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:16:32 crc kubenswrapper[4689]: I1210 12:16:32.136753 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:16:32 crc kubenswrapper[4689]: I1210 12:16:32.136788 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:16:32 crc kubenswrapper[4689]: I1210 12:16:32.136812 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:32Z","lastTransitionTime":"2025-12-10T12:16:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 12:16:32 crc kubenswrapper[4689]: I1210 12:16:32.239600 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:16:32 crc kubenswrapper[4689]: I1210 12:16:32.239663 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:16:32 crc kubenswrapper[4689]: I1210 12:16:32.239680 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:16:32 crc kubenswrapper[4689]: I1210 12:16:32.239707 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:16:32 crc kubenswrapper[4689]: I1210 12:16:32.239725 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:32Z","lastTransitionTime":"2025-12-10T12:16:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 12:16:32 crc kubenswrapper[4689]: I1210 12:16:32.343143 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:16:32 crc kubenswrapper[4689]: I1210 12:16:32.343209 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:16:32 crc kubenswrapper[4689]: I1210 12:16:32.343225 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:16:32 crc kubenswrapper[4689]: I1210 12:16:32.343248 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:16:32 crc kubenswrapper[4689]: I1210 12:16:32.343265 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:32Z","lastTransitionTime":"2025-12-10T12:16:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 12:16:32 crc kubenswrapper[4689]: I1210 12:16:32.445528 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:16:32 crc kubenswrapper[4689]: I1210 12:16:32.445580 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:16:32 crc kubenswrapper[4689]: I1210 12:16:32.445593 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:16:32 crc kubenswrapper[4689]: I1210 12:16:32.445611 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:16:32 crc kubenswrapper[4689]: I1210 12:16:32.445622 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:32Z","lastTransitionTime":"2025-12-10T12:16:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 12:16:32 crc kubenswrapper[4689]: I1210 12:16:32.446773 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:16:32 crc kubenswrapper[4689]: I1210 12:16:32.446828 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:16:32 crc kubenswrapper[4689]: I1210 12:16:32.446840 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:16:32 crc kubenswrapper[4689]: I1210 12:16:32.446854 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:16:32 crc kubenswrapper[4689]: I1210 12:16:32.446864 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:32Z","lastTransitionTime":"2025-12-10T12:16:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 12:16:32 crc kubenswrapper[4689]: E1210 12:16:32.462319 4689 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T12:16:32Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T12:16:32Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:32Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T12:16:32Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T12:16:32Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:32Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"488f3058-98a0-4e39-b016-529d0c992401\\\",\\\"systemUUID\\\":\\\"41a6821f-a04f-4a5d-a8e5-790b0745d6fd\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:32Z is after 
2025-08-24T17:21:41Z" Dec 10 12:16:32 crc kubenswrapper[4689]: I1210 12:16:32.466628 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:16:32 crc kubenswrapper[4689]: I1210 12:16:32.466671 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:16:32 crc kubenswrapper[4689]: I1210 12:16:32.466682 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:16:32 crc kubenswrapper[4689]: I1210 12:16:32.466698 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:16:32 crc kubenswrapper[4689]: I1210 12:16:32.466710 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:32Z","lastTransitionTime":"2025-12-10T12:16:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 12:16:32 crc kubenswrapper[4689]: E1210 12:16:32.481343 4689 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T12:16:32Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T12:16:32Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:32Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T12:16:32Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T12:16:32Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:32Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}], [images, nodeInfo, and runtimeHandlers identical to the 12:16:32.462319 entry above]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:32Z is after 
2025-08-24T17:21:41Z" Dec 10 12:16:32 crc kubenswrapper[4689]: I1210 12:16:32.486363 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:16:32 crc kubenswrapper[4689]: I1210 12:16:32.486403 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:16:32 crc kubenswrapper[4689]: I1210 12:16:32.486412 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:16:32 crc kubenswrapper[4689]: I1210 12:16:32.486429 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:16:32 crc kubenswrapper[4689]: I1210 12:16:32.486439 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:32Z","lastTransitionTime":"2025-12-10T12:16:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 12:16:32 crc kubenswrapper[4689]: I1210 12:16:32.497797 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 10 12:16:32 crc kubenswrapper[4689]: E1210 12:16:32.497945 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 10 12:16:32 crc kubenswrapper[4689]: E1210 12:16:32.504735 4689 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T12:16:32Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T12:16:32Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:32Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T12:16:32Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T12:16:32Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:32Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}], [images, nodeInfo, and runtimeHandlers identical to the 12:16:32.462319 entry above]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:32Z is after 
2025-08-24T17:21:41Z" Dec 10 12:16:32 crc kubenswrapper[4689]: I1210 12:16:32.508742 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:16:32 crc kubenswrapper[4689]: I1210 12:16:32.508796 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:16:32 crc kubenswrapper[4689]: I1210 12:16:32.508805 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:16:32 crc kubenswrapper[4689]: I1210 12:16:32.508818 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:16:32 crc kubenswrapper[4689]: I1210 12:16:32.508827 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:32Z","lastTransitionTime":"2025-12-10T12:16:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 12:16:32 crc kubenswrapper[4689]: I1210 12:16:32.517007 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f310ccc9-5093-44fe-8271-32ffed1e4596\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be1e74c974ed4454098265113e97e5d5d27b2ddb6ff5d89cb891c1686290f8f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://14c34a55854008d99289ec6ff7df4ab7b8c60658569094a88e6740ceb28714d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-synce
r\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://66af8611123a29e82941af3a1facb45f06c358ffb277bd037af034ae97364d56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b2e1ebfdbacde06d36ebe61a04c2977a980def50e80715de5f8a9a292e661915\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b2e1ebfdbacde06d36ebe61a04c2977a980def50e80715de5f8a9a292e661915\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T12:15:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T12:15:33Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T12:15:32Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:32Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:32 crc kubenswrapper[4689]: E1210 12:16:32.526112 4689 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T12:16:32Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"}, [remaining conditions and image list identical to the 12:16:32.462319 entry above, through] {\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":45063
7738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"488f3058-98a0-4e39-b016-529d0c992401\\\",\\\"systemUUID\\\":\\\"41a6821f-a04f-4a5d-a8e5-790b0745d6fd\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:32Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:32 crc kubenswrapper[4689]: I1210 12:16:32.528746 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef7248d1db79267842514a13623e9e08f8e412e4af33a70b4fceafe7a9e43392\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:32Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:32 crc kubenswrapper[4689]: I1210 12:16:32.530713 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:16:32 crc 
kubenswrapper[4689]: I1210 12:16:32.530761 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:16:32 crc kubenswrapper[4689]: I1210 12:16:32.530773 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:16:32 crc kubenswrapper[4689]: I1210 12:16:32.530789 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:16:32 crc kubenswrapper[4689]: I1210 12:16:32.530803 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:32Z","lastTransitionTime":"2025-12-10T12:16:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 12:16:32 crc kubenswrapper[4689]: I1210 12:16:32.540122 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:32Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:32 crc kubenswrapper[4689]: E1210 12:16:32.546849 4689 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T12:16:32Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T12:16:32Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:32Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T12:16:32Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T12:16:32Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:32Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"488f3058-98a0-4e39-b016-529d0c992401\\\",\\\"systemUUID\\\":\\\"41a6821f-a04f-4a5d-a8e5-790b0745d6fd\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:32Z is after 
2025-08-24T17:21:41Z" Dec 10 12:16:32 crc kubenswrapper[4689]: E1210 12:16:32.547024 4689 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Dec 10 12:16:32 crc kubenswrapper[4689]: I1210 12:16:32.548492 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:16:32 crc kubenswrapper[4689]: I1210 12:16:32.548570 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:16:32 crc kubenswrapper[4689]: I1210 12:16:32.548584 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:16:32 crc kubenswrapper[4689]: I1210 12:16:32.548612 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:16:32 crc kubenswrapper[4689]: I1210 12:16:32.548622 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:32Z","lastTransitionTime":"2025-12-10T12:16:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 12:16:32 crc kubenswrapper[4689]: I1210 12:16:32.553542 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-r6wmt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3713b4f8-2ee3-4078-859a-dca17076f9a6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://41b7cefca05f1180d687337ba2a408d0db12aad394a7e9619e004758e5d79ca0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:16:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mount
Path\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lmrn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T12:16:01Z\\\"}}\" for pod \"openshift-multus\"/\"multus-r6wmt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:32Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:32 crc kubenswrapper[4689]: I1210 12:16:32.564092 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-dk9hq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4aff4b91-a4af-46bd-93ba-a7c1356fc498\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e400561fd4aa4adc6f0733a71b55aff1a6163a71bf34147aeeb41f95001d8222\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:16:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c2rw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T12:16:04Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-dk9hq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:32Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:32 crc kubenswrapper[4689]: I1210 12:16:32.575790 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-2h8hs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a7c472c-c2dd-4a9d-aefb-48ac5cc196f8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kcnt5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kcnt5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T12:16:15Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-2h8hs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:32Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:32 crc kubenswrapper[4689]: I1210 12:16:32.586674 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"948e2421-6bdf-45d9-b484-3d7cfcbeff5f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73fb5dc9c58a2a2027866853aecf404bcc2babdb9e5bbd3cf0efcb16f38ba351\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e167a00c3a4e40154ca58094cba41a896104fd81044158589ed4d7a06dba9e7d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://394f62bb6c4fa6d9b63f8404087ff96594e2e38240c6c7886ad71fbb7863681f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d3818f3ff6d23a4b5f19b82c3ce7c6a9a0275f981966b78d0f1d5977acfc5da\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cc1b41d65298f8986fa477bbe4462be91f61b46be4420fac49fe9b1cff91d4d3\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-10T12:15:57Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1210 12:15:46.775770 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1210 12:15:46.777515 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-772356655/tls.crt::/tmp/serving-cert-772356655/tls.key\\\\\\\"\\\\nI1210 12:15:55.300683 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1210 12:15:55.311834 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1210 12:15:55.311870 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1210 12:15:55.311907 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1210 12:15:55.311937 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1210 12:15:55.334823 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1210 12:15:55.334877 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1210 12:15:55.334889 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1210 12:15:55.334913 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1210 12:15:55.334923 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1210 12:15:55.334932 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1210 12:15:55.334948 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1210 12:15:55.335346 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1210 12:15:55.337241 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-10T12:15:36Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://63c6b92fa349dafbaaa4e2700676f8c1c53627fc703dba25853c7457bf7eb439\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:34Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9e2628add435f4ba7d9b92d1f4e9cf619c37e8dc954d6fce13ef72581ae775e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9e2628add435f4ba7d9b92d1f4e9cf619c37e8dc954d6fce13ef72581ae775e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T12:15:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T12:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T12:15:32Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:32Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:32 crc kubenswrapper[4689]: I1210 12:16:32.597147 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"18bf8be2-3022-4c8b-a062-e12ddac113a8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://44524b486bd4c6b69eb1219605d0545eaa129605484dfbca87ca741ce71c2991\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://604acbb51bb890eb420955051c89aa5a8f2005ce63f1fe39c09c35dc3c8b4470\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d4addfb661bd4ba43817e6e5db29f30589839e5afe46a8f271e43c98920efdc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a68967b286ccc9c486e05b03201574bfa811ee68b0cbcbb2b0b1a379f5c1bab\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T12:15:32Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:32Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:32 crc kubenswrapper[4689]: I1210 12:16:32.610844 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:57Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:32Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:32 crc kubenswrapper[4689]: I1210 12:16:32.627110 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-db6zk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a41ebdcd-910f-4669-992d-296e1a92162b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://21eff13bf08e5631078b18a6e63eea07f7ae0881f4d340b92273d47e443fc71c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:16:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68btn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ffc5e2e1cb81aa530273fc5c44b4eb25b760dfffa203e312280e7a1c0150505\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\
\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:16:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68btn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T12:16:01Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-db6zk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:32Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:32 crc kubenswrapper[4689]: I1210 12:16:32.641790 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-k7kbl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bf732b59-88ab-4673-9d0c-e479b9138b30\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6885c72d7b303bd0ad949b6b9f582f3c1a503d3ac4d1871ce164351cc5231233\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:16:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6jq2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b32c1925876e067185b141b3e107d582b5ae3b422965cded935fc67935907244\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\
"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b32c1925876e067185b141b3e107d582b5ae3b422965cded935fc67935907244\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T12:16:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T12:16:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6jq2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://56edfde6c25c67ed13c859224563601ca60ff86122c0ccfd0eafd009f56c2c6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://56edfde6c25c67ed13c859224563601ca60ff86122c0ccfd0eafd009f56c2c6a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T12:16:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T12:16:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6jq2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6972bdb1edc7d0216e984f43dd93319f5fbe91dbbf251fddcab234566c12694\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6972bdb1edc7d0216e984f43dd93319f5fbe91dbbf251fddcab234566c12694\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T12:16:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T12:16:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount
\\\",\\\"name\\\":\\\"kube-api-access-6jq2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://92ed0a4af61a7d20d96a8628e34b3b07cf93fb88bb8c67a2db58c140977d4b8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92ed0a4af61a7d20d96a8628e34b3b07cf93fb88bb8c67a2db58c140977d4b8f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T12:16:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T12:16:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6jq2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a00343d8419f3bd26ca5eebc37892fa34c893db822a472b2d16b79b9a886130\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3a00343d8419f3bd26ca5eebc37892fa34c893db822a472b2d16b79b9a886130\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T12:16:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T12:16:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6jq2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae1b0abfe7d118fce3906b0fb8609b89bb5a7d9a72e77c989248a37fbcaca589\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ae1b0abfe7d118fce3906b0fb8609b89bb5a7d9a72e77c989248a37fbcaca589\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T12:16:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T12:16:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\
"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6jq2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T12:16:01Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-k7kbl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:32Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:32 crc kubenswrapper[4689]: I1210 12:16:32.650684 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:16:32 crc kubenswrapper[4689]: I1210 12:16:32.650712 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:16:32 crc kubenswrapper[4689]: I1210 12:16:32.650720 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:16:32 crc kubenswrapper[4689]: I1210 12:16:32.650733 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:16:32 crc kubenswrapper[4689]: I1210 12:16:32.650742 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:32Z","lastTransitionTime":"2025-12-10T12:16:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 12:16:32 crc kubenswrapper[4689]: I1210 12:16:32.659142 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-d5s24" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b434fe8e-c4c2-4979-a9b6-8561523c2d9d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4dd21d5fefe31117dd9f13896400256a11840488886f8455dc177c0389e942a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:16:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgcnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://69f1e8960af0a51f61c14bd54dc3d3be7189911242699061b6a5e265bf23da21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:16:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgcnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\
":\\\"cri-o://6523d85b2b13f0025f328e66fc54d988a262d66575656c50d792bebc6a422301\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:16:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgcnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4faf578e1c2dcab798cd856befbd2dfc061f7e003ead2304d437041d5ee1dd07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:16:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgcnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eebe28cf16767ec011f1e571cdb5f6babf931f4444e92fccf2f1bb5aa5deeec9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:16:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgcnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://17d173f3388ef7471022163cbdf3e0a2454e42b1ed621b15512414504b2d9723\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:16:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgcnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f88a12855fe2301c439492cf5773c7088e33a7d6bcca5354ebcfd8d7a64bcf9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f88a12855fe2301c439492cf5773c7088e33a7d6bcca5354ebcfd8d7a64bcf9\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-10T12:16:26Z\\\",\\\"message\\\":\\\"all/v1/apis/informers/externalversions/factory.go:140\\\\nI1210 12:16:26.477038 6326 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1210 12:16:26.477070 6326 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1210 12:16:26.477096 6326 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1210 12:16:26.477127 6326 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1210 12:16:26.477136 6326 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1210 12:16:26.477156 6326 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1210 12:16:26.477157 6326 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1210 12:16:26.477180 6326 handler.go:208] Removed *v1.Node event handler 2\\\\nI1210 12:16:26.477212 6326 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1210 12:16:26.477246 6326 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1210 12:16:26.477297 6326 handler.go:208] Removed *v1.Node event handler 7\\\\nI1210 12:16:26.477299 6326 factory.go:656] Stopping watch factory\\\\nI1210 12:16:26.477310 6326 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1210 12:16:26.477326 6326 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1210 12:16:26.477355 6326 handler.go:208] Removed *v1.Namespace ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-10T12:16:25Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-d5s24_openshift-ovn-kubernetes(b434fe8e-c4c2-4979-a9b6-8561523c2d9d)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgcnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e28b779b8a8693084ab0997decc53b3a8eba61bef24ac90ff251d915f086568c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:16:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgcnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5b9a4f871b33ab8d7dbb8148e8a118858ae0d529510a8aebda0a3d9db97c137\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b5b9a4f871b33ab8d7dbb8148e8a118858ae0d529510a8aebda0a3d9db97c137\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T12:16:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T12:16:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgcnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T12:16:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-d5s24\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:32Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:32 crc kubenswrapper[4689]: I1210 12:16:32.669801 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06f8726511d5ea564b457eeb6de5b75275d78ee946595d8fbe37e575f3a1b4ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://879d2e122b3f677f755d79b93cd0abed60360a7b58bd685fbc229157c3358bbf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17
b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:32Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:32 crc kubenswrapper[4689]: I1210 12:16:32.681380 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:57Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:32Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:32 crc kubenswrapper[4689]: I1210 12:16:32.690718 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-6ffqh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7908b49b-e3d2-4d30-95e0-467f5542d445\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34ba576e7b07f1a070450f2a750ed7f9e1ed7b3023f13340ffadbdbbc2f564bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:16:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8t2fq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5081b88bdb2e39135bca6cf70283aeb3f9541d28ab7336c5fa9d17f28496f7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2
099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:16:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8t2fq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T12:16:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-6ffqh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:32Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:32 crc kubenswrapper[4689]: I1210 12:16:32.699800 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81ec49dd709fb88fdd4a831be61a82471be37ad10b0dca1cb0e34468ef5e0e3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:16:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:32Z is after 
2025-08-24T17:21:41Z" Dec 10 12:16:32 crc kubenswrapper[4689]: I1210 12:16:32.707752 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-77s7q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9f25cd73-d88f-4d52-93b6-483589dc4ac4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8ffa3e3c20704e59130d99f82bf7c73982dadfcd06b8b666265324aeb9115ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:16:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kwnnt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T12:16:01Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-77s7q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:32Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:32 crc kubenswrapper[4689]: I1210 12:16:32.753151 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:16:32 crc kubenswrapper[4689]: I1210 12:16:32.753251 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:16:32 crc kubenswrapper[4689]: I1210 12:16:32.753261 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:16:32 crc kubenswrapper[4689]: I1210 12:16:32.753277 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:16:32 crc kubenswrapper[4689]: I1210 12:16:32.753286 4689 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:32Z","lastTransitionTime":"2025-12-10T12:16:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 12:16:32 crc kubenswrapper[4689]: I1210 12:16:32.855826 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:16:32 crc kubenswrapper[4689]: I1210 12:16:32.855882 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:16:32 crc kubenswrapper[4689]: I1210 12:16:32.855891 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:16:32 crc kubenswrapper[4689]: I1210 12:16:32.855906 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:16:32 crc kubenswrapper[4689]: I1210 12:16:32.855915 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:32Z","lastTransitionTime":"2025-12-10T12:16:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 12:16:32 crc kubenswrapper[4689]: I1210 12:16:32.958965 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:16:32 crc kubenswrapper[4689]: I1210 12:16:32.959058 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:16:32 crc kubenswrapper[4689]: I1210 12:16:32.959075 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:16:32 crc kubenswrapper[4689]: I1210 12:16:32.959099 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:16:32 crc kubenswrapper[4689]: I1210 12:16:32.959171 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:32Z","lastTransitionTime":"2025-12-10T12:16:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 12:16:33 crc kubenswrapper[4689]: I1210 12:16:33.062746 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:16:33 crc kubenswrapper[4689]: I1210 12:16:33.062808 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:16:33 crc kubenswrapper[4689]: I1210 12:16:33.062831 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:16:33 crc kubenswrapper[4689]: I1210 12:16:33.062862 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:16:33 crc kubenswrapper[4689]: I1210 12:16:33.062883 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:33Z","lastTransitionTime":"2025-12-10T12:16:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 12:16:33 crc kubenswrapper[4689]: I1210 12:16:33.165533 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:16:33 crc kubenswrapper[4689]: I1210 12:16:33.165586 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:16:33 crc kubenswrapper[4689]: I1210 12:16:33.165602 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:16:33 crc kubenswrapper[4689]: I1210 12:16:33.165623 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:16:33 crc kubenswrapper[4689]: I1210 12:16:33.165638 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:33Z","lastTransitionTime":"2025-12-10T12:16:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 12:16:33 crc kubenswrapper[4689]: I1210 12:16:33.267758 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:16:33 crc kubenswrapper[4689]: I1210 12:16:33.267813 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:16:33 crc kubenswrapper[4689]: I1210 12:16:33.267828 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:16:33 crc kubenswrapper[4689]: I1210 12:16:33.267846 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:16:33 crc kubenswrapper[4689]: I1210 12:16:33.267859 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:33Z","lastTransitionTime":"2025-12-10T12:16:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 12:16:33 crc kubenswrapper[4689]: I1210 12:16:33.371007 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:16:33 crc kubenswrapper[4689]: I1210 12:16:33.371064 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:16:33 crc kubenswrapper[4689]: I1210 12:16:33.371078 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:16:33 crc kubenswrapper[4689]: I1210 12:16:33.371103 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:16:33 crc kubenswrapper[4689]: I1210 12:16:33.371118 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:33Z","lastTransitionTime":"2025-12-10T12:16:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 12:16:33 crc kubenswrapper[4689]: I1210 12:16:33.473338 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:16:33 crc kubenswrapper[4689]: I1210 12:16:33.473370 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:16:33 crc kubenswrapper[4689]: I1210 12:16:33.473380 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:16:33 crc kubenswrapper[4689]: I1210 12:16:33.473396 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:16:33 crc kubenswrapper[4689]: I1210 12:16:33.473407 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:33Z","lastTransitionTime":"2025-12-10T12:16:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 12:16:33 crc kubenswrapper[4689]: I1210 12:16:33.497146 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 10 12:16:33 crc kubenswrapper[4689]: I1210 12:16:33.497206 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2h8hs" Dec 10 12:16:33 crc kubenswrapper[4689]: E1210 12:16:33.497250 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 10 12:16:33 crc kubenswrapper[4689]: E1210 12:16:33.497382 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2h8hs" podUID="3a7c472c-c2dd-4a9d-aefb-48ac5cc196f8" Dec 10 12:16:33 crc kubenswrapper[4689]: I1210 12:16:33.497476 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 10 12:16:33 crc kubenswrapper[4689]: E1210 12:16:33.497544 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 10 12:16:33 crc kubenswrapper[4689]: I1210 12:16:33.576661 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:16:33 crc kubenswrapper[4689]: I1210 12:16:33.576739 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:16:33 crc kubenswrapper[4689]: I1210 12:16:33.576750 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:16:33 crc kubenswrapper[4689]: I1210 12:16:33.576772 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:16:33 crc kubenswrapper[4689]: I1210 12:16:33.576787 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:33Z","lastTransitionTime":"2025-12-10T12:16:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 12:16:33 crc kubenswrapper[4689]: I1210 12:16:33.678943 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:16:33 crc kubenswrapper[4689]: I1210 12:16:33.678994 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:16:33 crc kubenswrapper[4689]: I1210 12:16:33.679009 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:16:33 crc kubenswrapper[4689]: I1210 12:16:33.679026 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:16:33 crc kubenswrapper[4689]: I1210 12:16:33.679037 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:33Z","lastTransitionTime":"2025-12-10T12:16:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 12:16:33 crc kubenswrapper[4689]: I1210 12:16:33.782632 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:16:33 crc kubenswrapper[4689]: I1210 12:16:33.782700 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:16:33 crc kubenswrapper[4689]: I1210 12:16:33.782712 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:16:33 crc kubenswrapper[4689]: I1210 12:16:33.782740 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:16:33 crc kubenswrapper[4689]: I1210 12:16:33.782755 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:33Z","lastTransitionTime":"2025-12-10T12:16:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 12:16:33 crc kubenswrapper[4689]: I1210 12:16:33.885726 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:16:33 crc kubenswrapper[4689]: I1210 12:16:33.885774 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:16:33 crc kubenswrapper[4689]: I1210 12:16:33.885785 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:16:33 crc kubenswrapper[4689]: I1210 12:16:33.885802 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:16:33 crc kubenswrapper[4689]: I1210 12:16:33.885813 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:33Z","lastTransitionTime":"2025-12-10T12:16:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 12:16:33 crc kubenswrapper[4689]: I1210 12:16:33.988133 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:16:33 crc kubenswrapper[4689]: I1210 12:16:33.988207 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:16:33 crc kubenswrapper[4689]: I1210 12:16:33.988220 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:16:33 crc kubenswrapper[4689]: I1210 12:16:33.988260 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:16:33 crc kubenswrapper[4689]: I1210 12:16:33.988274 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:33Z","lastTransitionTime":"2025-12-10T12:16:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 12:16:34 crc kubenswrapper[4689]: I1210 12:16:34.090640 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:16:34 crc kubenswrapper[4689]: I1210 12:16:34.090685 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:16:34 crc kubenswrapper[4689]: I1210 12:16:34.090695 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:16:34 crc kubenswrapper[4689]: I1210 12:16:34.090710 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:16:34 crc kubenswrapper[4689]: I1210 12:16:34.090719 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:34Z","lastTransitionTime":"2025-12-10T12:16:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 12:16:34 crc kubenswrapper[4689]: I1210 12:16:34.193359 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:16:34 crc kubenswrapper[4689]: I1210 12:16:34.193433 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:16:34 crc kubenswrapper[4689]: I1210 12:16:34.193445 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:16:34 crc kubenswrapper[4689]: I1210 12:16:34.193463 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:16:34 crc kubenswrapper[4689]: I1210 12:16:34.193474 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:34Z","lastTransitionTime":"2025-12-10T12:16:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 12:16:34 crc kubenswrapper[4689]: I1210 12:16:34.295443 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:16:34 crc kubenswrapper[4689]: I1210 12:16:34.295490 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:16:34 crc kubenswrapper[4689]: I1210 12:16:34.295499 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:16:34 crc kubenswrapper[4689]: I1210 12:16:34.295533 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:16:34 crc kubenswrapper[4689]: I1210 12:16:34.295560 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:34Z","lastTransitionTime":"2025-12-10T12:16:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 12:16:34 crc kubenswrapper[4689]: I1210 12:16:34.398397 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:16:34 crc kubenswrapper[4689]: I1210 12:16:34.398445 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:16:34 crc kubenswrapper[4689]: I1210 12:16:34.398463 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:16:34 crc kubenswrapper[4689]: I1210 12:16:34.398481 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:16:34 crc kubenswrapper[4689]: I1210 12:16:34.398492 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:34Z","lastTransitionTime":"2025-12-10T12:16:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 12:16:34 crc kubenswrapper[4689]: I1210 12:16:34.498060 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 10 12:16:34 crc kubenswrapper[4689]: E1210 12:16:34.498208 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
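[Editor's note: the underlying failure visible earlier in this boot is the expired serving certificate for the pod.network-node-identity.openshift.io webhook at https://127.0.0.1:9743 ("x509: certificate has expired ... is after 2025-08-24T17:21:41Z"), which blocks every pod status patch. A quick way to confirm the certificate's validity window from the node, assuming shell access and that the webhook is listening; the endpoint is the one named in the log and the openssl flags are standard:

    # Fetch the serving cert from the webhook endpoint and print
    # its notBefore/notAfter dates.
    openssl s_client -connect 127.0.0.1:9743 </dev/null 2>/dev/null \
      | openssl x509 -noout -dates

On CRC specifically, certificates that expired while the VM was powered off are normally rotated automatically a few minutes after startup, so this class of error often clears on its own once the cluster operators catch up.]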
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 10 12:16:34 crc kubenswrapper[4689]: I1210 12:16:34.500732 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:16:34 crc kubenswrapper[4689]: I1210 12:16:34.500795 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:16:34 crc kubenswrapper[4689]: I1210 12:16:34.500809 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:16:34 crc kubenswrapper[4689]: I1210 12:16:34.500825 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:16:34 crc kubenswrapper[4689]: I1210 12:16:34.500837 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:34Z","lastTransitionTime":"2025-12-10T12:16:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 12:16:34 crc kubenswrapper[4689]: I1210 12:16:34.603963 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:16:34 crc kubenswrapper[4689]: I1210 12:16:34.604029 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:16:34 crc kubenswrapper[4689]: I1210 12:16:34.604043 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:16:34 crc kubenswrapper[4689]: I1210 12:16:34.604061 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:16:34 crc kubenswrapper[4689]: I1210 12:16:34.604076 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:34Z","lastTransitionTime":"2025-12-10T12:16:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Dec 10 12:16:34 crc kubenswrapper[4689]: I1210 12:16:34.706798 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 10 12:16:34 crc kubenswrapper[4689]: I1210 12:16:34.706832 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 10 12:16:34 crc kubenswrapper[4689]: I1210 12:16:34.706840 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 10 12:16:34 crc kubenswrapper[4689]: I1210 12:16:34.706852 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 10 12:16:34 crc kubenswrapper[4689]: I1210 12:16:34.706861 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:34Z","lastTransitionTime":"2025-12-10T12:16:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 10 12:16:34 crc kubenswrapper[4689]: I1210 12:16:34.809241 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 10 12:16:34 crc kubenswrapper[4689]: I1210 12:16:34.809585 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 10 12:16:34 crc kubenswrapper[4689]: I1210 12:16:34.809598 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 10 12:16:34 crc kubenswrapper[4689]: I1210 12:16:34.809616 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 10 12:16:34 crc kubenswrapper[4689]: I1210 12:16:34.809630 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:34Z","lastTransitionTime":"2025-12-10T12:16:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 10 12:16:34 crc kubenswrapper[4689]: I1210 12:16:34.911354 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 10 12:16:34 crc kubenswrapper[4689]: I1210 12:16:34.911427 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 10 12:16:34 crc kubenswrapper[4689]: I1210 12:16:34.911444 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 10 12:16:34 crc kubenswrapper[4689]: I1210 12:16:34.911466 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 10 12:16:34 crc kubenswrapper[4689]: I1210 12:16:34.911485 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:34Z","lastTransitionTime":"2025-12-10T12:16:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 10 12:16:35 crc kubenswrapper[4689]: I1210 12:16:35.014460 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 10 12:16:35 crc kubenswrapper[4689]: I1210 12:16:35.014528 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 10 12:16:35 crc kubenswrapper[4689]: I1210 12:16:35.014540 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 10 12:16:35 crc kubenswrapper[4689]: I1210 12:16:35.014556 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 10 12:16:35 crc kubenswrapper[4689]: I1210 12:16:35.014569 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:35Z","lastTransitionTime":"2025-12-10T12:16:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 10 12:16:35 crc kubenswrapper[4689]: I1210 12:16:35.117106 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 10 12:16:35 crc kubenswrapper[4689]: I1210 12:16:35.117142 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 10 12:16:35 crc kubenswrapper[4689]: I1210 12:16:35.117152 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 10 12:16:35 crc kubenswrapper[4689]: I1210 12:16:35.117165 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 10 12:16:35 crc kubenswrapper[4689]: I1210 12:16:35.117175 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:35Z","lastTransitionTime":"2025-12-10T12:16:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 10 12:16:35 crc kubenswrapper[4689]: I1210 12:16:35.220041 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 10 12:16:35 crc kubenswrapper[4689]: I1210 12:16:35.220119 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 10 12:16:35 crc kubenswrapper[4689]: I1210 12:16:35.220131 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 10 12:16:35 crc kubenswrapper[4689]: I1210 12:16:35.220170 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 10 12:16:35 crc kubenswrapper[4689]: I1210 12:16:35.220184 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:35Z","lastTransitionTime":"2025-12-10T12:16:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 10 12:16:35 crc kubenswrapper[4689]: I1210 12:16:35.322612 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 10 12:16:35 crc kubenswrapper[4689]: I1210 12:16:35.322680 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 10 12:16:35 crc kubenswrapper[4689]: I1210 12:16:35.322696 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 10 12:16:35 crc kubenswrapper[4689]: I1210 12:16:35.322722 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 10 12:16:35 crc kubenswrapper[4689]: I1210 12:16:35.322739 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:35Z","lastTransitionTime":"2025-12-10T12:16:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 10 12:16:35 crc kubenswrapper[4689]: I1210 12:16:35.425492 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 10 12:16:35 crc kubenswrapper[4689]: I1210 12:16:35.425559 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 10 12:16:35 crc kubenswrapper[4689]: I1210 12:16:35.425581 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 10 12:16:35 crc kubenswrapper[4689]: I1210 12:16:35.425608 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 10 12:16:35 crc kubenswrapper[4689]: I1210 12:16:35.425630 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:35Z","lastTransitionTime":"2025-12-10T12:16:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 10 12:16:35 crc kubenswrapper[4689]: I1210 12:16:35.497550 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2h8hs"
Dec 10 12:16:35 crc kubenswrapper[4689]: E1210 12:16:35.497744 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2h8hs" podUID="3a7c472c-c2dd-4a9d-aefb-48ac5cc196f8"
Dec 10 12:16:35 crc kubenswrapper[4689]: I1210 12:16:35.497583 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Dec 10 12:16:35 crc kubenswrapper[4689]: I1210 12:16:35.497559 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
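Every occurrence of the error names the same root cause: /etc/kubernetes/cni/net.d/ contains no CNI configuration. On OpenShift this directory is owned by the cluster network operator (Multus writes its configuration there once the default network is up), so the usual remedy is waiting for or repairing that operator, not hand-writing a file. Purely as an illustration of the file format the kubelet is polling for, a sketch of a minimal .conflist using the reference bridge plugin; the file name, network name, and subnet are invented placeholders and should not be created on a real cluster:

    # Illustration only: the shape of a CNI config the kubelet would accept.
    # Do not create this by hand on OpenShift; the network operator owns this directory.
    cat <<'EOF' > /etc/kubernetes/cni/net.d/99-example.conflist
    {
      "cniVersion": "0.4.0",
      "name": "example-net",
      "plugins": [
        { "type": "bridge", "bridge": "cni0", "ipam": { "type": "host-local", "subnet": "10.88.0.0/16" } }
      ]
    }
    EOF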
Dec 10 12:16:35 crc kubenswrapper[4689]: E1210 12:16:35.497859 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Dec 10 12:16:35 crc kubenswrapper[4689]: E1210 12:16:35.497936 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Dec 10 12:16:35 crc kubenswrapper[4689]: I1210 12:16:35.528863 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 10 12:16:35 crc kubenswrapper[4689]: I1210 12:16:35.528894 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 10 12:16:35 crc kubenswrapper[4689]: I1210 12:16:35.528905 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 10 12:16:35 crc kubenswrapper[4689]: I1210 12:16:35.528922 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 10 12:16:35 crc kubenswrapper[4689]: I1210 12:16:35.528934 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:35Z","lastTransitionTime":"2025-12-10T12:16:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 10 12:16:35 crc kubenswrapper[4689]: I1210 12:16:35.630995 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 10 12:16:35 crc kubenswrapper[4689]: I1210 12:16:35.631051 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 10 12:16:35 crc kubenswrapper[4689]: I1210 12:16:35.631061 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 10 12:16:35 crc kubenswrapper[4689]: I1210 12:16:35.631098 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 10 12:16:35 crc kubenswrapper[4689]: I1210 12:16:35.631112 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:35Z","lastTransitionTime":"2025-12-10T12:16:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 10 12:16:35 crc kubenswrapper[4689]: I1210 12:16:35.734427 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 10 12:16:35 crc kubenswrapper[4689]: I1210 12:16:35.734471 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 10 12:16:35 crc kubenswrapper[4689]: I1210 12:16:35.734484 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 10 12:16:35 crc kubenswrapper[4689]: I1210 12:16:35.734503 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 10 12:16:35 crc kubenswrapper[4689]: I1210 12:16:35.734520 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:35Z","lastTransitionTime":"2025-12-10T12:16:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 10 12:16:35 crc kubenswrapper[4689]: I1210 12:16:35.837365 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 10 12:16:35 crc kubenswrapper[4689]: I1210 12:16:35.837412 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 10 12:16:35 crc kubenswrapper[4689]: I1210 12:16:35.837424 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 10 12:16:35 crc kubenswrapper[4689]: I1210 12:16:35.837442 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 10 12:16:35 crc kubenswrapper[4689]: I1210 12:16:35.837454 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:35Z","lastTransitionTime":"2025-12-10T12:16:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 10 12:16:35 crc kubenswrapper[4689]: I1210 12:16:35.939783 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 10 12:16:35 crc kubenswrapper[4689]: I1210 12:16:35.939841 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 10 12:16:35 crc kubenswrapper[4689]: I1210 12:16:35.939853 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 10 12:16:35 crc kubenswrapper[4689]: I1210 12:16:35.939872 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 10 12:16:35 crc kubenswrapper[4689]: I1210 12:16:35.939884 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:35Z","lastTransitionTime":"2025-12-10T12:16:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 10 12:16:36 crc kubenswrapper[4689]: I1210 12:16:36.042520 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 10 12:16:36 crc kubenswrapper[4689]: I1210 12:16:36.042565 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 10 12:16:36 crc kubenswrapper[4689]: I1210 12:16:36.042578 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 10 12:16:36 crc kubenswrapper[4689]: I1210 12:16:36.042594 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 10 12:16:36 crc kubenswrapper[4689]: I1210 12:16:36.042608 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:36Z","lastTransitionTime":"2025-12-10T12:16:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 10 12:16:36 crc kubenswrapper[4689]: I1210 12:16:36.144670 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 10 12:16:36 crc kubenswrapper[4689]: I1210 12:16:36.144712 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 10 12:16:36 crc kubenswrapper[4689]: I1210 12:16:36.144723 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 10 12:16:36 crc kubenswrapper[4689]: I1210 12:16:36.144739 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 10 12:16:36 crc kubenswrapper[4689]: I1210 12:16:36.144750 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:36Z","lastTransitionTime":"2025-12-10T12:16:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 10 12:16:36 crc kubenswrapper[4689]: I1210 12:16:36.247210 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 10 12:16:36 crc kubenswrapper[4689]: I1210 12:16:36.247253 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 10 12:16:36 crc kubenswrapper[4689]: I1210 12:16:36.247264 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 10 12:16:36 crc kubenswrapper[4689]: I1210 12:16:36.247284 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 10 12:16:36 crc kubenswrapper[4689]: I1210 12:16:36.247296 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:36Z","lastTransitionTime":"2025-12-10T12:16:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 10 12:16:36 crc kubenswrapper[4689]: I1210 12:16:36.350141 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 10 12:16:36 crc kubenswrapper[4689]: I1210 12:16:36.350183 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 10 12:16:36 crc kubenswrapper[4689]: I1210 12:16:36.350193 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 10 12:16:36 crc kubenswrapper[4689]: I1210 12:16:36.350207 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 10 12:16:36 crc kubenswrapper[4689]: I1210 12:16:36.350220 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:36Z","lastTransitionTime":"2025-12-10T12:16:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 10 12:16:36 crc kubenswrapper[4689]: I1210 12:16:36.452199 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 10 12:16:36 crc kubenswrapper[4689]: I1210 12:16:36.452240 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 10 12:16:36 crc kubenswrapper[4689]: I1210 12:16:36.452250 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 10 12:16:36 crc kubenswrapper[4689]: I1210 12:16:36.452263 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 10 12:16:36 crc kubenswrapper[4689]: I1210 12:16:36.452273 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:36Z","lastTransitionTime":"2025-12-10T12:16:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 10 12:16:36 crc kubenswrapper[4689]: I1210 12:16:36.498025 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Dec 10 12:16:36 crc kubenswrapper[4689]: E1210 12:16:36.498155 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
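Three workloads keep failing to sync in this window: network-check-source and network-check-target in openshift-network-diagnostics, network-metrics-daemon in openshift-multus, plus the networking-console-plugin pod in openshift-network-console. All the names and UIDs come from the entries above; a quick status check, assuming the oc CLI and read access to those namespaces:

    # The pods the sync loop keeps skipping (names copied from the log)
    oc get pod -n openshift-network-diagnostics network-check-source-55646444c4-trplf network-check-target-xd92c
    oc get pod -n openshift-multus network-metrics-daemon-2h8hs
    # Recent events for the namespace, newest last
    oc get events -n openshift-network-diagnostics --sort-by=.lastTimestamp | tail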
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 10 12:16:36 crc kubenswrapper[4689]: I1210 12:16:36.555004 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:16:36 crc kubenswrapper[4689]: I1210 12:16:36.555040 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:16:36 crc kubenswrapper[4689]: I1210 12:16:36.555052 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:16:36 crc kubenswrapper[4689]: I1210 12:16:36.555069 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:16:36 crc kubenswrapper[4689]: I1210 12:16:36.555083 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:36Z","lastTransitionTime":"2025-12-10T12:16:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 12:16:36 crc kubenswrapper[4689]: I1210 12:16:36.657415 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:16:36 crc kubenswrapper[4689]: I1210 12:16:36.657441 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:16:36 crc kubenswrapper[4689]: I1210 12:16:36.657451 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:16:36 crc kubenswrapper[4689]: I1210 12:16:36.657466 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:16:36 crc kubenswrapper[4689]: I1210 12:16:36.657476 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:36Z","lastTransitionTime":"2025-12-10T12:16:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 12:16:36 crc kubenswrapper[4689]: I1210 12:16:36.759066 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:16:36 crc kubenswrapper[4689]: I1210 12:16:36.759110 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:16:36 crc kubenswrapper[4689]: I1210 12:16:36.759126 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:16:36 crc kubenswrapper[4689]: I1210 12:16:36.759148 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:16:36 crc kubenswrapper[4689]: I1210 12:16:36.759165 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:36Z","lastTransitionTime":"2025-12-10T12:16:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 12:16:36 crc kubenswrapper[4689]: I1210 12:16:36.862045 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:16:36 crc kubenswrapper[4689]: I1210 12:16:36.862072 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:16:36 crc kubenswrapper[4689]: I1210 12:16:36.862080 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:16:36 crc kubenswrapper[4689]: I1210 12:16:36.862091 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:16:36 crc kubenswrapper[4689]: I1210 12:16:36.862100 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:36Z","lastTransitionTime":"2025-12-10T12:16:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 12:16:36 crc kubenswrapper[4689]: I1210 12:16:36.964769 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:16:36 crc kubenswrapper[4689]: I1210 12:16:36.964842 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:16:36 crc kubenswrapper[4689]: I1210 12:16:36.964864 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:16:36 crc kubenswrapper[4689]: I1210 12:16:36.964891 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:16:36 crc kubenswrapper[4689]: I1210 12:16:36.964914 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:36Z","lastTransitionTime":"2025-12-10T12:16:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 12:16:37 crc kubenswrapper[4689]: I1210 12:16:37.067308 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:16:37 crc kubenswrapper[4689]: I1210 12:16:37.067345 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:16:37 crc kubenswrapper[4689]: I1210 12:16:37.067356 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:16:37 crc kubenswrapper[4689]: I1210 12:16:37.067373 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:16:37 crc kubenswrapper[4689]: I1210 12:16:37.067385 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:37Z","lastTransitionTime":"2025-12-10T12:16:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 12:16:37 crc kubenswrapper[4689]: I1210 12:16:37.169399 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:16:37 crc kubenswrapper[4689]: I1210 12:16:37.169442 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:16:37 crc kubenswrapper[4689]: I1210 12:16:37.169456 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:16:37 crc kubenswrapper[4689]: I1210 12:16:37.169471 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:16:37 crc kubenswrapper[4689]: I1210 12:16:37.169481 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:37Z","lastTransitionTime":"2025-12-10T12:16:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 12:16:37 crc kubenswrapper[4689]: I1210 12:16:37.270590 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:16:37 crc kubenswrapper[4689]: I1210 12:16:37.270626 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:16:37 crc kubenswrapper[4689]: I1210 12:16:37.270643 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:16:37 crc kubenswrapper[4689]: I1210 12:16:37.270658 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:16:37 crc kubenswrapper[4689]: I1210 12:16:37.270668 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:37Z","lastTransitionTime":"2025-12-10T12:16:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 12:16:37 crc kubenswrapper[4689]: I1210 12:16:37.372851 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:16:37 crc kubenswrapper[4689]: I1210 12:16:37.372883 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:16:37 crc kubenswrapper[4689]: I1210 12:16:37.372894 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:16:37 crc kubenswrapper[4689]: I1210 12:16:37.372910 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:16:37 crc kubenswrapper[4689]: I1210 12:16:37.372921 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:37Z","lastTransitionTime":"2025-12-10T12:16:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 12:16:37 crc kubenswrapper[4689]: I1210 12:16:37.475728 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:16:37 crc kubenswrapper[4689]: I1210 12:16:37.475766 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:16:37 crc kubenswrapper[4689]: I1210 12:16:37.475777 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:16:37 crc kubenswrapper[4689]: I1210 12:16:37.475793 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:16:37 crc kubenswrapper[4689]: I1210 12:16:37.475803 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:37Z","lastTransitionTime":"2025-12-10T12:16:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 12:16:37 crc kubenswrapper[4689]: I1210 12:16:37.497794 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 10 12:16:37 crc kubenswrapper[4689]: I1210 12:16:37.497827 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2h8hs" Dec 10 12:16:37 crc kubenswrapper[4689]: E1210 12:16:37.497896 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 10 12:16:37 crc kubenswrapper[4689]: I1210 12:16:37.498005 4689 util.go:30] "No sandbox for pod can be found. 
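"No sandbox for pod can be found. Need to start a new one" (util.go:30) means the container runtime holds no pod sandbox for these UIDs yet; creating the sandbox is exactly the step that needs a CNI network, so each attempt ends in the pod_workers.go:1301 error. On the node itself this can be confirmed with crictl, which ships on RHCOS; the --name filter below matches against the pod name:

    # On the node: confirm no sandbox exists yet for a stuck pod
    sudo crictl pods --name network-check-target-xd92c
    sudo crictl pods --name network-metrics-daemon-2h8hs
    # An empty table is consistent with the util.go:30 messages above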
Dec 10 12:16:37 crc kubenswrapper[4689]: E1210 12:16:37.498109 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2h8hs" podUID="3a7c472c-c2dd-4a9d-aefb-48ac5cc196f8"
Dec 10 12:16:37 crc kubenswrapper[4689]: E1210 12:16:37.498246 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Dec 10 12:16:37 crc kubenswrapper[4689]: I1210 12:16:37.577873 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 10 12:16:37 crc kubenswrapper[4689]: I1210 12:16:37.577942 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 10 12:16:37 crc kubenswrapper[4689]: I1210 12:16:37.577951 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 10 12:16:37 crc kubenswrapper[4689]: I1210 12:16:37.577967 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 10 12:16:37 crc kubenswrapper[4689]: I1210 12:16:37.577994 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:37Z","lastTransitionTime":"2025-12-10T12:16:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 10 12:16:37 crc kubenswrapper[4689]: I1210 12:16:37.680730 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 10 12:16:37 crc kubenswrapper[4689]: I1210 12:16:37.680798 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 10 12:16:37 crc kubenswrapper[4689]: I1210 12:16:37.680815 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 10 12:16:37 crc kubenswrapper[4689]: I1210 12:16:37.680839 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 10 12:16:37 crc kubenswrapper[4689]: I1210 12:16:37.680856 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:37Z","lastTransitionTime":"2025-12-10T12:16:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 10 12:16:37 crc kubenswrapper[4689]: I1210 12:16:37.782894 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 10 12:16:37 crc kubenswrapper[4689]: I1210 12:16:37.782918 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 10 12:16:37 crc kubenswrapper[4689]: I1210 12:16:37.782926 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 10 12:16:37 crc kubenswrapper[4689]: I1210 12:16:37.782946 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 10 12:16:37 crc kubenswrapper[4689]: I1210 12:16:37.782955 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:37Z","lastTransitionTime":"2025-12-10T12:16:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 10 12:16:37 crc kubenswrapper[4689]: I1210 12:16:37.885398 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 10 12:16:37 crc kubenswrapper[4689]: I1210 12:16:37.885440 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 10 12:16:37 crc kubenswrapper[4689]: I1210 12:16:37.885452 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 10 12:16:37 crc kubenswrapper[4689]: I1210 12:16:37.885470 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 10 12:16:37 crc kubenswrapper[4689]: I1210 12:16:37.885482 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:37Z","lastTransitionTime":"2025-12-10T12:16:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 10 12:16:37 crc kubenswrapper[4689]: I1210 12:16:37.988066 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 10 12:16:37 crc kubenswrapper[4689]: I1210 12:16:37.988137 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 10 12:16:37 crc kubenswrapper[4689]: I1210 12:16:37.988159 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 10 12:16:37 crc kubenswrapper[4689]: I1210 12:16:37.988191 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 10 12:16:37 crc kubenswrapper[4689]: I1210 12:16:37.988214 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:37Z","lastTransitionTime":"2025-12-10T12:16:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 10 12:16:38 crc kubenswrapper[4689]: I1210 12:16:38.090861 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 10 12:16:38 crc kubenswrapper[4689]: I1210 12:16:38.090898 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 10 12:16:38 crc kubenswrapper[4689]: I1210 12:16:38.090908 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 10 12:16:38 crc kubenswrapper[4689]: I1210 12:16:38.090925 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 10 12:16:38 crc kubenswrapper[4689]: I1210 12:16:38.090937 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:38Z","lastTransitionTime":"2025-12-10T12:16:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 10 12:16:38 crc kubenswrapper[4689]: I1210 12:16:38.193523 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 10 12:16:38 crc kubenswrapper[4689]: I1210 12:16:38.193565 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 10 12:16:38 crc kubenswrapper[4689]: I1210 12:16:38.193587 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 10 12:16:38 crc kubenswrapper[4689]: I1210 12:16:38.193611 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 10 12:16:38 crc kubenswrapper[4689]: I1210 12:16:38.193625 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:38Z","lastTransitionTime":"2025-12-10T12:16:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 10 12:16:38 crc kubenswrapper[4689]: I1210 12:16:38.296215 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 10 12:16:38 crc kubenswrapper[4689]: I1210 12:16:38.296253 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 10 12:16:38 crc kubenswrapper[4689]: I1210 12:16:38.296262 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 10 12:16:38 crc kubenswrapper[4689]: I1210 12:16:38.296277 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 10 12:16:38 crc kubenswrapper[4689]: I1210 12:16:38.296288 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:38Z","lastTransitionTime":"2025-12-10T12:16:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 10 12:16:38 crc kubenswrapper[4689]: I1210 12:16:38.398443 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 10 12:16:38 crc kubenswrapper[4689]: I1210 12:16:38.398488 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 10 12:16:38 crc kubenswrapper[4689]: I1210 12:16:38.398501 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 10 12:16:38 crc kubenswrapper[4689]: I1210 12:16:38.398518 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 10 12:16:38 crc kubenswrapper[4689]: I1210 12:16:38.398531 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:38Z","lastTransitionTime":"2025-12-10T12:16:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 10 12:16:38 crc kubenswrapper[4689]: I1210 12:16:38.497309 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Dec 10 12:16:38 crc kubenswrapper[4689]: E1210 12:16:38.497439 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Dec 10 12:16:38 crc kubenswrapper[4689]: I1210 12:16:38.500587 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 10 12:16:38 crc kubenswrapper[4689]: I1210 12:16:38.500620 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 10 12:16:38 crc kubenswrapper[4689]: I1210 12:16:38.500630 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 10 12:16:38 crc kubenswrapper[4689]: I1210 12:16:38.500672 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 10 12:16:38 crc kubenswrapper[4689]: I1210 12:16:38.500682 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:38Z","lastTransitionTime":"2025-12-10T12:16:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
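The cadence is steady throughout: the node-status block lands roughly every 100 ms, and each stuck pod is retried about once per second (the .497-.500 timestamps just after each second mark). Two ways to watch for the flip to NetworkReady=true, assuming the kubelet runs as the kubelet systemd unit as on RHCOS:

    # Follow the kubelet until the network condition changes
    journalctl -u kubelet -f | grep -E 'NetworkReady|Node became'
    # Or watch the node condition from a client
    oc get node crc -w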
Has your network provider started?"}
Dec 10 12:16:38 crc kubenswrapper[4689]: I1210 12:16:38.603205 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 10 12:16:38 crc kubenswrapper[4689]: I1210 12:16:38.603270 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 10 12:16:38 crc kubenswrapper[4689]: I1210 12:16:38.603286 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 10 12:16:38 crc kubenswrapper[4689]: I1210 12:16:38.603307 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 10 12:16:38 crc kubenswrapper[4689]: I1210 12:16:38.603323 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:38Z","lastTransitionTime":"2025-12-10T12:16:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 10 12:16:38 crc kubenswrapper[4689]: I1210 12:16:38.706167 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 10 12:16:38 crc kubenswrapper[4689]: I1210 12:16:38.706210 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 10 12:16:38 crc kubenswrapper[4689]: I1210 12:16:38.706229 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 10 12:16:38 crc kubenswrapper[4689]: I1210 12:16:38.706248 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 10 12:16:38 crc kubenswrapper[4689]: I1210 12:16:38.706260 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:38Z","lastTransitionTime":"2025-12-10T12:16:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 10 12:16:38 crc kubenswrapper[4689]: I1210 12:16:38.809006 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 10 12:16:38 crc kubenswrapper[4689]: I1210 12:16:38.809046 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 10 12:16:38 crc kubenswrapper[4689]: I1210 12:16:38.809054 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 10 12:16:38 crc kubenswrapper[4689]: I1210 12:16:38.809073 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 10 12:16:38 crc kubenswrapper[4689]: I1210 12:16:38.809084 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:38Z","lastTransitionTime":"2025-12-10T12:16:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 10 12:16:38 crc kubenswrapper[4689]: I1210 12:16:38.911686 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 10 12:16:38 crc kubenswrapper[4689]: I1210 12:16:38.911731 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 10 12:16:38 crc kubenswrapper[4689]: I1210 12:16:38.911743 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 10 12:16:38 crc kubenswrapper[4689]: I1210 12:16:38.911760 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 10 12:16:38 crc kubenswrapper[4689]: I1210 12:16:38.911771 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:38Z","lastTransitionTime":"2025-12-10T12:16:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 10 12:16:39 crc kubenswrapper[4689]: I1210 12:16:39.014101 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 10 12:16:39 crc kubenswrapper[4689]: I1210 12:16:39.014157 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 10 12:16:39 crc kubenswrapper[4689]: I1210 12:16:39.014172 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 10 12:16:39 crc kubenswrapper[4689]: I1210 12:16:39.014191 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 10 12:16:39 crc kubenswrapper[4689]: I1210 12:16:39.014203 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:39Z","lastTransitionTime":"2025-12-10T12:16:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 10 12:16:39 crc kubenswrapper[4689]: I1210 12:16:39.117496 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 10 12:16:39 crc kubenswrapper[4689]: I1210 12:16:39.117543 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 10 12:16:39 crc kubenswrapper[4689]: I1210 12:16:39.117554 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 10 12:16:39 crc kubenswrapper[4689]: I1210 12:16:39.117572 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 10 12:16:39 crc kubenswrapper[4689]: I1210 12:16:39.117584 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:39Z","lastTransitionTime":"2025-12-10T12:16:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 10 12:16:39 crc kubenswrapper[4689]: I1210 12:16:39.220422 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 10 12:16:39 crc kubenswrapper[4689]: I1210 12:16:39.220478 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 10 12:16:39 crc kubenswrapper[4689]: I1210 12:16:39.220498 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 10 12:16:39 crc kubenswrapper[4689]: I1210 12:16:39.220518 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 10 12:16:39 crc kubenswrapper[4689]: I1210 12:16:39.220535 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:39Z","lastTransitionTime":"2025-12-10T12:16:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 10 12:16:39 crc kubenswrapper[4689]: I1210 12:16:39.322575 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 10 12:16:39 crc kubenswrapper[4689]: I1210 12:16:39.322621 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 10 12:16:39 crc kubenswrapper[4689]: I1210 12:16:39.322632 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 10 12:16:39 crc kubenswrapper[4689]: I1210 12:16:39.322649 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 10 12:16:39 crc kubenswrapper[4689]: I1210 12:16:39.322660 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:39Z","lastTransitionTime":"2025-12-10T12:16:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 10 12:16:39 crc kubenswrapper[4689]: I1210 12:16:39.425300 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 10 12:16:39 crc kubenswrapper[4689]: I1210 12:16:39.425337 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 10 12:16:39 crc kubenswrapper[4689]: I1210 12:16:39.425348 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 10 12:16:39 crc kubenswrapper[4689]: I1210 12:16:39.425363 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 10 12:16:39 crc kubenswrapper[4689]: I1210 12:16:39.425374 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:39Z","lastTransitionTime":"2025-12-10T12:16:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 10 12:16:39 crc kubenswrapper[4689]: I1210 12:16:39.497872 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Dec 10 12:16:39 crc kubenswrapper[4689]: I1210 12:16:39.497945 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Dec 10 12:16:39 crc kubenswrapper[4689]: E1210 12:16:39.497990 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Dec 10 12:16:39 crc kubenswrapper[4689]: E1210 12:16:39.498153 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Dec 10 12:16:39 crc kubenswrapper[4689]: I1210 12:16:39.498215 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2h8hs"
Dec 10 12:16:39 crc kubenswrapper[4689]: E1210 12:16:39.498361 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2h8hs" podUID="3a7c472c-c2dd-4a9d-aefb-48ac5cc196f8"
Dec 10 12:16:39 crc kubenswrapper[4689]: I1210 12:16:39.527263 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 10 12:16:39 crc kubenswrapper[4689]: I1210 12:16:39.527316 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 10 12:16:39 crc kubenswrapper[4689]: I1210 12:16:39.527333 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 10 12:16:39 crc kubenswrapper[4689]: I1210 12:16:39.527355 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 10 12:16:39 crc kubenswrapper[4689]: I1210 12:16:39.527371 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:39Z","lastTransitionTime":"2025-12-10T12:16:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 10 12:16:39 crc kubenswrapper[4689]: I1210 12:16:39.630367 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 10 12:16:39 crc kubenswrapper[4689]: I1210 12:16:39.630405 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 10 12:16:39 crc kubenswrapper[4689]: I1210 12:16:39.630415 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 10 12:16:39 crc kubenswrapper[4689]: I1210 12:16:39.630429 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 10 12:16:39 crc kubenswrapper[4689]: I1210 12:16:39.630439 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:39Z","lastTransitionTime":"2025-12-10T12:16:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 10 12:16:39 crc kubenswrapper[4689]: I1210 12:16:39.732695 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 10 12:16:39 crc kubenswrapper[4689]: I1210 12:16:39.732746 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 10 12:16:39 crc kubenswrapper[4689]: I1210 12:16:39.732758 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 10 12:16:39 crc kubenswrapper[4689]: I1210 12:16:39.732775 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 10 12:16:39 crc kubenswrapper[4689]: I1210 12:16:39.732787 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:39Z","lastTransitionTime":"2025-12-10T12:16:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 10 12:16:39 crc kubenswrapper[4689]: I1210 12:16:39.834823 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 10 12:16:39 crc kubenswrapper[4689]: I1210 12:16:39.834895 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 10 12:16:39 crc kubenswrapper[4689]: I1210 12:16:39.834905 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 10 12:16:39 crc kubenswrapper[4689]: I1210 12:16:39.834922 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 10 12:16:39 crc kubenswrapper[4689]: I1210 12:16:39.834933 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:39Z","lastTransitionTime":"2025-12-10T12:16:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 10 12:16:39 crc kubenswrapper[4689]: I1210 12:16:39.936773 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 10 12:16:39 crc kubenswrapper[4689]: I1210 12:16:39.936819 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 10 12:16:39 crc kubenswrapper[4689]: I1210 12:16:39.936828 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 10 12:16:39 crc kubenswrapper[4689]: I1210 12:16:39.936844 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 10 12:16:39 crc kubenswrapper[4689]: I1210 12:16:39.936854 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:39Z","lastTransitionTime":"2025-12-10T12:16:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 10 12:16:40 crc kubenswrapper[4689]: I1210 12:16:40.039497 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 10 12:16:40 crc kubenswrapper[4689]: I1210 12:16:40.039530 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 10 12:16:40 crc kubenswrapper[4689]: I1210 12:16:40.039540 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 10 12:16:40 crc kubenswrapper[4689]: I1210 12:16:40.039573 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 10 12:16:40 crc kubenswrapper[4689]: I1210 12:16:40.039585 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:40Z","lastTransitionTime":"2025-12-10T12:16:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 10 12:16:40 crc kubenswrapper[4689]: I1210 12:16:40.141844 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 10 12:16:40 crc kubenswrapper[4689]: I1210 12:16:40.141870 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 10 12:16:40 crc kubenswrapper[4689]: I1210 12:16:40.141878 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 10 12:16:40 crc kubenswrapper[4689]: I1210 12:16:40.141891 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 10 12:16:40 crc kubenswrapper[4689]: I1210 12:16:40.141899 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:40Z","lastTransitionTime":"2025-12-10T12:16:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 10 12:16:40 crc kubenswrapper[4689]: I1210 12:16:40.244607 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 10 12:16:40 crc kubenswrapper[4689]: I1210 12:16:40.244668 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 10 12:16:40 crc kubenswrapper[4689]: I1210 12:16:40.244688 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 10 12:16:40 crc kubenswrapper[4689]: I1210 12:16:40.244715 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 10 12:16:40 crc kubenswrapper[4689]: I1210 12:16:40.244734 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:40Z","lastTransitionTime":"2025-12-10T12:16:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 10 12:16:40 crc kubenswrapper[4689]: I1210 12:16:40.348508 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 10 12:16:40 crc kubenswrapper[4689]: I1210 12:16:40.348566 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 10 12:16:40 crc kubenswrapper[4689]: I1210 12:16:40.348584 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 10 12:16:40 crc kubenswrapper[4689]: I1210 12:16:40.348607 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 10 12:16:40 crc kubenswrapper[4689]: I1210 12:16:40.348625 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:40Z","lastTransitionTime":"2025-12-10T12:16:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 10 12:16:40 crc kubenswrapper[4689]: I1210 12:16:40.451133 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 10 12:16:40 crc kubenswrapper[4689]: I1210 12:16:40.451194 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 10 12:16:40 crc kubenswrapper[4689]: I1210 12:16:40.451205 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 10 12:16:40 crc kubenswrapper[4689]: I1210 12:16:40.451221 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 10 12:16:40 crc kubenswrapper[4689]: I1210 12:16:40.451232 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:40Z","lastTransitionTime":"2025-12-10T12:16:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 10 12:16:40 crc kubenswrapper[4689]: I1210 12:16:40.499966 4689 scope.go:117] "RemoveContainer" containerID="4f88a12855fe2301c439492cf5773c7088e33a7d6bcca5354ebcfd8d7a64bcf9"
Dec 10 12:16:40 crc kubenswrapper[4689]: E1210 12:16:40.500129 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-d5s24_openshift-ovn-kubernetes(b434fe8e-c4c2-4979-a9b6-8561523c2d9d)\"" pod="openshift-ovn-kubernetes/ovnkube-node-d5s24" podUID="b434fe8e-c4c2-4979-a9b6-8561523c2d9d"
Dec 10 12:16:40 crc kubenswrapper[4689]: I1210 12:16:40.500273 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Dec 10 12:16:40 crc kubenswrapper[4689]: E1210 12:16:40.500536 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Dec 10 12:16:40 crc kubenswrapper[4689]: I1210 12:16:40.553895 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 10 12:16:40 crc kubenswrapper[4689]: I1210 12:16:40.553951 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 10 12:16:40 crc kubenswrapper[4689]: I1210 12:16:40.553967 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 10 12:16:40 crc kubenswrapper[4689]: I1210 12:16:40.554022 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 10 12:16:40 crc kubenswrapper[4689]: I1210 12:16:40.554040 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:40Z","lastTransitionTime":"2025-12-10T12:16:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 10 12:16:40 crc kubenswrapper[4689]: I1210 12:16:40.656002 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 10 12:16:40 crc kubenswrapper[4689]: I1210 12:16:40.656052 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 10 12:16:40 crc kubenswrapper[4689]: I1210 12:16:40.656068 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 10 12:16:40 crc kubenswrapper[4689]: I1210 12:16:40.656094 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 10 12:16:40 crc kubenswrapper[4689]: I1210 12:16:40.656111 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:40Z","lastTransitionTime":"2025-12-10T12:16:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 10 12:16:40 crc kubenswrapper[4689]: I1210 12:16:40.757829 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 10 12:16:40 crc kubenswrapper[4689]: I1210 12:16:40.757867 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 10 12:16:40 crc kubenswrapper[4689]: I1210 12:16:40.757878 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 10 12:16:40 crc kubenswrapper[4689]: I1210 12:16:40.757894 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 10 12:16:40 crc kubenswrapper[4689]: I1210 12:16:40.757906 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:40Z","lastTransitionTime":"2025-12-10T12:16:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 10 12:16:40 crc kubenswrapper[4689]: I1210 12:16:40.861229 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 10 12:16:40 crc kubenswrapper[4689]: I1210 12:16:40.861259 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 10 12:16:40 crc kubenswrapper[4689]: I1210 12:16:40.861267 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 10 12:16:40 crc kubenswrapper[4689]: I1210 12:16:40.861281 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 10 12:16:40 crc kubenswrapper[4689]: I1210 12:16:40.861291 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:40Z","lastTransitionTime":"2025-12-10T12:16:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 10 12:16:40 crc kubenswrapper[4689]: I1210 12:16:40.963156 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 10 12:16:40 crc kubenswrapper[4689]: I1210 12:16:40.963185 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 10 12:16:40 crc kubenswrapper[4689]: I1210 12:16:40.963193 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 10 12:16:40 crc kubenswrapper[4689]: I1210 12:16:40.963206 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 10 12:16:40 crc kubenswrapper[4689]: I1210 12:16:40.963215 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:40Z","lastTransitionTime":"2025-12-10T12:16:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 10 12:16:41 crc kubenswrapper[4689]: I1210 12:16:41.065848 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 10 12:16:41 crc kubenswrapper[4689]: I1210 12:16:41.065901 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 10 12:16:41 crc kubenswrapper[4689]: I1210 12:16:41.065917 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 10 12:16:41 crc kubenswrapper[4689]: I1210 12:16:41.065940 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 10 12:16:41 crc kubenswrapper[4689]: I1210 12:16:41.065956 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:41Z","lastTransitionTime":"2025-12-10T12:16:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 10 12:16:41 crc kubenswrapper[4689]: I1210 12:16:41.170849 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 10 12:16:41 crc kubenswrapper[4689]: I1210 12:16:41.170901 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 10 12:16:41 crc kubenswrapper[4689]: I1210 12:16:41.170912 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 10 12:16:41 crc kubenswrapper[4689]: I1210 12:16:41.170938 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 10 12:16:41 crc kubenswrapper[4689]: I1210 12:16:41.170953 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:41Z","lastTransitionTime":"2025-12-10T12:16:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 10 12:16:41 crc kubenswrapper[4689]: I1210 12:16:41.273644 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 10 12:16:41 crc kubenswrapper[4689]: I1210 12:16:41.273694 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 10 12:16:41 crc kubenswrapper[4689]: I1210 12:16:41.273704 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 10 12:16:41 crc kubenswrapper[4689]: I1210 12:16:41.273720 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 10 12:16:41 crc kubenswrapper[4689]: I1210 12:16:41.273733 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:41Z","lastTransitionTime":"2025-12-10T12:16:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 10 12:16:41 crc kubenswrapper[4689]: I1210 12:16:41.376122 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 10 12:16:41 crc kubenswrapper[4689]: I1210 12:16:41.376188 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 10 12:16:41 crc kubenswrapper[4689]: I1210 12:16:41.376207 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 10 12:16:41 crc kubenswrapper[4689]: I1210 12:16:41.376233 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 10 12:16:41 crc kubenswrapper[4689]: I1210 12:16:41.376253 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:41Z","lastTransitionTime":"2025-12-10T12:16:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 10 12:16:41 crc kubenswrapper[4689]: I1210 12:16:41.479345 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 10 12:16:41 crc kubenswrapper[4689]: I1210 12:16:41.479405 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 10 12:16:41 crc kubenswrapper[4689]: I1210 12:16:41.479423 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 10 12:16:41 crc kubenswrapper[4689]: I1210 12:16:41.479449 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 10 12:16:41 crc kubenswrapper[4689]: I1210 12:16:41.479466 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:41Z","lastTransitionTime":"2025-12-10T12:16:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 10 12:16:41 crc kubenswrapper[4689]: I1210 12:16:41.497697 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2h8hs"
Dec 10 12:16:41 crc kubenswrapper[4689]: E1210 12:16:41.497878 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2h8hs" podUID="3a7c472c-c2dd-4a9d-aefb-48ac5cc196f8"
Dec 10 12:16:41 crc kubenswrapper[4689]: I1210 12:16:41.497715 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Dec 10 12:16:41 crc kubenswrapper[4689]: I1210 12:16:41.497697 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Dec 10 12:16:41 crc kubenswrapper[4689]: E1210 12:16:41.498023 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Dec 10 12:16:41 crc kubenswrapper[4689]: E1210 12:16:41.498388 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Dec 10 12:16:41 crc kubenswrapper[4689]: I1210 12:16:41.580813 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 10 12:16:41 crc kubenswrapper[4689]: I1210 12:16:41.580843 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 10 12:16:41 crc kubenswrapper[4689]: I1210 12:16:41.580854 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 10 12:16:41 crc kubenswrapper[4689]: I1210 12:16:41.580867 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 10 12:16:41 crc kubenswrapper[4689]: I1210 12:16:41.580878 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:41Z","lastTransitionTime":"2025-12-10T12:16:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 10 12:16:41 crc kubenswrapper[4689]: I1210 12:16:41.683270 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 10 12:16:41 crc kubenswrapper[4689]: I1210 12:16:41.683346 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 10 12:16:41 crc kubenswrapper[4689]: I1210 12:16:41.683368 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 10 12:16:41 crc kubenswrapper[4689]: I1210 12:16:41.683392 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 10 12:16:41 crc kubenswrapper[4689]: I1210 12:16:41.683405 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:41Z","lastTransitionTime":"2025-12-10T12:16:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 10 12:16:41 crc kubenswrapper[4689]: I1210 12:16:41.785958 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 10 12:16:41 crc kubenswrapper[4689]: I1210 12:16:41.786011 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 10 12:16:41 crc kubenswrapper[4689]: I1210 12:16:41.786024 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 10 12:16:41 crc kubenswrapper[4689]: I1210 12:16:41.786041 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 10 12:16:41 crc kubenswrapper[4689]: I1210 12:16:41.786052 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:41Z","lastTransitionTime":"2025-12-10T12:16:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 10 12:16:41 crc kubenswrapper[4689]: I1210 12:16:41.888699 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 10 12:16:41 crc kubenswrapper[4689]: I1210 12:16:41.888738 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 10 12:16:41 crc kubenswrapper[4689]: I1210 12:16:41.888754 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 10 12:16:41 crc kubenswrapper[4689]: I1210 12:16:41.888775 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 10 12:16:41 crc kubenswrapper[4689]: I1210 12:16:41.888791 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:41Z","lastTransitionTime":"2025-12-10T12:16:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 10 12:16:41 crc kubenswrapper[4689]: I1210 12:16:41.990889 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 10 12:16:41 crc kubenswrapper[4689]: I1210 12:16:41.990935 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 10 12:16:41 crc kubenswrapper[4689]: I1210 12:16:41.990946 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 10 12:16:41 crc kubenswrapper[4689]: I1210 12:16:41.990962 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 10 12:16:41 crc kubenswrapper[4689]: I1210 12:16:41.990990 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:41Z","lastTransitionTime":"2025-12-10T12:16:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 10 12:16:42 crc kubenswrapper[4689]: I1210 12:16:42.094265 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 10 12:16:42 crc kubenswrapper[4689]: I1210 12:16:42.094301 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 10 12:16:42 crc kubenswrapper[4689]: I1210 12:16:42.094311 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 10 12:16:42 crc kubenswrapper[4689]: I1210 12:16:42.094328 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 10 12:16:42 crc kubenswrapper[4689]: I1210 12:16:42.094337 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:42Z","lastTransitionTime":"2025-12-10T12:16:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 10 12:16:42 crc kubenswrapper[4689]: I1210 12:16:42.195883 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 10 12:16:42 crc kubenswrapper[4689]: I1210 12:16:42.195911 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 10 12:16:42 crc kubenswrapper[4689]: I1210 12:16:42.195919 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 10 12:16:42 crc kubenswrapper[4689]: I1210 12:16:42.195932 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 10 12:16:42 crc kubenswrapper[4689]: I1210 12:16:42.195943 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:42Z","lastTransitionTime":"2025-12-10T12:16:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 10 12:16:42 crc kubenswrapper[4689]: I1210 12:16:42.298844 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 10 12:16:42 crc kubenswrapper[4689]: I1210 12:16:42.298905 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 10 12:16:42 crc kubenswrapper[4689]: I1210 12:16:42.298924 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 10 12:16:42 crc kubenswrapper[4689]: I1210 12:16:42.298947 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 10 12:16:42 crc kubenswrapper[4689]: I1210 12:16:42.298964 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:42Z","lastTransitionTime":"2025-12-10T12:16:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 10 12:16:42 crc kubenswrapper[4689]: I1210 12:16:42.401213 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 10 12:16:42 crc kubenswrapper[4689]: I1210 12:16:42.401267 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 10 12:16:42 crc kubenswrapper[4689]: I1210 12:16:42.401283 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 10 12:16:42 crc kubenswrapper[4689]: I1210 12:16:42.401310 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 10 12:16:42 crc kubenswrapper[4689]: I1210 12:16:42.401329 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:42Z","lastTransitionTime":"2025-12-10T12:16:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 10 12:16:42 crc kubenswrapper[4689]: I1210 12:16:42.498141 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Dec 10 12:16:42 crc kubenswrapper[4689]: E1210 12:16:42.498275 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 10 12:16:42 crc kubenswrapper[4689]: I1210 12:16:42.503881 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:16:42 crc kubenswrapper[4689]: I1210 12:16:42.503942 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:16:42 crc kubenswrapper[4689]: I1210 12:16:42.503964 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:16:42 crc kubenswrapper[4689]: I1210 12:16:42.504025 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:16:42 crc kubenswrapper[4689]: I1210 12:16:42.504048 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:42Z","lastTransitionTime":"2025-12-10T12:16:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 12:16:42 crc kubenswrapper[4689]: I1210 12:16:42.513958 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06f8726511d5ea564b457eeb6de5b75275d78ee946595d8fbe37e575f3a1b4ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://879d2e122b3f677f755d79b93cd0abed60360a7b58bd685fbc229157c3358bbf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"las
Dec 10 12:16:42 crc kubenswrapper[4689]: I1210 12:16:42.529680 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:57Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:42Z is after 2025-08-24T17:21:41Z"
Dec 10 12:16:42 crc kubenswrapper[4689]: I1210 12:16:42.542716 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-6ffqh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7908b49b-e3d2-4d30-95e0-467f5542d445\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34ba576e7b07f1a070450f2a750ed7f9e1ed7b3023f13340ffadbdbbc2f564bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:16:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8t2fq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5081b88bdb2e39135bca6cf70283aeb3f9541d28ab7336c5fa9d17f28496f7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:16:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8t2fq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T12:16:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-6ffqh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:42Z is after 2025-08-24T17:21:41Z"
Dec 10 12:16:42 crc kubenswrapper[4689]: I1210 12:16:42.556858 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81ec49dd709fb88fdd4a831be61a82471be37ad10b0dca1cb0e34468ef5e0e3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:16:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:42Z is after 2025-08-24T17:21:41Z"
2025-08-24T17:21:41Z" Dec 10 12:16:42 crc kubenswrapper[4689]: I1210 12:16:42.568372 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-77s7q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9f25cd73-d88f-4d52-93b6-483589dc4ac4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8ffa3e3c20704e59130d99f82bf7c73982dadfcd06b8b666265324aeb9115ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:16:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kwnnt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T12:16:01Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-77s7q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:42Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:42 crc kubenswrapper[4689]: I1210 12:16:42.579443 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:42Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:42 crc kubenswrapper[4689]: I1210 12:16:42.590934 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-r6wmt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3713b4f8-2ee3-4078-859a-dca17076f9a6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://41b7cefca05f1180d687337ba2a408d0db12aad394a7e9619e004758e5d79ca0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:16:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lmrn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T12:16:01Z\\\"}}\" for pod \"openshift-multus\"/\"multus-r6wmt\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:42Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:42 crc kubenswrapper[4689]: I1210 12:16:42.605301 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-dk9hq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4aff4b91-a4af-46bd-93ba-a7c1356fc498\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e400561fd4aa4adc6f0733a71b55aff1a6163a71bf34147aeeb41f95001d8222\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:16:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c2rw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T12:16:04Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-dk9hq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:42Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:42 crc kubenswrapper[4689]: I1210 12:16:42.606326 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:16:42 crc kubenswrapper[4689]: I1210 12:16:42.606372 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:16:42 crc kubenswrapper[4689]: I1210 12:16:42.606390 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:16:42 crc 
kubenswrapper[4689]: I1210 12:16:42.606412 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:16:42 crc kubenswrapper[4689]: I1210 12:16:42.606429 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:42Z","lastTransitionTime":"2025-12-10T12:16:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 12:16:42 crc kubenswrapper[4689]: I1210 12:16:42.615338 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-2h8hs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a7c472c-c2dd-4a9d-aefb-48ac5cc196f8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kcnt5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kcnt5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T12:16:15Z\\\"}}\" for pod 
\"openshift-multus\"/\"network-metrics-daemon-2h8hs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:42Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:42 crc kubenswrapper[4689]: I1210 12:16:42.626797 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f310ccc9-5093-44fe-8271-32ffed1e4596\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be1e74c974ed4454098265113e97e5d5d27b2ddb6ff5d89cb891c1686290f8f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://14c34a55854008d99289ec6ff7df4ab7b8c60658569094a88e6740ceb28714d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://66af8611123a29e82941af3a1facb45f06c358ffb277bd037af034ae97364d56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"read
y\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b2e1ebfdbacde06d36ebe61a04c2977a980def50e80715de5f8a9a292e661915\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b2e1ebfdbacde06d36ebe61a04c2977a980def50e80715de5f8a9a292e661915\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T12:15:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T12:15:33Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T12:15:32Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:42Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:42 crc kubenswrapper[4689]: I1210 12:16:42.640738 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef7248d1db79267842514a13623e9e08f8e412e4af33a70b4fceafe7a9e43392\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:42Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:42 crc kubenswrapper[4689]: I1210 12:16:42.654627 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:57Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:42Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:42 crc kubenswrapper[4689]: I1210 12:16:42.667905 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-db6zk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a41ebdcd-910f-4669-992d-296e1a92162b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://21eff13bf08e5631078b18a6e63eea07f7ae0881f4d340b92273d47e443fc71c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:16:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68btn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ffc5e2e1cb81aa530273fc5c44b4eb25b760dfffa203e312280e7a1c0150505\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\
":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:16:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68btn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T12:16:01Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-db6zk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:42Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:42 crc kubenswrapper[4689]: I1210 12:16:42.680743 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-k7kbl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bf732b59-88ab-4673-9d0c-e479b9138b30\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6885c72d7b303bd0ad949b6b9f582f3c1a503d3ac4d1871ce164351cc5231233\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:16:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6jq2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b32c1925876e067185b141b3e107d582b5ae3b422965cded935fc67935907244\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708
c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b32c1925876e067185b141b3e107d582b5ae3b422965cded935fc67935907244\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T12:16:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T12:16:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6jq2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://56edfde6c25c67ed13c859224563601ca60ff86122c0ccfd0eafd009f56c2c6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://56edfde6c25c67ed13c859224563601ca60ff86122c0ccfd0eafd009f56c2c6a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T12:16:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T12:16:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6jq2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6972bdb1edc7d0216e984f43dd93319f5fbe91dbbf251fddcab234566c12694\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6972bdb1edc7d0216e984f43dd93319f5fbe91dbbf251fddcab234566c12694\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T12:16:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T12:16:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/servi
ceaccount\\\",\\\"name\\\":\\\"kube-api-access-6jq2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://92ed0a4af61a7d20d96a8628e34b3b07cf93fb88bb8c67a2db58c140977d4b8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92ed0a4af61a7d20d96a8628e34b3b07cf93fb88bb8c67a2db58c140977d4b8f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T12:16:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T12:16:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6jq2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a00343d8419f3bd26ca5eebc37892fa34c893db822a472b2d16b79b9a886130\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3a00343d8419f3bd26ca5eebc37892fa34c893db822a472b2d16b79b9a886130\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T12:16:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T12:16:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6jq2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae1b0abfe7d118fce3906b0fb8609b89bb5a7d9a72e77c989248a37fbcaca589\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ae1b0abfe7d118fce3906b0fb8609b89bb5a7d9a72e77c989248a37fbcaca589\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T12:16:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T12:16:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPat
h\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6jq2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T12:16:01Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-k7kbl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:42Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:42 crc kubenswrapper[4689]: I1210 12:16:42.700269 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-d5s24" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b434fe8e-c4c2-4979-a9b6-8561523c2d9d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4dd21d5fefe31117dd9f13896400256a11840488886f8455dc177c0389e942a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:16:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgcnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://69f1e8960af0a51f61c14bd54dc3d3be7189911242699061b6a5e265bf23da21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:16:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgcnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6523d85b2b13f0025f328e66fc54d988a262d66575656c50d792bebc6a422301\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:16:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgcnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4faf578e1c2dcab798cd856befbd2dfc061f7e003ead2304d437041d5ee1dd07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:16:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgcnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eebe28cf16767ec011f1e571cdb5f6babf931f4444e92fccf2f1bb5aa5deeec9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:16:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgcnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://17d173f3388ef7471022163cbdf3e0a2454e42b1ed621b15512414504b2d9723\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:16:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgcnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f88a12855fe2301c439492cf5773c7088e33a7d
6bcca5354ebcfd8d7a64bcf9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f88a12855fe2301c439492cf5773c7088e33a7d6bcca5354ebcfd8d7a64bcf9\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-10T12:16:26Z\\\",\\\"message\\\":\\\"all/v1/apis/informers/externalversions/factory.go:140\\\\nI1210 12:16:26.477038 6326 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1210 12:16:26.477070 6326 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1210 12:16:26.477096 6326 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1210 12:16:26.477127 6326 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1210 12:16:26.477136 6326 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1210 12:16:26.477156 6326 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1210 12:16:26.477157 6326 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1210 12:16:26.477180 6326 handler.go:208] Removed *v1.Node event handler 2\\\\nI1210 12:16:26.477212 6326 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1210 12:16:26.477246 6326 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1210 12:16:26.477297 6326 handler.go:208] Removed *v1.Node event handler 7\\\\nI1210 12:16:26.477299 6326 factory.go:656] Stopping watch factory\\\\nI1210 12:16:26.477310 6326 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1210 12:16:26.477326 6326 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1210 12:16:26.477355 6326 handler.go:208] Removed *v1.Namespace ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-10T12:16:25Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-d5s24_openshift-ovn-kubernetes(b434fe8e-c4c2-4979-a9b6-8561523c2d9d)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgcnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e28b779b8a8693084ab0997decc53b3a8eba61bef24ac90ff251d915f086568c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:16:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgcnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5b9a4f871b33ab8d7dbb8148e8a118858ae0d529510a8aebda0a3d9db97c137\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b5b9a4f871b33ab8d7dbb8148e8a118858ae0d529510a8aebda0a3d9db97c137\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T12:16:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T12:16:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgcnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T12:16:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-d5s24\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:42Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:42 crc kubenswrapper[4689]: I1210 12:16:42.708769 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:16:42 crc kubenswrapper[4689]: I1210 12:16:42.708800 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:16:42 crc kubenswrapper[4689]: I1210 12:16:42.708808 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:16:42 crc kubenswrapper[4689]: I1210 12:16:42.708954 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:16:42 crc kubenswrapper[4689]: I1210 12:16:42.708965 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:42Z","lastTransitionTime":"2025-12-10T12:16:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 12:16:42 crc kubenswrapper[4689]: I1210 12:16:42.713940 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"948e2421-6bdf-45d9-b484-3d7cfcbeff5f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73fb5dc9c58a2a2027866853aecf404bcc2babdb9e5bbd3cf0efcb16f38ba351\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e167a00c3a4e40154ca58094cba41a896104fd81044158589ed4d7a06dba9e7d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://394f62bb6c4fa6d9b63f8404087ff96594e2e38240c6c7886ad71fbb7863681f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/ku
bernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d3818f3ff6d23a4b5f19b82c3ce7c6a9a0275f981966b78d0f1d5977acfc5da\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cc1b41d65298f8986fa477bbe4462be91f61b46be4420fac49fe9b1cff91d4d3\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-10T12:15:57Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1210 12:15:46.775770 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1210 12:15:46.777515 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-772356655/tls.crt::/tmp/serving-cert-772356655/tls.key\\\\\\\"\\\\nI1210 12:15:55.300683 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1210 12:15:55.311834 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1210 12:15:55.311870 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1210 12:15:55.311907 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1210 12:15:55.311937 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1210 12:15:55.334823 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1210 12:15:55.334877 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1210 12:15:55.334889 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1210 12:15:55.334913 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1210 12:15:55.334923 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1210 12:15:55.334932 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1210 12:15:55.334948 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1210 12:15:55.335346 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1210 12:15:55.337241 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-10T12:15:36Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://63c6b92fa349dafbaaa4e2700676f8c1c53627fc703dba25853c7457bf7eb439\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:34Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9e2628add435f4ba7d9b92d1f4e9cf619c37e8dc954d6fce13ef72581ae775e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9e2628add435f4ba7d9b92d1f4e9cf619c37e8dc954d6fce13ef72581ae775e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T12:15:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T12:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T12:15:32Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:42Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:42 crc kubenswrapper[4689]: I1210 12:16:42.725027 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"18bf8be2-3022-4c8b-a062-e12ddac113a8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://44524b486bd4c6b69eb1219605d0545eaa129605484dfbca87ca741ce71c2991\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://604acbb51bb890eb420955051c89aa5a8f2005ce63f1fe39c09c35dc3c8b4470\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d4addfb661bd4ba43817e6e5db29f30589839e5afe46a8f271e43c98920efdc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a68967b286ccc9c486e05b03201574bfa811ee68b0cbcbb2b0b1a379f5c1bab\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T12:15:32Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:42Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:42 crc kubenswrapper[4689]: I1210 12:16:42.811243 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:16:42 crc kubenswrapper[4689]: I1210 12:16:42.811292 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:16:42 crc kubenswrapper[4689]: I1210 12:16:42.811332 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:16:42 crc kubenswrapper[4689]: I1210 12:16:42.811352 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:16:42 crc kubenswrapper[4689]: I1210 12:16:42.811367 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:42Z","lastTransitionTime":"2025-12-10T12:16:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 12:16:42 crc kubenswrapper[4689]: I1210 12:16:42.868517 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:16:42 crc kubenswrapper[4689]: I1210 12:16:42.868581 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:16:42 crc kubenswrapper[4689]: I1210 12:16:42.868598 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:16:42 crc kubenswrapper[4689]: I1210 12:16:42.868622 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:16:42 crc kubenswrapper[4689]: I1210 12:16:42.868639 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:42Z","lastTransitionTime":"2025-12-10T12:16:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 12:16:42 crc kubenswrapper[4689]: E1210 12:16:42.885061 4689 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T12:16:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T12:16:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:42Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T12:16:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T12:16:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:42Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"488f3058-98a0-4e39-b016-529d0c992401\\\",\\\"systemUUID\\\":\\\"41a6821f-a04f-4a5d-a8e5-790b0745d6fd\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:42Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:42 crc kubenswrapper[4689]: I1210 12:16:42.889308 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:16:42 crc kubenswrapper[4689]: I1210 12:16:42.889351 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 10 12:16:42 crc kubenswrapper[4689]: I1210 12:16:42.889363 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:16:42 crc kubenswrapper[4689]: I1210 12:16:42.889381 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:16:42 crc kubenswrapper[4689]: I1210 12:16:42.889392 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:42Z","lastTransitionTime":"2025-12-10T12:16:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 12:16:42 crc kubenswrapper[4689]: E1210 12:16:42.906815 4689 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T12:16:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T12:16:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:42Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T12:16:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T12:16:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:42Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"488f3058-98a0-4e39-b016-529d0c992401\\\",\\\"systemUUID\\\":\\\"41a6821f-a04f-4a5d-a8e5-790b0745d6fd\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:42Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:42 crc kubenswrapper[4689]: I1210 12:16:42.911307 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:16:42 crc kubenswrapper[4689]: I1210 12:16:42.911617 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 10 12:16:42 crc kubenswrapper[4689]: I1210 12:16:42.911865 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:16:42 crc kubenswrapper[4689]: I1210 12:16:42.912015 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:16:42 crc kubenswrapper[4689]: I1210 12:16:42.912135 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:42Z","lastTransitionTime":"2025-12-10T12:16:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 12:16:42 crc kubenswrapper[4689]: E1210 12:16:42.932659 4689 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T12:16:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T12:16:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:42Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T12:16:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T12:16:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:42Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"488f3058-98a0-4e39-b016-529d0c992401\\\",\\\"systemUUID\\\":\\\"41a6821f-a04f-4a5d-a8e5-790b0745d6fd\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:42Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:42 crc kubenswrapper[4689]: I1210 12:16:42.936386 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:16:42 crc kubenswrapper[4689]: I1210 12:16:42.936436 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 10 12:16:42 crc kubenswrapper[4689]: I1210 12:16:42.936452 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:16:42 crc kubenswrapper[4689]: I1210 12:16:42.936476 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:16:42 crc kubenswrapper[4689]: I1210 12:16:42.936494 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:42Z","lastTransitionTime":"2025-12-10T12:16:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 12:16:42 crc kubenswrapper[4689]: E1210 12:16:42.955636 4689 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T12:16:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T12:16:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:42Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T12:16:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T12:16:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:42Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"488f3058-98a0-4e39-b016-529d0c992401\\\",\\\"systemUUID\\\":\\\"41a6821f-a04f-4a5d-a8e5-790b0745d6fd\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:42Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:42 crc kubenswrapper[4689]: I1210 12:16:42.961877 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:16:42 crc kubenswrapper[4689]: I1210 12:16:42.962125 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 10 12:16:42 crc kubenswrapper[4689]: I1210 12:16:42.962262 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:16:42 crc kubenswrapper[4689]: I1210 12:16:42.962425 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:16:42 crc kubenswrapper[4689]: I1210 12:16:42.962570 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:42Z","lastTransitionTime":"2025-12-10T12:16:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 12:16:42 crc kubenswrapper[4689]: E1210 12:16:42.981185 4689 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T12:16:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T12:16:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:42Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T12:16:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T12:16:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:42Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"488f3058-98a0-4e39-b016-529d0c992401\\\",\\\"systemUUID\\\":\\\"41a6821f-a04f-4a5d-a8e5-790b0745d6fd\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:42Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:42 crc kubenswrapper[4689]: E1210 12:16:42.981410 4689 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Dec 10 12:16:42 crc kubenswrapper[4689]: I1210 12:16:42.983485 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Dec 10 12:16:42 crc kubenswrapper[4689]: I1210 12:16:42.983538 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:16:42 crc kubenswrapper[4689]: I1210 12:16:42.983557 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:16:42 crc kubenswrapper[4689]: I1210 12:16:42.983581 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:16:42 crc kubenswrapper[4689]: I1210 12:16:42.983599 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:42Z","lastTransitionTime":"2025-12-10T12:16:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 12:16:43 crc kubenswrapper[4689]: I1210 12:16:43.087163 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:16:43 crc kubenswrapper[4689]: I1210 12:16:43.087509 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:16:43 crc kubenswrapper[4689]: I1210 12:16:43.087689 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:16:43 crc kubenswrapper[4689]: I1210 12:16:43.087877 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:16:43 crc kubenswrapper[4689]: I1210 12:16:43.088129 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:43Z","lastTransitionTime":"2025-12-10T12:16:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 12:16:43 crc kubenswrapper[4689]: I1210 12:16:43.191346 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:16:43 crc kubenswrapper[4689]: I1210 12:16:43.191403 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:16:43 crc kubenswrapper[4689]: I1210 12:16:43.191594 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:16:43 crc kubenswrapper[4689]: I1210 12:16:43.191631 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:16:43 crc kubenswrapper[4689]: I1210 12:16:43.191659 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:43Z","lastTransitionTime":"2025-12-10T12:16:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 12:16:43 crc kubenswrapper[4689]: I1210 12:16:43.294767 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:16:43 crc kubenswrapper[4689]: I1210 12:16:43.294807 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:16:43 crc kubenswrapper[4689]: I1210 12:16:43.294819 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:16:43 crc kubenswrapper[4689]: I1210 12:16:43.294845 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:16:43 crc kubenswrapper[4689]: I1210 12:16:43.294857 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:43Z","lastTransitionTime":"2025-12-10T12:16:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 12:16:43 crc kubenswrapper[4689]: I1210 12:16:43.396418 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:16:43 crc kubenswrapper[4689]: I1210 12:16:43.396454 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:16:43 crc kubenswrapper[4689]: I1210 12:16:43.396485 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:16:43 crc kubenswrapper[4689]: I1210 12:16:43.396501 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:16:43 crc kubenswrapper[4689]: I1210 12:16:43.396536 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:43Z","lastTransitionTime":"2025-12-10T12:16:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 12:16:43 crc kubenswrapper[4689]: I1210 12:16:43.497905 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 10 12:16:43 crc kubenswrapper[4689]: I1210 12:16:43.497938 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2h8hs" Dec 10 12:16:43 crc kubenswrapper[4689]: I1210 12:16:43.497938 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 10 12:16:43 crc kubenswrapper[4689]: E1210 12:16:43.498161 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 10 12:16:43 crc kubenswrapper[4689]: E1210 12:16:43.498306 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2h8hs" podUID="3a7c472c-c2dd-4a9d-aefb-48ac5cc196f8" Dec 10 12:16:43 crc kubenswrapper[4689]: E1210 12:16:43.498386 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 10 12:16:43 crc kubenswrapper[4689]: I1210 12:16:43.499701 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:16:43 crc kubenswrapper[4689]: I1210 12:16:43.499737 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:16:43 crc kubenswrapper[4689]: I1210 12:16:43.499749 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:16:43 crc kubenswrapper[4689]: I1210 12:16:43.499764 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:16:43 crc kubenswrapper[4689]: I1210 12:16:43.499775 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:43Z","lastTransitionTime":"2025-12-10T12:16:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 12:16:43 crc kubenswrapper[4689]: I1210 12:16:43.602287 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:16:43 crc kubenswrapper[4689]: I1210 12:16:43.602317 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:16:43 crc kubenswrapper[4689]: I1210 12:16:43.602326 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:16:43 crc kubenswrapper[4689]: I1210 12:16:43.602338 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:16:43 crc kubenswrapper[4689]: I1210 12:16:43.602347 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:43Z","lastTransitionTime":"2025-12-10T12:16:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 12:16:43 crc kubenswrapper[4689]: I1210 12:16:43.705021 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:16:43 crc kubenswrapper[4689]: I1210 12:16:43.705057 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:16:43 crc kubenswrapper[4689]: I1210 12:16:43.705068 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:16:43 crc kubenswrapper[4689]: I1210 12:16:43.705083 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:16:43 crc kubenswrapper[4689]: I1210 12:16:43.705096 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:43Z","lastTransitionTime":"2025-12-10T12:16:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 12:16:43 crc kubenswrapper[4689]: I1210 12:16:43.808010 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:16:43 crc kubenswrapper[4689]: I1210 12:16:43.808063 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:16:43 crc kubenswrapper[4689]: I1210 12:16:43.808071 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:16:43 crc kubenswrapper[4689]: I1210 12:16:43.808085 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:16:43 crc kubenswrapper[4689]: I1210 12:16:43.808097 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:43Z","lastTransitionTime":"2025-12-10T12:16:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 12:16:43 crc kubenswrapper[4689]: I1210 12:16:43.910593 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:16:43 crc kubenswrapper[4689]: I1210 12:16:43.910751 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:16:43 crc kubenswrapper[4689]: I1210 12:16:43.910860 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:16:43 crc kubenswrapper[4689]: I1210 12:16:43.910952 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:16:43 crc kubenswrapper[4689]: I1210 12:16:43.911056 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:43Z","lastTransitionTime":"2025-12-10T12:16:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 12:16:44 crc kubenswrapper[4689]: I1210 12:16:44.012683 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:16:44 crc kubenswrapper[4689]: I1210 12:16:44.012742 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:16:44 crc kubenswrapper[4689]: I1210 12:16:44.012754 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:16:44 crc kubenswrapper[4689]: I1210 12:16:44.012772 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:16:44 crc kubenswrapper[4689]: I1210 12:16:44.012785 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:44Z","lastTransitionTime":"2025-12-10T12:16:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 12:16:44 crc kubenswrapper[4689]: I1210 12:16:44.115345 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:16:44 crc kubenswrapper[4689]: I1210 12:16:44.115372 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:16:44 crc kubenswrapper[4689]: I1210 12:16:44.115383 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:16:44 crc kubenswrapper[4689]: I1210 12:16:44.115420 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:16:44 crc kubenswrapper[4689]: I1210 12:16:44.115431 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:44Z","lastTransitionTime":"2025-12-10T12:16:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 12:16:44 crc kubenswrapper[4689]: I1210 12:16:44.218432 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:16:44 crc kubenswrapper[4689]: I1210 12:16:44.218480 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:16:44 crc kubenswrapper[4689]: I1210 12:16:44.218490 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:16:44 crc kubenswrapper[4689]: I1210 12:16:44.218505 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:16:44 crc kubenswrapper[4689]: I1210 12:16:44.218516 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:44Z","lastTransitionTime":"2025-12-10T12:16:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 12:16:44 crc kubenswrapper[4689]: I1210 12:16:44.321176 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:16:44 crc kubenswrapper[4689]: I1210 12:16:44.321240 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:16:44 crc kubenswrapper[4689]: I1210 12:16:44.321258 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:16:44 crc kubenswrapper[4689]: I1210 12:16:44.321281 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:16:44 crc kubenswrapper[4689]: I1210 12:16:44.321299 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:44Z","lastTransitionTime":"2025-12-10T12:16:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 12:16:44 crc kubenswrapper[4689]: I1210 12:16:44.425019 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:16:44 crc kubenswrapper[4689]: I1210 12:16:44.425088 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:16:44 crc kubenswrapper[4689]: I1210 12:16:44.425105 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:16:44 crc kubenswrapper[4689]: I1210 12:16:44.425133 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:16:44 crc kubenswrapper[4689]: I1210 12:16:44.425151 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:44Z","lastTransitionTime":"2025-12-10T12:16:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 12:16:44 crc kubenswrapper[4689]: I1210 12:16:44.498332 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 10 12:16:44 crc kubenswrapper[4689]: E1210 12:16:44.498480 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 10 12:16:44 crc kubenswrapper[4689]: I1210 12:16:44.527298 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:16:44 crc kubenswrapper[4689]: I1210 12:16:44.527336 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:16:44 crc kubenswrapper[4689]: I1210 12:16:44.527347 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:16:44 crc kubenswrapper[4689]: I1210 12:16:44.527365 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:16:44 crc kubenswrapper[4689]: I1210 12:16:44.527377 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:44Z","lastTransitionTime":"2025-12-10T12:16:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 12:16:44 crc kubenswrapper[4689]: I1210 12:16:44.630857 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:16:44 crc kubenswrapper[4689]: I1210 12:16:44.630928 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:16:44 crc kubenswrapper[4689]: I1210 12:16:44.630945 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:16:44 crc kubenswrapper[4689]: I1210 12:16:44.630996 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:16:44 crc kubenswrapper[4689]: I1210 12:16:44.631014 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:44Z","lastTransitionTime":"2025-12-10T12:16:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 12:16:44 crc kubenswrapper[4689]: I1210 12:16:44.733232 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:16:44 crc kubenswrapper[4689]: I1210 12:16:44.733273 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:16:44 crc kubenswrapper[4689]: I1210 12:16:44.733283 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:16:44 crc kubenswrapper[4689]: I1210 12:16:44.733297 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:16:44 crc kubenswrapper[4689]: I1210 12:16:44.733309 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:44Z","lastTransitionTime":"2025-12-10T12:16:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 12:16:44 crc kubenswrapper[4689]: I1210 12:16:44.835571 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:16:44 crc kubenswrapper[4689]: I1210 12:16:44.835600 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:16:44 crc kubenswrapper[4689]: I1210 12:16:44.835611 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:16:44 crc kubenswrapper[4689]: I1210 12:16:44.835626 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:16:44 crc kubenswrapper[4689]: I1210 12:16:44.835637 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:44Z","lastTransitionTime":"2025-12-10T12:16:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 12:16:44 crc kubenswrapper[4689]: I1210 12:16:44.938922 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:16:44 crc kubenswrapper[4689]: I1210 12:16:44.939004 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:16:44 crc kubenswrapper[4689]: I1210 12:16:44.939020 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:16:44 crc kubenswrapper[4689]: I1210 12:16:44.939043 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:16:44 crc kubenswrapper[4689]: I1210 12:16:44.939059 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:44Z","lastTransitionTime":"2025-12-10T12:16:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 12:16:45 crc kubenswrapper[4689]: I1210 12:16:45.042341 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:16:45 crc kubenswrapper[4689]: I1210 12:16:45.042666 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:16:45 crc kubenswrapper[4689]: I1210 12:16:45.042735 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:16:45 crc kubenswrapper[4689]: I1210 12:16:45.042798 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:16:45 crc kubenswrapper[4689]: I1210 12:16:45.042858 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:45Z","lastTransitionTime":"2025-12-10T12:16:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 12:16:45 crc kubenswrapper[4689]: I1210 12:16:45.145751 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:16:45 crc kubenswrapper[4689]: I1210 12:16:45.146247 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:16:45 crc kubenswrapper[4689]: I1210 12:16:45.146371 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:16:45 crc kubenswrapper[4689]: I1210 12:16:45.146466 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:16:45 crc kubenswrapper[4689]: I1210 12:16:45.146570 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:45Z","lastTransitionTime":"2025-12-10T12:16:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 12:16:45 crc kubenswrapper[4689]: I1210 12:16:45.249351 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:16:45 crc kubenswrapper[4689]: I1210 12:16:45.249640 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:16:45 crc kubenswrapper[4689]: I1210 12:16:45.249709 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:16:45 crc kubenswrapper[4689]: I1210 12:16:45.249778 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:16:45 crc kubenswrapper[4689]: I1210 12:16:45.249845 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:45Z","lastTransitionTime":"2025-12-10T12:16:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 12:16:45 crc kubenswrapper[4689]: I1210 12:16:45.352924 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:16:45 crc kubenswrapper[4689]: I1210 12:16:45.353041 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:16:45 crc kubenswrapper[4689]: I1210 12:16:45.353065 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:16:45 crc kubenswrapper[4689]: I1210 12:16:45.353096 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:16:45 crc kubenswrapper[4689]: I1210 12:16:45.353118 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:45Z","lastTransitionTime":"2025-12-10T12:16:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 12:16:45 crc kubenswrapper[4689]: I1210 12:16:45.456138 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:16:45 crc kubenswrapper[4689]: I1210 12:16:45.456207 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:16:45 crc kubenswrapper[4689]: I1210 12:16:45.456225 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:16:45 crc kubenswrapper[4689]: I1210 12:16:45.456251 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:16:45 crc kubenswrapper[4689]: I1210 12:16:45.456322 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:45Z","lastTransitionTime":"2025-12-10T12:16:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 12:16:45 crc kubenswrapper[4689]: I1210 12:16:45.497485 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 10 12:16:45 crc kubenswrapper[4689]: I1210 12:16:45.497665 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2h8hs" Dec 10 12:16:45 crc kubenswrapper[4689]: I1210 12:16:45.497525 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 10 12:16:45 crc kubenswrapper[4689]: E1210 12:16:45.497799 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 10 12:16:45 crc kubenswrapper[4689]: E1210 12:16:45.497901 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2h8hs" podUID="3a7c472c-c2dd-4a9d-aefb-48ac5cc196f8" Dec 10 12:16:45 crc kubenswrapper[4689]: E1210 12:16:45.498019 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 10 12:16:45 crc kubenswrapper[4689]: I1210 12:16:45.557924 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:16:45 crc kubenswrapper[4689]: I1210 12:16:45.557999 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:16:45 crc kubenswrapper[4689]: I1210 12:16:45.558014 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:16:45 crc kubenswrapper[4689]: I1210 12:16:45.558034 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:16:45 crc kubenswrapper[4689]: I1210 12:16:45.558052 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:45Z","lastTransitionTime":"2025-12-10T12:16:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 12:16:45 crc kubenswrapper[4689]: I1210 12:16:45.660904 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:16:45 crc kubenswrapper[4689]: I1210 12:16:45.661730 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:16:45 crc kubenswrapper[4689]: I1210 12:16:45.661919 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:16:45 crc kubenswrapper[4689]: I1210 12:16:45.662104 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:16:45 crc kubenswrapper[4689]: I1210 12:16:45.662309 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:45Z","lastTransitionTime":"2025-12-10T12:16:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 12:16:45 crc kubenswrapper[4689]: I1210 12:16:45.765801 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:16:45 crc kubenswrapper[4689]: I1210 12:16:45.765866 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:16:45 crc kubenswrapper[4689]: I1210 12:16:45.765885 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:16:45 crc kubenswrapper[4689]: I1210 12:16:45.765911 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:16:45 crc kubenswrapper[4689]: I1210 12:16:45.765929 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:45Z","lastTransitionTime":"2025-12-10T12:16:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 12:16:45 crc kubenswrapper[4689]: I1210 12:16:45.869058 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:16:45 crc kubenswrapper[4689]: I1210 12:16:45.869135 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:16:45 crc kubenswrapper[4689]: I1210 12:16:45.869152 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:16:45 crc kubenswrapper[4689]: I1210 12:16:45.869177 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:16:45 crc kubenswrapper[4689]: I1210 12:16:45.869199 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:45Z","lastTransitionTime":"2025-12-10T12:16:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 12:16:45 crc kubenswrapper[4689]: I1210 12:16:45.972092 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:16:45 crc kubenswrapper[4689]: I1210 12:16:45.972150 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:16:45 crc kubenswrapper[4689]: I1210 12:16:45.972160 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:16:45 crc kubenswrapper[4689]: I1210 12:16:45.972181 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:16:45 crc kubenswrapper[4689]: I1210 12:16:45.972194 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:45Z","lastTransitionTime":"2025-12-10T12:16:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 12:16:46 crc kubenswrapper[4689]: I1210 12:16:46.080426 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:16:46 crc kubenswrapper[4689]: I1210 12:16:46.080493 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:16:46 crc kubenswrapper[4689]: I1210 12:16:46.080510 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:16:46 crc kubenswrapper[4689]: I1210 12:16:46.080536 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:16:46 crc kubenswrapper[4689]: I1210 12:16:46.080554 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:46Z","lastTransitionTime":"2025-12-10T12:16:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 12:16:46 crc kubenswrapper[4689]: I1210 12:16:46.183504 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:16:46 crc kubenswrapper[4689]: I1210 12:16:46.183567 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:16:46 crc kubenswrapper[4689]: I1210 12:16:46.183585 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:16:46 crc kubenswrapper[4689]: I1210 12:16:46.183612 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:16:46 crc kubenswrapper[4689]: I1210 12:16:46.183630 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:46Z","lastTransitionTime":"2025-12-10T12:16:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 12:16:46 crc kubenswrapper[4689]: I1210 12:16:46.286330 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:16:46 crc kubenswrapper[4689]: I1210 12:16:46.286379 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:16:46 crc kubenswrapper[4689]: I1210 12:16:46.286396 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:16:46 crc kubenswrapper[4689]: I1210 12:16:46.286419 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:16:46 crc kubenswrapper[4689]: I1210 12:16:46.286434 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:46Z","lastTransitionTime":"2025-12-10T12:16:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 12:16:46 crc kubenswrapper[4689]: I1210 12:16:46.391090 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:16:46 crc kubenswrapper[4689]: I1210 12:16:46.391155 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:16:46 crc kubenswrapper[4689]: I1210 12:16:46.391182 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:16:46 crc kubenswrapper[4689]: I1210 12:16:46.391209 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:16:46 crc kubenswrapper[4689]: I1210 12:16:46.391230 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:46Z","lastTransitionTime":"2025-12-10T12:16:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 12:16:46 crc kubenswrapper[4689]: I1210 12:16:46.493946 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:16:46 crc kubenswrapper[4689]: I1210 12:16:46.494054 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:16:46 crc kubenswrapper[4689]: I1210 12:16:46.494080 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:16:46 crc kubenswrapper[4689]: I1210 12:16:46.494109 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:16:46 crc kubenswrapper[4689]: I1210 12:16:46.494130 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:46Z","lastTransitionTime":"2025-12-10T12:16:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 12:16:46 crc kubenswrapper[4689]: I1210 12:16:46.497678 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 10 12:16:46 crc kubenswrapper[4689]: E1210 12:16:46.497807 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 10 12:16:46 crc kubenswrapper[4689]: I1210 12:16:46.596580 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:16:46 crc kubenswrapper[4689]: I1210 12:16:46.596624 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:16:46 crc kubenswrapper[4689]: I1210 12:16:46.596640 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:16:46 crc kubenswrapper[4689]: I1210 12:16:46.596661 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:16:46 crc kubenswrapper[4689]: I1210 12:16:46.596676 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:46Z","lastTransitionTime":"2025-12-10T12:16:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 12:16:46 crc kubenswrapper[4689]: I1210 12:16:46.699207 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:16:46 crc kubenswrapper[4689]: I1210 12:16:46.699259 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:16:46 crc kubenswrapper[4689]: I1210 12:16:46.699280 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:16:46 crc kubenswrapper[4689]: I1210 12:16:46.699307 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:16:46 crc kubenswrapper[4689]: I1210 12:16:46.699328 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:46Z","lastTransitionTime":"2025-12-10T12:16:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 12:16:46 crc kubenswrapper[4689]: I1210 12:16:46.801835 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:16:46 crc kubenswrapper[4689]: I1210 12:16:46.801893 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:16:46 crc kubenswrapper[4689]: I1210 12:16:46.801910 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:16:46 crc kubenswrapper[4689]: I1210 12:16:46.801933 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:16:46 crc kubenswrapper[4689]: I1210 12:16:46.801950 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:46Z","lastTransitionTime":"2025-12-10T12:16:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 12:16:46 crc kubenswrapper[4689]: I1210 12:16:46.905345 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:16:46 crc kubenswrapper[4689]: I1210 12:16:46.905432 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:16:46 crc kubenswrapper[4689]: I1210 12:16:46.905448 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:16:46 crc kubenswrapper[4689]: I1210 12:16:46.905470 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:16:46 crc kubenswrapper[4689]: I1210 12:16:46.905486 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:46Z","lastTransitionTime":"2025-12-10T12:16:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 12:16:47 crc kubenswrapper[4689]: I1210 12:16:47.007410 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:16:47 crc kubenswrapper[4689]: I1210 12:16:47.007464 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:16:47 crc kubenswrapper[4689]: I1210 12:16:47.007479 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:16:47 crc kubenswrapper[4689]: I1210 12:16:47.007503 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:16:47 crc kubenswrapper[4689]: I1210 12:16:47.007520 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:47Z","lastTransitionTime":"2025-12-10T12:16:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 12:16:47 crc kubenswrapper[4689]: I1210 12:16:47.110127 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:16:47 crc kubenswrapper[4689]: I1210 12:16:47.110191 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:16:47 crc kubenswrapper[4689]: I1210 12:16:47.110212 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:16:47 crc kubenswrapper[4689]: I1210 12:16:47.110242 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:16:47 crc kubenswrapper[4689]: I1210 12:16:47.110264 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:47Z","lastTransitionTime":"2025-12-10T12:16:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 12:16:47 crc kubenswrapper[4689]: I1210 12:16:47.213054 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:16:47 crc kubenswrapper[4689]: I1210 12:16:47.213114 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:16:47 crc kubenswrapper[4689]: I1210 12:16:47.213131 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:16:47 crc kubenswrapper[4689]: I1210 12:16:47.213153 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:16:47 crc kubenswrapper[4689]: I1210 12:16:47.213173 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:47Z","lastTransitionTime":"2025-12-10T12:16:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 12:16:47 crc kubenswrapper[4689]: I1210 12:16:47.315565 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:16:47 crc kubenswrapper[4689]: I1210 12:16:47.315597 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:16:47 crc kubenswrapper[4689]: I1210 12:16:47.315605 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:16:47 crc kubenswrapper[4689]: I1210 12:16:47.315620 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:16:47 crc kubenswrapper[4689]: I1210 12:16:47.315628 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:47Z","lastTransitionTime":"2025-12-10T12:16:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 12:16:47 crc kubenswrapper[4689]: I1210 12:16:47.418747 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:16:47 crc kubenswrapper[4689]: I1210 12:16:47.418812 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:16:47 crc kubenswrapper[4689]: I1210 12:16:47.418831 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:16:47 crc kubenswrapper[4689]: I1210 12:16:47.418852 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:16:47 crc kubenswrapper[4689]: I1210 12:16:47.418869 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:47Z","lastTransitionTime":"2025-12-10T12:16:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 12:16:47 crc kubenswrapper[4689]: I1210 12:16:47.497400 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 10 12:16:47 crc kubenswrapper[4689]: E1210 12:16:47.497508 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 10 12:16:47 crc kubenswrapper[4689]: I1210 12:16:47.497411 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 10 12:16:47 crc kubenswrapper[4689]: I1210 12:16:47.497407 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2h8hs" Dec 10 12:16:47 crc kubenswrapper[4689]: E1210 12:16:47.497574 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 10 12:16:47 crc kubenswrapper[4689]: E1210 12:16:47.497768 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-2h8hs" podUID="3a7c472c-c2dd-4a9d-aefb-48ac5cc196f8" Dec 10 12:16:47 crc kubenswrapper[4689]: I1210 12:16:47.521102 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:16:47 crc kubenswrapper[4689]: I1210 12:16:47.521159 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:16:47 crc kubenswrapper[4689]: I1210 12:16:47.521208 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:16:47 crc kubenswrapper[4689]: I1210 12:16:47.521232 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:16:47 crc kubenswrapper[4689]: I1210 12:16:47.521250 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:47Z","lastTransitionTime":"2025-12-10T12:16:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 12:16:47 crc kubenswrapper[4689]: I1210 12:16:47.624227 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:16:47 crc kubenswrapper[4689]: I1210 12:16:47.624253 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:16:47 crc kubenswrapper[4689]: I1210 12:16:47.624261 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:16:47 crc kubenswrapper[4689]: I1210 12:16:47.624274 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:16:47 crc kubenswrapper[4689]: I1210 12:16:47.624282 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:47Z","lastTransitionTime":"2025-12-10T12:16:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 12:16:47 crc kubenswrapper[4689]: I1210 12:16:47.681355 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/3a7c472c-c2dd-4a9d-aefb-48ac5cc196f8-metrics-certs\") pod \"network-metrics-daemon-2h8hs\" (UID: \"3a7c472c-c2dd-4a9d-aefb-48ac5cc196f8\") " pod="openshift-multus/network-metrics-daemon-2h8hs" Dec 10 12:16:47 crc kubenswrapper[4689]: E1210 12:16:47.681579 4689 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 10 12:16:47 crc kubenswrapper[4689]: E1210 12:16:47.681668 4689 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3a7c472c-c2dd-4a9d-aefb-48ac5cc196f8-metrics-certs podName:3a7c472c-c2dd-4a9d-aefb-48ac5cc196f8 nodeName:}" failed. No retries permitted until 2025-12-10 12:17:19.68164108 +0000 UTC m=+107.469722258 (durationBeforeRetry 32s). 
Dec 10 12:16:47 crc kubenswrapper[4689]: I1210 12:16:47.727643 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 10 12:16:47 crc kubenswrapper[4689]: I1210 12:16:47.727703 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 10 12:16:47 crc kubenswrapper[4689]: I1210 12:16:47.727716 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 10 12:16:47 crc kubenswrapper[4689]: I1210 12:16:47.727735 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 10 12:16:47 crc kubenswrapper[4689]: I1210 12:16:47.727746 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:47Z","lastTransitionTime":"2025-12-10T12:16:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 10 12:16:47 crc kubenswrapper[4689]: I1210 12:16:47.830857 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 10 12:16:47 crc kubenswrapper[4689]: I1210 12:16:47.830926 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 10 12:16:47 crc kubenswrapper[4689]: I1210 12:16:47.830944 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 10 12:16:47 crc kubenswrapper[4689]: I1210 12:16:47.831020 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 10 12:16:47 crc kubenswrapper[4689]: I1210 12:16:47.831054 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:47Z","lastTransitionTime":"2025-12-10T12:16:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 10 12:16:47 crc kubenswrapper[4689]: I1210 12:16:47.934037 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 10 12:16:47 crc kubenswrapper[4689]: I1210 12:16:47.934096 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 10 12:16:47 crc kubenswrapper[4689]: I1210 12:16:47.934152 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 10 12:16:47 crc kubenswrapper[4689]: I1210 12:16:47.934177 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 10 12:16:47 crc kubenswrapper[4689]: I1210 12:16:47.934192 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:47Z","lastTransitionTime":"2025-12-10T12:16:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 10 12:16:48 crc kubenswrapper[4689]: I1210 12:16:48.042206 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 10 12:16:48 crc kubenswrapper[4689]: I1210 12:16:48.042272 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 10 12:16:48 crc kubenswrapper[4689]: I1210 12:16:48.042289 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 10 12:16:48 crc kubenswrapper[4689]: I1210 12:16:48.042311 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 10 12:16:48 crc kubenswrapper[4689]: I1210 12:16:48.042329 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:48Z","lastTransitionTime":"2025-12-10T12:16:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 10 12:16:48 crc kubenswrapper[4689]: I1210 12:16:48.145752 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 10 12:16:48 crc kubenswrapper[4689]: I1210 12:16:48.145808 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 10 12:16:48 crc kubenswrapper[4689]: I1210 12:16:48.145826 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 10 12:16:48 crc kubenswrapper[4689]: I1210 12:16:48.145849 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 10 12:16:48 crc kubenswrapper[4689]: I1210 12:16:48.145870 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:48Z","lastTransitionTime":"2025-12-10T12:16:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Has your network provider started?"} Dec 10 12:16:48 crc kubenswrapper[4689]: I1210 12:16:48.249110 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:16:48 crc kubenswrapper[4689]: I1210 12:16:48.249173 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:16:48 crc kubenswrapper[4689]: I1210 12:16:48.249193 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:16:48 crc kubenswrapper[4689]: I1210 12:16:48.249220 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:16:48 crc kubenswrapper[4689]: I1210 12:16:48.249237 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:48Z","lastTransitionTime":"2025-12-10T12:16:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 12:16:48 crc kubenswrapper[4689]: I1210 12:16:48.300274 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-r6wmt_3713b4f8-2ee3-4078-859a-dca17076f9a6/kube-multus/0.log" Dec 10 12:16:48 crc kubenswrapper[4689]: I1210 12:16:48.300356 4689 generic.go:334] "Generic (PLEG): container finished" podID="3713b4f8-2ee3-4078-859a-dca17076f9a6" containerID="41b7cefca05f1180d687337ba2a408d0db12aad394a7e9619e004758e5d79ca0" exitCode=1 Dec 10 12:16:48 crc kubenswrapper[4689]: I1210 12:16:48.300401 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-r6wmt" event={"ID":"3713b4f8-2ee3-4078-859a-dca17076f9a6","Type":"ContainerDied","Data":"41b7cefca05f1180d687337ba2a408d0db12aad394a7e9619e004758e5d79ca0"} Dec 10 12:16:48 crc kubenswrapper[4689]: I1210 12:16:48.301015 4689 scope.go:117] "RemoveContainer" containerID="41b7cefca05f1180d687337ba2a408d0db12aad394a7e9619e004758e5d79ca0" Dec 10 12:16:48 crc kubenswrapper[4689]: I1210 12:16:48.326635 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"948e2421-6bdf-45d9-b484-3d7cfcbeff5f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73fb5dc9c58a2a2027866853aecf404bcc2babdb9e5bbd3cf0efcb16f38ba351\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e167a00c3a4e40154ca58094cba41a896104fd81044158589ed4d7a06dba9e7d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://394f62bb6c4fa6d9b63f8404087ff96594e2e38240c6c7886ad71fbb7863681f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d3818f3ff6d23a4b5f19b82c3ce7c6a9a0275f981966b78d0f1d5977acfc5da\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cc1b41d65298f8986fa477bbe4462be91f61b46be4420fac49fe9b1cff91d4d3\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-10T12:15:57Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1210 12:15:46.775770 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1210 12:15:46.777515 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-772356655/tls.crt::/tmp/serving-cert-772356655/tls.key\\\\\\\"\\\\nI1210 12:15:55.300683 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1210 12:15:55.311834 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1210 12:15:55.311870 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1210 12:15:55.311907 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1210 12:15:55.311937 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1210 12:15:55.334823 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1210 12:15:55.334877 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1210 12:15:55.334889 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1210 12:15:55.334913 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1210 12:15:55.334923 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1210 12:15:55.334932 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1210 12:15:55.334948 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1210 12:15:55.335346 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1210 12:15:55.337241 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-10T12:15:36Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://63c6b92fa349dafbaaa4e2700676f8c1c53627fc703dba25853c7457bf7eb439\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:34Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9e2628add435f4ba7d9b92d1f4e9cf619c37e8dc954d6fce13ef72581ae775e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9e2628add435f4ba7d9b92d1f4e9cf619c37e8dc954d6fce13ef72581ae775e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T12:15:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T12:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T12:15:32Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:48Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:48 crc kubenswrapper[4689]: I1210 12:16:48.351834 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:16:48 crc kubenswrapper[4689]: I1210 12:16:48.351896 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:16:48 crc kubenswrapper[4689]: I1210 12:16:48.351913 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:16:48 crc kubenswrapper[4689]: I1210 12:16:48.351938 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:16:48 crc kubenswrapper[4689]: I1210 12:16:48.351956 4689 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:48Z","lastTransitionTime":"2025-12-10T12:16:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 12:16:48 crc kubenswrapper[4689]: I1210 12:16:48.354789 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"18bf8be2-3022-4c8b-a062-e12ddac113a8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://44524b486bd4c6b69eb1219605d0545eaa129605484dfbca87ca741ce71c2991\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://604acbb51bb890eb420955051c89aa5a8f2005ce63f1fe39c09c35dc3c8b4470\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d4addfb661bd4ba43817e6e5db29f30589839e5afe46a8f271e43c98920efdc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastS
tate\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a68967b286ccc9c486e05b03201574bfa811ee68b0cbcbb2b0b1a379f5c1bab\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T12:15:32Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:48Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:48 crc kubenswrapper[4689]: I1210 12:16:48.376527 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:57Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:48Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:48 crc kubenswrapper[4689]: I1210 12:16:48.411266 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-db6zk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a41ebdcd-910f-4669-992d-296e1a92162b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://21eff13bf08e5631078b18a6e63eea07f7ae0881f4d340b92273d47e443fc71c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:16:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68btn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ffc5e2e1cb81aa530273fc5c44b4eb25b760dfffa203e312280e7a1c0150505\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\
\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:16:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68btn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T12:16:01Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-db6zk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:48Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:48 crc kubenswrapper[4689]: I1210 12:16:48.430253 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-k7kbl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bf732b59-88ab-4673-9d0c-e479b9138b30\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6885c72d7b303bd0ad949b6b9f582f3c1a503d3ac4d1871ce164351cc5231233\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:16:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6jq2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b32c1925876e067185b141b3e107d582b5ae3b422965cded935fc67935907244\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\
"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b32c1925876e067185b141b3e107d582b5ae3b422965cded935fc67935907244\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T12:16:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T12:16:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6jq2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://56edfde6c25c67ed13c859224563601ca60ff86122c0ccfd0eafd009f56c2c6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://56edfde6c25c67ed13c859224563601ca60ff86122c0ccfd0eafd009f56c2c6a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T12:16:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T12:16:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6jq2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6972bdb1edc7d0216e984f43dd93319f5fbe91dbbf251fddcab234566c12694\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6972bdb1edc7d0216e984f43dd93319f5fbe91dbbf251fddcab234566c12694\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T12:16:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T12:16:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount
\\\",\\\"name\\\":\\\"kube-api-access-6jq2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://92ed0a4af61a7d20d96a8628e34b3b07cf93fb88bb8c67a2db58c140977d4b8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92ed0a4af61a7d20d96a8628e34b3b07cf93fb88bb8c67a2db58c140977d4b8f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T12:16:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T12:16:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6jq2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a00343d8419f3bd26ca5eebc37892fa34c893db822a472b2d16b79b9a886130\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3a00343d8419f3bd26ca5eebc37892fa34c893db822a472b2d16b79b9a886130\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T12:16:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T12:16:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6jq2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae1b0abfe7d118fce3906b0fb8609b89bb5a7d9a72e77c989248a37fbcaca589\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ae1b0abfe7d118fce3906b0fb8609b89bb5a7d9a72e77c989248a37fbcaca589\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T12:16:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T12:16:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\
"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6jq2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T12:16:01Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-k7kbl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:48Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:48 crc kubenswrapper[4689]: I1210 12:16:48.456366 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:16:48 crc kubenswrapper[4689]: I1210 12:16:48.456429 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:16:48 crc kubenswrapper[4689]: I1210 12:16:48.456451 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:16:48 crc kubenswrapper[4689]: I1210 12:16:48.456481 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:16:48 crc kubenswrapper[4689]: I1210 12:16:48.456503 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:48Z","lastTransitionTime":"2025-12-10T12:16:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 12:16:48 crc kubenswrapper[4689]: I1210 12:16:48.458006 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-d5s24" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b434fe8e-c4c2-4979-a9b6-8561523c2d9d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4dd21d5fefe31117dd9f13896400256a11840488886f8455dc177c0389e942a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:16:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgcnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://69f1e8960af0a51f61c14bd54dc3d3be7189911242699061b6a5e265bf23da21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:16:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgcnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\
":\\\"cri-o://6523d85b2b13f0025f328e66fc54d988a262d66575656c50d792bebc6a422301\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:16:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgcnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4faf578e1c2dcab798cd856befbd2dfc061f7e003ead2304d437041d5ee1dd07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:16:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgcnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eebe28cf16767ec011f1e571cdb5f6babf931f4444e92fccf2f1bb5aa5deeec9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:16:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgcnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://17d173f3388ef7471022163cbdf3e0a2454e42b1ed621b15512414504b2d9723\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:16:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgcnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f88a12855fe2301c439492cf5773c7088e33a7d6bcca5354ebcfd8d7a64bcf9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f88a12855fe2301c439492cf5773c7088e33a7d6bcca5354ebcfd8d7a64bcf9\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-10T12:16:26Z\\\",\\\"message\\\":\\\"all/v1/apis/informers/externalversions/factory.go:140\\\\nI1210 12:16:26.477038 6326 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1210 12:16:26.477070 6326 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1210 12:16:26.477096 6326 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1210 12:16:26.477127 6326 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1210 12:16:26.477136 6326 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1210 12:16:26.477156 6326 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1210 12:16:26.477157 6326 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1210 12:16:26.477180 6326 handler.go:208] Removed *v1.Node event handler 2\\\\nI1210 12:16:26.477212 6326 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1210 12:16:26.477246 6326 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1210 12:16:26.477297 6326 handler.go:208] Removed *v1.Node event handler 7\\\\nI1210 12:16:26.477299 6326 factory.go:656] Stopping watch factory\\\\nI1210 12:16:26.477310 6326 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1210 12:16:26.477326 6326 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1210 12:16:26.477355 6326 handler.go:208] Removed *v1.Namespace ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-10T12:16:25Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-d5s24_openshift-ovn-kubernetes(b434fe8e-c4c2-4979-a9b6-8561523c2d9d)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgcnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e28b779b8a8693084ab0997decc53b3a8eba61bef24ac90ff251d915f086568c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:16:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgcnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5b9a4f871b33ab8d7dbb8148e8a118858ae0d529510a8aebda0a3d9db97c137\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b5b9a4f871b33ab8d7dbb8148e8a118858ae0d529510a8aebda0a3d9db97c137\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T12:16:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T12:16:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgcnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T12:16:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-d5s24\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:48Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:48 crc kubenswrapper[4689]: I1210 12:16:48.474030 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06f8726511d5ea564b457eeb6de5b75275d78ee946595d8fbe37e575f3a1b4ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://879d2e122b3f677f755d79b93cd0abed60360a7b58bd685fbc229157c3358bbf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17
b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:48Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:48 crc kubenswrapper[4689]: I1210 12:16:48.491139 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:57Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:48Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:48 crc kubenswrapper[4689]: I1210 12:16:48.499288 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 10 12:16:48 crc kubenswrapper[4689]: E1210 12:16:48.499472 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 10 12:16:48 crc kubenswrapper[4689]: I1210 12:16:48.507965 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-6ffqh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7908b49b-e3d2-4d30-95e0-467f5542d445\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34ba576e7b07f1a070450f2a750ed7f9e1ed7b3023f13340ffadbdbbc2f564bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:16:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8t2fq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5081b88bdb2e39135bca6cf70283aeb3f9541d28ab7336c5fa9d17f28496f7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:16:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8t2fq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T12:16:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-6ffqh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:48Z is after 2025-08-24T17:21:41Z" Dec 10 
12:16:48 crc kubenswrapper[4689]: I1210 12:16:48.522911 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81ec49dd709fb88fdd4a831be61a82471be37ad10b0dca1cb0e34468ef5e0e3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:16:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:48Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:48 crc kubenswrapper[4689]: I1210 12:16:48.536506 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-77s7q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9f25cd73-d88f-4d52-93b6-483589dc4ac4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8ffa3e3c20704e59130d99f82bf7c73982dadfcd06b8b666265324aeb9115ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:16:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kwnnt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T12:16:01Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-77s7q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:48Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:48 crc kubenswrapper[4689]: I1210 12:16:48.553753 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f310ccc9-5093-44fe-8271-32ffed1e4596\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be1e74c974ed4454098265113e97e5d5d27b2ddb6ff5d89cb891c1686290f8f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://14c34a55854008d99289ec6ff7df4ab7b8c60658569094a88e6740ceb28714d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://66af8611123a29e82941af3a1facb45f06c358ffb277bd037af034ae97364d56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b2e1ebfdbacde06d36ebe61a04c2977a980def50e80715de5f8a9a292e661915\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b2e1ebfdbacde06d36ebe61a04c2977a980def50e80715de5f8a9a292e661915\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T12:15:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T12:15:33Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T12:15:32Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:48Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:48 crc kubenswrapper[4689]: I1210 12:16:48.558625 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:16:48 crc kubenswrapper[4689]: I1210 12:16:48.558660 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:16:48 crc kubenswrapper[4689]: I1210 12:16:48.558669 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:16:48 crc kubenswrapper[4689]: I1210 12:16:48.558686 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:16:48 crc kubenswrapper[4689]: I1210 12:16:48.558695 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:48Z","lastTransitionTime":"2025-12-10T12:16:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 12:16:48 crc kubenswrapper[4689]: I1210 12:16:48.567801 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef7248d1db79267842514a13623e9e08f8e412e4af33a70b4fceafe7a9e43392\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:48Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:48 crc kubenswrapper[4689]: I1210 12:16:48.584192 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:48Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:48 crc kubenswrapper[4689]: I1210 12:16:48.601232 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-r6wmt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3713b4f8-2ee3-4078-859a-dca17076f9a6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://41b7cefca05f1180d687337ba2a408d0db12aad394a7e9619e004758e5d79ca0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41b7cefca05f1180d687337ba2a408d0db12aad394a7e9619e004758e5d79ca0\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-10T12:16:47Z\\\",\\\"message\\\":\\\"2025-12-10T12:16:02+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_3a34b2bc-7e9b-4e1e-875d-36e662eb6643\\\\n2025-12-10T12:16:02+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_3a34b2bc-7e9b-4e1e-875d-36e662eb6643 to /host/opt/cni/bin/\\\\n2025-12-10T12:16:02Z [verbose] multus-daemon started\\\\n2025-12-10T12:16:02Z [verbose] Readiness Indicator file check\\\\n2025-12-10T12:16:47Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-10T12:16:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lmrn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T12:16:01Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-r6wmt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:48Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:48 crc kubenswrapper[4689]: I1210 12:16:48.610431 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-dk9hq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4aff4b91-a4af-46bd-93ba-a7c1356fc498\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e400561fd4aa4adc6f0733a71b55aff1a6163a71bf34147aeeb41f95001d8222\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:16:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c2rw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T12:16:04Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-dk9hq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:48Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:48 crc kubenswrapper[4689]: I1210 12:16:48.621301 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-2h8hs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a7c472c-c2dd-4a9d-aefb-48ac5cc196f8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kcnt5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kcnt5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T12:16:15Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-2h8hs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:48Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:48 crc kubenswrapper[4689]: I1210 12:16:48.661061 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:16:48 crc kubenswrapper[4689]: I1210 12:16:48.661111 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:16:48 crc kubenswrapper[4689]: I1210 12:16:48.661125 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Dec 10 12:16:48 crc kubenswrapper[4689]: I1210 12:16:48.661142 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:16:48 crc kubenswrapper[4689]: I1210 12:16:48.661154 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:48Z","lastTransitionTime":"2025-12-10T12:16:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 12:16:48 crc kubenswrapper[4689]: I1210 12:16:48.763756 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:16:48 crc kubenswrapper[4689]: I1210 12:16:48.763791 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:16:48 crc kubenswrapper[4689]: I1210 12:16:48.763799 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:16:48 crc kubenswrapper[4689]: I1210 12:16:48.763812 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:16:48 crc kubenswrapper[4689]: I1210 12:16:48.763821 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:48Z","lastTransitionTime":"2025-12-10T12:16:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 12:16:48 crc kubenswrapper[4689]: I1210 12:16:48.867534 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:16:48 crc kubenswrapper[4689]: I1210 12:16:48.867574 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:16:48 crc kubenswrapper[4689]: I1210 12:16:48.867586 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:16:48 crc kubenswrapper[4689]: I1210 12:16:48.867602 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:16:48 crc kubenswrapper[4689]: I1210 12:16:48.867616 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:48Z","lastTransitionTime":"2025-12-10T12:16:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 12:16:48 crc kubenswrapper[4689]: I1210 12:16:48.970554 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:16:48 crc kubenswrapper[4689]: I1210 12:16:48.970623 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:16:48 crc kubenswrapper[4689]: I1210 12:16:48.970640 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:16:48 crc kubenswrapper[4689]: I1210 12:16:48.970668 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:16:48 crc kubenswrapper[4689]: I1210 12:16:48.970688 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:48Z","lastTransitionTime":"2025-12-10T12:16:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 12:16:49 crc kubenswrapper[4689]: I1210 12:16:49.074083 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:16:49 crc kubenswrapper[4689]: I1210 12:16:49.074123 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:16:49 crc kubenswrapper[4689]: I1210 12:16:49.074133 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:16:49 crc kubenswrapper[4689]: I1210 12:16:49.074150 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:16:49 crc kubenswrapper[4689]: I1210 12:16:49.074161 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:49Z","lastTransitionTime":"2025-12-10T12:16:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 12:16:49 crc kubenswrapper[4689]: I1210 12:16:49.177472 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:16:49 crc kubenswrapper[4689]: I1210 12:16:49.177553 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:16:49 crc kubenswrapper[4689]: I1210 12:16:49.177587 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:16:49 crc kubenswrapper[4689]: I1210 12:16:49.177616 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:16:49 crc kubenswrapper[4689]: I1210 12:16:49.177639 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:49Z","lastTransitionTime":"2025-12-10T12:16:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 12:16:49 crc kubenswrapper[4689]: I1210 12:16:49.280654 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:16:49 crc kubenswrapper[4689]: I1210 12:16:49.280715 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:16:49 crc kubenswrapper[4689]: I1210 12:16:49.280734 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:16:49 crc kubenswrapper[4689]: I1210 12:16:49.280761 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:16:49 crc kubenswrapper[4689]: I1210 12:16:49.280777 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:49Z","lastTransitionTime":"2025-12-10T12:16:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 12:16:49 crc kubenswrapper[4689]: I1210 12:16:49.306321 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-r6wmt_3713b4f8-2ee3-4078-859a-dca17076f9a6/kube-multus/0.log" Dec 10 12:16:49 crc kubenswrapper[4689]: I1210 12:16:49.306375 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-r6wmt" event={"ID":"3713b4f8-2ee3-4078-859a-dca17076f9a6","Type":"ContainerStarted","Data":"60b9e7f27ed577b35c0ecc9f2f205e63e25d0fc8c22e4e49e13cd395ff14a83e"} Dec 10 12:16:49 crc kubenswrapper[4689]: I1210 12:16:49.332460 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef7248d1db79267842514a13623e9e08f8e412e4af33a70b4fceafe7a9e43392\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:49Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:49 crc kubenswrapper[4689]: I1210 12:16:49.351750 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:49Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:49 crc kubenswrapper[4689]: I1210 12:16:49.367019 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-r6wmt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3713b4f8-2ee3-4078-859a-dca17076f9a6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60b9e7f27ed577b35c0ecc9f2f205e63e25d0fc8c22e4e49e13cd395ff14a83e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41b7cefca05f1180d687337ba2a408d0db12aad394a7e9619e004758e5d79ca0\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-10T12:16:47Z\\\",\\\"message\\\":\\\"2025-12-10T12:16:02+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_3a34b2bc-7e9b-4e1e-875d-36e662eb6643\\\\n2025-12-10T12:16:02+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_3a34b2bc-7e9b-4e1e-875d-36e662eb6643 to /host/opt/cni/bin/\\\\n2025-12-10T12:16:02Z [verbose] multus-daemon started\\\\n2025-12-10T12:16:02Z [verbose] Readiness Indicator file check\\\\n2025-12-10T12:16:47Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-10T12:16:02Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:16:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lmrn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T12:16:01Z\\\"}}\" for pod \"openshift-multus\"/\"multus-r6wmt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:49Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:49 crc kubenswrapper[4689]: I1210 12:16:49.381010 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-dk9hq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4aff4b91-a4af-46bd-93ba-a7c1356fc498\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e400561fd4aa4adc6f0733a71b55aff1a6163a71bf34147aeeb41f95001d8222\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:16:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c2rw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T12:16:04Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-dk9hq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:49Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:49 crc kubenswrapper[4689]: I1210 12:16:49.383664 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:16:49 crc kubenswrapper[4689]: I1210 12:16:49.383722 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:16:49 crc kubenswrapper[4689]: I1210 12:16:49.383746 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:16:49 crc kubenswrapper[4689]: I1210 12:16:49.383776 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:16:49 crc kubenswrapper[4689]: I1210 12:16:49.383801 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:49Z","lastTransitionTime":"2025-12-10T12:16:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 12:16:49 crc kubenswrapper[4689]: I1210 12:16:49.391574 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-2h8hs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a7c472c-c2dd-4a9d-aefb-48ac5cc196f8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kcnt5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kcnt5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T12:16:15Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-2h8hs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:49Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:49 crc kubenswrapper[4689]: I1210 12:16:49.404489 4689 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f310ccc9-5093-44fe-8271-32ffed1e4596\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be1e74c974ed4454098265113e97e5d5d27b2ddb6ff5d89cb891c1686290f8f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://14c34a55854008d99289ec6ff7df4ab7b8c60658569094a88e6740ceb28714d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://66af8611123a29e82941af3a1facb45f06c358ffb277bd037af034ae97364d56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"c
ontainerID\\\":\\\"cri-o://b2e1ebfdbacde06d36ebe61a04c2977a980def50e80715de5f8a9a292e661915\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b2e1ebfdbacde06d36ebe61a04c2977a980def50e80715de5f8a9a292e661915\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T12:15:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T12:15:33Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T12:15:32Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:49Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:49 crc kubenswrapper[4689]: I1210 12:16:49.418426 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"18bf8be2-3022-4c8b-a062-e12ddac113a8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://44524b486bd4c6b69eb1219605d0545eaa129605484dfbca87ca741ce71c2991\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://604acbb51bb890eb420955051c89aa5a8f2005ce63f1fe39c09c35dc3c8b4470\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-relea
se-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d4addfb661bd4ba43817e6e5db29f30589839e5afe46a8f271e43c98920efdc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a68967b286ccc9c486e05b03201574bfa811ee68b0cbcbb2b0b1a379f5c1bab\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T12:15:32Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:49Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:49 crc kubenswrapper[4689]: I1210 12:16:49.437157 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:57Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:49Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:49 crc kubenswrapper[4689]: I1210 12:16:49.454140 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-db6zk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a41ebdcd-910f-4669-992d-296e1a92162b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://21eff13bf08e5631078b18a6e63eea07f7ae0881f4d340b92273d47e443fc71c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:16:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68btn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ffc5e2e1cb81aa530273fc5c44b4eb25b760dfffa203e312280e7a1c0150505\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:16:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68btn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T12:16:01Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-db6zk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:49Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:49 crc kubenswrapper[4689]: I1210 12:16:49.475156 4689 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-additional-cni-plugins-k7kbl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bf732b59-88ab-4673-9d0c-e479b9138b30\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6885c72d7b303bd0ad949b6b9f582f3c1a503d3ac4d1871ce164351cc5231233\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:16:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6jq2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b32c1925876e067185b141b3e107d582b5ae3b422965cded935fc67935907244\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b32c1925876e067185b141b3e107d582b5ae3b422965cded935fc67935907244\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T12:16:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T12:16:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6jq2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://56edfde6c25c67ed13c859224563601ca60ff86122c0ccfd0eafd009f56c2c6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",
\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://56edfde6c25c67ed13c859224563601ca60ff86122c0ccfd0eafd009f56c2c6a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T12:16:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T12:16:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6jq2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6972bdb1edc7d0216e984f43dd93319f5fbe91dbbf251fddcab234566c12694\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6972bdb1edc7d0216e984f43dd93319f5fbe91dbbf251fddcab234566c12694\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T12:16:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T12:16:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6jq2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://92ed0a4af61a7d20d96a8628e34b3b07cf93fb88bb8c67a2db58c140977d4b8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92ed0a4af61a7d20d96a8628e34b3b07cf93fb88bb8c67a2db58c140977d4b8f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T12:16:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T12:16:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",
\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6jq2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a00343d8419f3bd26ca5eebc37892fa34c893db822a472b2d16b79b9a886130\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3a00343d8419f3bd26ca5eebc37892fa34c893db822a472b2d16b79b9a886130\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T12:16:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T12:16:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6jq2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae1b0abfe7d118fce3906b0fb8609b89bb5a7d9a72e77c989248a37fbcaca589\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ae1b0abfe7d118fce3906b0fb8609b89bb5a7d9a72e77c989248a37fbcaca589\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T12:16:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T12:16:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6jq2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T12:16:01Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-k7kbl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:49Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:49 crc kubenswrapper[4689]: I1210 12:16:49.486663 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:16:49 crc kubenswrapper[4689]: I1210 12:16:49.486728 4689 kubelet_node_status.go:724]
"Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:16:49 crc kubenswrapper[4689]: I1210 12:16:49.486745 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:16:49 crc kubenswrapper[4689]: I1210 12:16:49.486770 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:16:49 crc kubenswrapper[4689]: I1210 12:16:49.486790 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:49Z","lastTransitionTime":"2025-12-10T12:16:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 12:16:49 crc kubenswrapper[4689]: I1210 12:16:49.497057 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2h8hs" Dec 10 12:16:49 crc kubenswrapper[4689]: I1210 12:16:49.497138 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 10 12:16:49 crc kubenswrapper[4689]: E1210 12:16:49.497190 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2h8hs" podUID="3a7c472c-c2dd-4a9d-aefb-48ac5cc196f8" Dec 10 12:16:49 crc kubenswrapper[4689]: I1210 12:16:49.497212 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 10 12:16:49 crc kubenswrapper[4689]: E1210 12:16:49.497333 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 10 12:16:49 crc kubenswrapper[4689]: E1210 12:16:49.497486 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 10 12:16:49 crc kubenswrapper[4689]: I1210 12:16:49.513788 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-d5s24" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b434fe8e-c4c2-4979-a9b6-8561523c2d9d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4dd21d5fefe31117dd9f13896400256a11840488886f8455dc177c0389e942a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:16:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgcnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://69f1e8960af0a51f61c14bd54dc3d3be7189911242699061b6a5e265bf23da21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:16:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgcnl\\\",\\\"readOnly\\\
":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6523d85b2b13f0025f328e66fc54d988a262d66575656c50d792bebc6a422301\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:16:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgcnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4faf578e1c2dcab798cd856befbd2dfc061f7e003ead2304d437041d5ee1dd07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:16:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgcnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eebe28cf16767ec011f1e571cdb5f6babf931f4444e92fccf2f1bb5aa5deeec9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:16:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgcnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://17d173f3388ef7471022163cbdf3e0a2454e42b1ed621b15512414504b2d9723\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d209948
2919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:16:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgcnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f88a12855fe2301c439492cf5773c7088e33a7d6bcca5354ebcfd8d7a64bcf9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f88a12855fe2301c439492cf5773c7088e33a7d6bcca5354ebcfd8d7a64bcf9\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-10T12:16:26Z\\\",\\\"message\\\":\\\"all/v1/apis/informers/externalversions/factory.go:140\\\\nI1210 12:16:26.477038 6326 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1210 12:16:26.477070 6326 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1210 12:16:26.477096 6326 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1210 12:16:26.477127 6326 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1210 12:16:26.477136 6326 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1210 12:16:26.477156 6326 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1210 12:16:26.477157 6326 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1210 12:16:26.477180 6326 handler.go:208] Removed *v1.Node event handler 2\\\\nI1210 12:16:26.477212 6326 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1210 12:16:26.477246 6326 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1210 12:16:26.477297 6326 handler.go:208] Removed *v1.Node event handler 7\\\\nI1210 12:16:26.477299 6326 factory.go:656] Stopping watch factory\\\\nI1210 12:16:26.477310 6326 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1210 12:16:26.477326 6326 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1210 12:16:26.477355 6326 handler.go:208] Removed *v1.Namespace ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-10T12:16:25Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed 
container=ovnkube-controller pod=ovnkube-node-d5s24_openshift-ovn-kubernetes(b434fe8e-c4c2-4979-a9b6-8561523c2d9d)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgcnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e28b779b8a8693084ab0997decc53b3a8eba61bef24ac90ff251d915f086568c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:16:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgcnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5b9a4f871b33ab8d7dbb8148e8a118858ae0d529510a8aebda0a3d9db97c137\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b5b9a4f871b33ab8d7dbb8148e8a118858ae0d529510a8aebda0a3d9db97c137\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T12:16:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T12:16:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgcnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T12:16:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-d5s24\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:49Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:49 crc kubenswrapper[4689]: I1210 12:16:49.532699 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"948e2421-6bdf-45d9-b484-3d7cfcbeff5f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73fb5dc9c58a2a2027866853aecf404bcc2babdb9e5bbd3cf0efcb16f38ba351\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e167a00c3a4e40154ca58094cba41a896104fd8104415
8589ed4d7a06dba9e7d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://394f62bb6c4fa6d9b63f8404087ff96594e2e38240c6c7886ad71fbb7863681f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d3818f3ff6d23a4b5f19b82c3ce7c6a9a0275f981966b78d0f1d5977acfc5da\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cc1b41d65298f8986fa477bbe4462be91f61b46be4420fac49fe9b1cff91d4d3\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-10T12:15:57Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1210 12:15:46.775770 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1210 12:15:46.777515 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-772356655/tls.crt::/tmp/serving-cert-772356655/tls.key\\\\\\\"\\\\nI1210 12:15:55.300683 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1210 12:15:55.311834 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1210 12:15:55.311870 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1210 12:15:55.311907 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1210 12:15:55.311937 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1210 12:15:55.334823 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1210 12:15:55.334877 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1210 12:15:55.334889 1 secure_serving.go:69] Use of insecure cipher 
'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1210 12:15:55.334913 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1210 12:15:55.334923 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1210 12:15:55.334932 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1210 12:15:55.334948 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1210 12:15:55.335346 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1210 12:15:55.337241 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-10T12:15:36Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://63c6b92fa349dafbaaa4e2700676f8c1c53627fc703dba25853c7457bf7eb439\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:34Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9e2628add435f4ba7d9b92d1f4e9cf619c37e8dc954d6fce13ef72581ae775e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9e2628add435f4ba7d9b92d1f4e9cf619c37e8dc954d6fce13ef72581ae775e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T12:15:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T12:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T12:15:32Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:49Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:49 crc kubenswrapper[4689]: I1210 12:16:49.552896 4689 status_manager.go:875] "Failed to update status 
for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-6ffqh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7908b49b-e3d2-4d30-95e0-467f5542d445\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34ba576e7b07f1a070450f2a750ed7f9e1ed7b3023f13340ffadbdbbc2f564bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:16:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8t2fq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5081b88bdb2e39135bca6cf70283aeb3f9541d28ab7336c5fa9d17f28496f7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:16:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8t2fq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T12:16:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-6ffqh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate 
has expired or is not yet valid: current time 2025-12-10T12:16:49Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:49 crc kubenswrapper[4689]: I1210 12:16:49.570959 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06f8726511d5ea564b457eeb6de5b75275d78ee946595d8fbe37e575f3a1b4ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://879d2e122b3f677f755d79b93cd0abed60360a7b58bd685fbc229157c3358bbf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:49Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:49 crc kubenswrapper[4689]: I1210 12:16:49.583954 4689 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:57Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:49Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:49 crc kubenswrapper[4689]: I1210 12:16:49.588524 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:16:49 crc kubenswrapper[4689]: I1210 12:16:49.588560 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:16:49 crc kubenswrapper[4689]: I1210 12:16:49.588571 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:16:49 crc kubenswrapper[4689]: I1210 12:16:49.588587 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:16:49 crc kubenswrapper[4689]: I1210 12:16:49.588599 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:49Z","lastTransitionTime":"2025-12-10T12:16:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 12:16:49 crc kubenswrapper[4689]: I1210 12:16:49.601299 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81ec49dd709fb88fdd4a831be61a82471be37ad10b0dca1cb0e34468ef5e0e3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:16:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:49Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:49 crc kubenswrapper[4689]: I1210 12:16:49.613255 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-77s7q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9f25cd73-d88f-4d52-93b6-483589dc4ac4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8ffa3e3c20704e59130d99f82bf7c73982dadfcd06b8b666265324aeb9115ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:16:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kwnnt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T12:16:01Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-77s7q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:49Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:49 crc kubenswrapper[4689]: I1210 12:16:49.691370 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:16:49 crc kubenswrapper[4689]: I1210 12:16:49.691450 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:16:49 crc kubenswrapper[4689]: I1210 12:16:49.691467 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:16:49 crc kubenswrapper[4689]: I1210 12:16:49.691491 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:16:49 crc kubenswrapper[4689]: I1210 12:16:49.691508 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:49Z","lastTransitionTime":"2025-12-10T12:16:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 12:16:49 crc kubenswrapper[4689]: I1210 12:16:49.794152 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:16:49 crc kubenswrapper[4689]: I1210 12:16:49.794215 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:16:49 crc kubenswrapper[4689]: I1210 12:16:49.794233 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:16:49 crc kubenswrapper[4689]: I1210 12:16:49.794257 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:16:49 crc kubenswrapper[4689]: I1210 12:16:49.794276 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:49Z","lastTransitionTime":"2025-12-10T12:16:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 12:16:49 crc kubenswrapper[4689]: I1210 12:16:49.896335 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:16:49 crc kubenswrapper[4689]: I1210 12:16:49.896372 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:16:49 crc kubenswrapper[4689]: I1210 12:16:49.896384 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:16:49 crc kubenswrapper[4689]: I1210 12:16:49.896398 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:16:49 crc kubenswrapper[4689]: I1210 12:16:49.896409 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:49Z","lastTransitionTime":"2025-12-10T12:16:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 12:16:49 crc kubenswrapper[4689]: I1210 12:16:49.999601 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:16:49 crc kubenswrapper[4689]: I1210 12:16:49.999674 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:16:49 crc kubenswrapper[4689]: I1210 12:16:49.999699 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:16:50 crc kubenswrapper[4689]: I1210 12:16:49.999729 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:16:50 crc kubenswrapper[4689]: I1210 12:16:49.999750 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:49Z","lastTransitionTime":"2025-12-10T12:16:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 12:16:50 crc kubenswrapper[4689]: I1210 12:16:50.102925 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:16:50 crc kubenswrapper[4689]: I1210 12:16:50.103027 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:16:50 crc kubenswrapper[4689]: I1210 12:16:50.103047 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:16:50 crc kubenswrapper[4689]: I1210 12:16:50.103072 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:16:50 crc kubenswrapper[4689]: I1210 12:16:50.103089 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:50Z","lastTransitionTime":"2025-12-10T12:16:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 12:16:50 crc kubenswrapper[4689]: I1210 12:16:50.207159 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:16:50 crc kubenswrapper[4689]: I1210 12:16:50.207229 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:16:50 crc kubenswrapper[4689]: I1210 12:16:50.207246 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:16:50 crc kubenswrapper[4689]: I1210 12:16:50.207270 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:16:50 crc kubenswrapper[4689]: I1210 12:16:50.207287 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:50Z","lastTransitionTime":"2025-12-10T12:16:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 12:16:50 crc kubenswrapper[4689]: I1210 12:16:50.310512 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:16:50 crc kubenswrapper[4689]: I1210 12:16:50.310570 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:16:50 crc kubenswrapper[4689]: I1210 12:16:50.310590 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:16:50 crc kubenswrapper[4689]: I1210 12:16:50.310634 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:16:50 crc kubenswrapper[4689]: I1210 12:16:50.310662 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:50Z","lastTransitionTime":"2025-12-10T12:16:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 12:16:50 crc kubenswrapper[4689]: I1210 12:16:50.413846 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:16:50 crc kubenswrapper[4689]: I1210 12:16:50.413945 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:16:50 crc kubenswrapper[4689]: I1210 12:16:50.414045 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:16:50 crc kubenswrapper[4689]: I1210 12:16:50.414078 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:16:50 crc kubenswrapper[4689]: I1210 12:16:50.414097 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:50Z","lastTransitionTime":"2025-12-10T12:16:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 12:16:50 crc kubenswrapper[4689]: I1210 12:16:50.498120 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 10 12:16:50 crc kubenswrapper[4689]: E1210 12:16:50.498322 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 10 12:16:50 crc kubenswrapper[4689]: I1210 12:16:50.517259 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:16:50 crc kubenswrapper[4689]: I1210 12:16:50.517333 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:16:50 crc kubenswrapper[4689]: I1210 12:16:50.517359 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:16:50 crc kubenswrapper[4689]: I1210 12:16:50.517382 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:16:50 crc kubenswrapper[4689]: I1210 12:16:50.517400 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:50Z","lastTransitionTime":"2025-12-10T12:16:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 12:16:50 crc kubenswrapper[4689]: I1210 12:16:50.620812 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:16:50 crc kubenswrapper[4689]: I1210 12:16:50.620879 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:16:50 crc kubenswrapper[4689]: I1210 12:16:50.620896 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:16:50 crc kubenswrapper[4689]: I1210 12:16:50.620921 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:16:50 crc kubenswrapper[4689]: I1210 12:16:50.620939 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:50Z","lastTransitionTime":"2025-12-10T12:16:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 12:16:50 crc kubenswrapper[4689]: I1210 12:16:50.724782 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:16:50 crc kubenswrapper[4689]: I1210 12:16:50.724855 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:16:50 crc kubenswrapper[4689]: I1210 12:16:50.724878 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:16:50 crc kubenswrapper[4689]: I1210 12:16:50.724906 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:16:50 crc kubenswrapper[4689]: I1210 12:16:50.724926 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:50Z","lastTransitionTime":"2025-12-10T12:16:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 12:16:50 crc kubenswrapper[4689]: I1210 12:16:50.828128 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:16:50 crc kubenswrapper[4689]: I1210 12:16:50.828197 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:16:50 crc kubenswrapper[4689]: I1210 12:16:50.828216 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:16:50 crc kubenswrapper[4689]: I1210 12:16:50.828240 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:16:50 crc kubenswrapper[4689]: I1210 12:16:50.828262 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:50Z","lastTransitionTime":"2025-12-10T12:16:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 12:16:50 crc kubenswrapper[4689]: I1210 12:16:50.931469 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:16:50 crc kubenswrapper[4689]: I1210 12:16:50.931516 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:16:50 crc kubenswrapper[4689]: I1210 12:16:50.931533 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:16:50 crc kubenswrapper[4689]: I1210 12:16:50.931555 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:16:50 crc kubenswrapper[4689]: I1210 12:16:50.931573 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:50Z","lastTransitionTime":"2025-12-10T12:16:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 12:16:51 crc kubenswrapper[4689]: I1210 12:16:51.034784 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:16:51 crc kubenswrapper[4689]: I1210 12:16:51.034845 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:16:51 crc kubenswrapper[4689]: I1210 12:16:51.034861 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:16:51 crc kubenswrapper[4689]: I1210 12:16:51.034885 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:16:51 crc kubenswrapper[4689]: I1210 12:16:51.034903 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:51Z","lastTransitionTime":"2025-12-10T12:16:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 12:16:51 crc kubenswrapper[4689]: I1210 12:16:51.138427 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:16:51 crc kubenswrapper[4689]: I1210 12:16:51.138489 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:16:51 crc kubenswrapper[4689]: I1210 12:16:51.138514 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:16:51 crc kubenswrapper[4689]: I1210 12:16:51.138543 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:16:51 crc kubenswrapper[4689]: I1210 12:16:51.138566 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:51Z","lastTransitionTime":"2025-12-10T12:16:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 12:16:51 crc kubenswrapper[4689]: I1210 12:16:51.241923 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:16:51 crc kubenswrapper[4689]: I1210 12:16:51.242022 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:16:51 crc kubenswrapper[4689]: I1210 12:16:51.242047 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:16:51 crc kubenswrapper[4689]: I1210 12:16:51.242074 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:16:51 crc kubenswrapper[4689]: I1210 12:16:51.242093 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:51Z","lastTransitionTime":"2025-12-10T12:16:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 12:16:51 crc kubenswrapper[4689]: I1210 12:16:51.344964 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:16:51 crc kubenswrapper[4689]: I1210 12:16:51.345045 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:16:51 crc kubenswrapper[4689]: I1210 12:16:51.345068 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:16:51 crc kubenswrapper[4689]: I1210 12:16:51.345097 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:16:51 crc kubenswrapper[4689]: I1210 12:16:51.345119 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:51Z","lastTransitionTime":"2025-12-10T12:16:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 12:16:51 crc kubenswrapper[4689]: I1210 12:16:51.447522 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:16:51 crc kubenswrapper[4689]: I1210 12:16:51.447582 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:16:51 crc kubenswrapper[4689]: I1210 12:16:51.447599 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:16:51 crc kubenswrapper[4689]: I1210 12:16:51.447624 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:16:51 crc kubenswrapper[4689]: I1210 12:16:51.447644 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:51Z","lastTransitionTime":"2025-12-10T12:16:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 12:16:51 crc kubenswrapper[4689]: I1210 12:16:51.498134 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 10 12:16:51 crc kubenswrapper[4689]: I1210 12:16:51.498212 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2h8hs" Dec 10 12:16:51 crc kubenswrapper[4689]: E1210 12:16:51.498320 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
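The kubelet entries above keep failing pod sandbox creation for the same reason the node's Ready condition keeps flapping: the runtime finds no CNI configuration under /etc/kubernetes/cni/net.d/. As an illustrative aid (not part of the log), here is a minimal Python sketch of the presence check that message implies; the directory path is taken verbatim from the entries, while the script and its names (CNI_CONF_DIR, cni_configs) are this edit's own, and the extension filter is an assumption about what a CNI config loader accepts.

    #!/usr/bin/env python3
    # Triage sketch: reproduce the check implied by the kubelet message
    # "no CNI configuration file in /etc/kubernetes/cni/net.d/".
    import os

    CNI_CONF_DIR = "/etc/kubernetes/cni/net.d/"  # path copied verbatim from the log

    def cni_configs(conf_dir):
        """Return candidate CNI config files; an empty list matches NetworkPluginNotReady."""
        try:
            # .conf/.conflist/.json is an assumption about acceptable extensions.
            return sorted(f for f in os.listdir(conf_dir)
                          if f.endswith((".conf", ".conflist", ".json")))
        except FileNotFoundError:
            return []

    if __name__ == "__main__":
        found = cni_configs(CNI_CONF_DIR)
        if found:
            print("CNI configs present:", found)
        else:
            print("no CNI configuration file in", CNI_CONF_DIR, "- network plugin not ready")

An empty result here corresponds to the NetworkReady=false condition the kubelet reports on every heartbeat in this section.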
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 10 12:16:51 crc kubenswrapper[4689]: E1210 12:16:51.498536 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2h8hs" podUID="3a7c472c-c2dd-4a9d-aefb-48ac5cc196f8" Dec 10 12:16:51 crc kubenswrapper[4689]: I1210 12:16:51.498700 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 10 12:16:51 crc kubenswrapper[4689]: E1210 12:16:51.498940 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 10 12:16:51 crc kubenswrapper[4689]: I1210 12:16:51.551344 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:16:51 crc kubenswrapper[4689]: I1210 12:16:51.551401 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:16:51 crc kubenswrapper[4689]: I1210 12:16:51.551420 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:16:51 crc kubenswrapper[4689]: I1210 12:16:51.551446 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:16:51 crc kubenswrapper[4689]: I1210 12:16:51.551465 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:51Z","lastTransitionTime":"2025-12-10T12:16:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 12:16:51 crc kubenswrapper[4689]: I1210 12:16:51.656085 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:16:51 crc kubenswrapper[4689]: I1210 12:16:51.656465 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:16:51 crc kubenswrapper[4689]: I1210 12:16:51.656606 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:16:51 crc kubenswrapper[4689]: I1210 12:16:51.656757 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:16:51 crc kubenswrapper[4689]: I1210 12:16:51.656906 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:51Z","lastTransitionTime":"2025-12-10T12:16:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 12:16:51 crc kubenswrapper[4689]: I1210 12:16:51.760167 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:16:51 crc kubenswrapper[4689]: I1210 12:16:51.760220 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:16:51 crc kubenswrapper[4689]: I1210 12:16:51.760236 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:16:51 crc kubenswrapper[4689]: I1210 12:16:51.760251 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:16:51 crc kubenswrapper[4689]: I1210 12:16:51.760263 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:51Z","lastTransitionTime":"2025-12-10T12:16:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 12:16:51 crc kubenswrapper[4689]: I1210 12:16:51.865434 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:16:51 crc kubenswrapper[4689]: I1210 12:16:51.865517 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:16:51 crc kubenswrapper[4689]: I1210 12:16:51.865531 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:16:51 crc kubenswrapper[4689]: I1210 12:16:51.865559 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:16:51 crc kubenswrapper[4689]: I1210 12:16:51.865574 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:51Z","lastTransitionTime":"2025-12-10T12:16:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 12:16:51 crc kubenswrapper[4689]: I1210 12:16:51.969596 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:16:51 crc kubenswrapper[4689]: I1210 12:16:51.969665 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:16:51 crc kubenswrapper[4689]: I1210 12:16:51.969676 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:16:51 crc kubenswrapper[4689]: I1210 12:16:51.969693 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:16:51 crc kubenswrapper[4689]: I1210 12:16:51.969703 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:51Z","lastTransitionTime":"2025-12-10T12:16:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 12:16:52 crc kubenswrapper[4689]: I1210 12:16:52.076546 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:16:52 crc kubenswrapper[4689]: I1210 12:16:52.076604 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:16:52 crc kubenswrapper[4689]: I1210 12:16:52.076621 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:16:52 crc kubenswrapper[4689]: I1210 12:16:52.076644 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:16:52 crc kubenswrapper[4689]: I1210 12:16:52.076662 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:52Z","lastTransitionTime":"2025-12-10T12:16:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 12:16:52 crc kubenswrapper[4689]: I1210 12:16:52.179582 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:16:52 crc kubenswrapper[4689]: I1210 12:16:52.179798 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:16:52 crc kubenswrapper[4689]: I1210 12:16:52.179811 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:16:52 crc kubenswrapper[4689]: I1210 12:16:52.179827 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:16:52 crc kubenswrapper[4689]: I1210 12:16:52.179838 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:52Z","lastTransitionTime":"2025-12-10T12:16:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 12:16:52 crc kubenswrapper[4689]: I1210 12:16:52.282749 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:16:52 crc kubenswrapper[4689]: I1210 12:16:52.282787 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:16:52 crc kubenswrapper[4689]: I1210 12:16:52.282799 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:16:52 crc kubenswrapper[4689]: I1210 12:16:52.282814 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:16:52 crc kubenswrapper[4689]: I1210 12:16:52.282826 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:52Z","lastTransitionTime":"2025-12-10T12:16:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 12:16:52 crc kubenswrapper[4689]: I1210 12:16:52.384928 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:16:52 crc kubenswrapper[4689]: I1210 12:16:52.384985 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:16:52 crc kubenswrapper[4689]: I1210 12:16:52.384998 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:16:52 crc kubenswrapper[4689]: I1210 12:16:52.385014 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:16:52 crc kubenswrapper[4689]: I1210 12:16:52.385024 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:52Z","lastTransitionTime":"2025-12-10T12:16:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 12:16:52 crc kubenswrapper[4689]: I1210 12:16:52.487049 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:16:52 crc kubenswrapper[4689]: I1210 12:16:52.487085 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:16:52 crc kubenswrapper[4689]: I1210 12:16:52.487097 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:16:52 crc kubenswrapper[4689]: I1210 12:16:52.487114 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:16:52 crc kubenswrapper[4689]: I1210 12:16:52.487125 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:52Z","lastTransitionTime":"2025-12-10T12:16:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 12:16:52 crc kubenswrapper[4689]: I1210 12:16:52.497627 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 10 12:16:52 crc kubenswrapper[4689]: E1210 12:16:52.497731 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 10 12:16:52 crc kubenswrapper[4689]: I1210 12:16:52.510799 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81ec49dd709fb88fdd4a831be61a82471be37ad10b0dca1cb0e34468ef5e0e3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:16:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:52Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:52 crc kubenswrapper[4689]: I1210 12:16:52.521469 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-77s7q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9f25cd73-d88f-4d52-93b6-483589dc4ac4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8ffa3e3c20704e59130d99f82bf7c73982dadfcd06b8b666265324aeb9115ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:16:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kwnnt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T12:16:01Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-77s7q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:52Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:52 crc kubenswrapper[4689]: I1210 12:16:52.534821 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f310ccc9-5093-44fe-8271-32ffed1e4596\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be1e74c974ed4454098265113e97e5d5d27b2ddb6ff5d89cb891c1686290f8f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://14c34a55854008d99289ec6ff7df4ab7b8c60658569094a88e6740ceb28714d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://66af8611123a29e82941af3a1facb45f06c358ffb277bd037af034ae97364d56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b2e1ebfdbacde06d36ebe61a04c2977a980def50e80715de5f8a9a292e661915\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b2e1ebfdbacde06d36ebe61a04c2977a980def50e80715de5f8a9a292e661915\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T12:15:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T12:15:33Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T12:15:32Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:52Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:52 crc kubenswrapper[4689]: I1210 12:16:52.548902 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef7248d1db79267842514a13623e9e08f8e412e4af33a70b4fceafe7a9e43392\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:52Z is after 
Dec 10 12:16:52 crc kubenswrapper[4689]: I1210 12:16:52.592228 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:52Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:52 crc kubenswrapper[4689]: I1210 12:16:52.594605 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:16:52 crc kubenswrapper[4689]: I1210 12:16:52.594637 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:16:52 crc kubenswrapper[4689]: I1210 12:16:52.594648 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:16:52 crc kubenswrapper[4689]: I1210 12:16:52.594664 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:16:52 crc kubenswrapper[4689]: I1210 12:16:52.594674 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:52Z","lastTransitionTime":"2025-12-10T12:16:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 12:16:52 crc kubenswrapper[4689]: I1210 12:16:52.604543 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-r6wmt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3713b4f8-2ee3-4078-859a-dca17076f9a6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60b9e7f27ed577b35c0ecc9f2f205e63e25d0fc8c22e4e49e13cd395ff14a83e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41b7cefca05f1180d687337ba2a408d0db12aad394a7e9619e004758e5d79ca0\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-10T12:16:47Z\\\",\\\"message\\\":\\\"2025-12-10T12:16:02+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_3a34b2bc-7e9b-4e1e-875d-36e662eb6643\\\\n2025-12-10T12:16:02+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_3a34b2bc-7e9b-4e1e-875d-36e662eb6643 to /host/opt/cni/bin/\\\\n2025-12-10T12:16:02Z [verbose] multus-daemon started\\\\n2025-12-10T12:16:02Z [verbose] Readiness Indicator file check\\\\n2025-12-10T12:16:47Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-10T12:16:02Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:16:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lmrn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T12:16:01Z\\\"}}\" for pod \"openshift-multus\"/\"multus-r6wmt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:52Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:52 crc kubenswrapper[4689]: I1210 12:16:52.615135 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-dk9hq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4aff4b91-a4af-46bd-93ba-a7c1356fc498\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e400561fd4aa4adc6f0733a71b55aff1a6163a71bf34147aeeb41f95001d8222\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:16:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c2rw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T12:16:04Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-dk9hq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:52Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:52 crc kubenswrapper[4689]: I1210 12:16:52.626485 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-2h8hs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a7c472c-c2dd-4a9d-aefb-48ac5cc196f8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kcnt5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kcnt5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T12:16:15Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-2h8hs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:52Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:52 crc kubenswrapper[4689]: I1210 12:16:52.638447 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"948e2421-6bdf-45d9-b484-3d7cfcbeff5f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73fb5dc9c58a2a2027866853aecf404bcc2babdb9e5bbd3cf0efcb16f38ba351\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e167a00c3a4e40154ca58094cba41a896104fd81044158589ed4d7a06dba9e7d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://394f62bb6c4fa6d9b63f8404087ff96594e2e38240c6c7886ad71fbb7863681f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d3818f3ff6d23a4b5f19b82c3ce7c6a9a0275f981966b78d0f1d5977acfc5da\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cc1b41d65298f8986fa477bbe4462be91f61b46be4420fac49fe9b1cff91d4d3\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-10T12:15:57Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1210 12:15:46.775770 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1210 12:15:46.777515 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-772356655/tls.crt::/tmp/serving-cert-772356655/tls.key\\\\\\\"\\\\nI1210 12:15:55.300683 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1210 12:15:55.311834 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1210 12:15:55.311870 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1210 12:15:55.311907 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1210 12:15:55.311937 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1210 12:15:55.334823 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1210 12:15:55.334877 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1210 12:15:55.334889 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1210 12:15:55.334913 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1210 12:15:55.334923 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1210 12:15:55.334932 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1210 12:15:55.334948 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1210 12:15:55.335346 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1210 12:15:55.337241 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-10T12:15:36Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://63c6b92fa349dafbaaa4e2700676f8c1c53627fc703dba25853c7457bf7eb439\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:34Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9e2628add435f4ba7d9b92d1f4e9cf619c37e8dc954d6fce13ef72581ae775e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9e2628add435f4ba7d9b92d1f4e9cf619c37e8dc954d6fce13ef72581ae775e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T12:15:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T12:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T12:15:32Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:52Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:52 crc kubenswrapper[4689]: I1210 12:16:52.650917 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"18bf8be2-3022-4c8b-a062-e12ddac113a8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://44524b486bd4c6b69eb1219605d0545eaa129605484dfbca87ca741ce71c2991\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://604acbb51bb890eb420955051c89aa5a8f2005ce63f1fe39c09c35dc3c8b4470\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d4addfb661bd4ba43817e6e5db29f30589839e5afe46a8f271e43c98920efdc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a68967b286ccc9c486e05b03201574bfa811ee68b0cbcbb2b0b1a379f5c1bab\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T12:15:32Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:52Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:52 crc kubenswrapper[4689]: I1210 12:16:52.664814 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:57Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:52Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:52 crc kubenswrapper[4689]: I1210 12:16:52.678048 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-db6zk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a41ebdcd-910f-4669-992d-296e1a92162b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://21eff13bf08e5631078b18a6e63eea07f7ae0881f4d340b92273d47e443fc71c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:16:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68btn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ffc5e2e1cb81aa530273fc5c44b4eb25b760dfffa203e312280e7a1c0150505\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\
\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:16:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68btn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T12:16:01Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-db6zk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:52Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:52 crc kubenswrapper[4689]: I1210 12:16:52.693350 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-k7kbl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bf732b59-88ab-4673-9d0c-e479b9138b30\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6885c72d7b303bd0ad949b6b9f582f3c1a503d3ac4d1871ce164351cc5231233\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:16:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6jq2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b32c1925876e067185b141b3e107d582b5ae3b422965cded935fc67935907244\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\
"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b32c1925876e067185b141b3e107d582b5ae3b422965cded935fc67935907244\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T12:16:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T12:16:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6jq2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://56edfde6c25c67ed13c859224563601ca60ff86122c0ccfd0eafd009f56c2c6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://56edfde6c25c67ed13c859224563601ca60ff86122c0ccfd0eafd009f56c2c6a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T12:16:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T12:16:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6jq2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6972bdb1edc7d0216e984f43dd93319f5fbe91dbbf251fddcab234566c12694\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6972bdb1edc7d0216e984f43dd93319f5fbe91dbbf251fddcab234566c12694\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T12:16:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T12:16:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount
\\\",\\\"name\\\":\\\"kube-api-access-6jq2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://92ed0a4af61a7d20d96a8628e34b3b07cf93fb88bb8c67a2db58c140977d4b8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92ed0a4af61a7d20d96a8628e34b3b07cf93fb88bb8c67a2db58c140977d4b8f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T12:16:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T12:16:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6jq2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a00343d8419f3bd26ca5eebc37892fa34c893db822a472b2d16b79b9a886130\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3a00343d8419f3bd26ca5eebc37892fa34c893db822a472b2d16b79b9a886130\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T12:16:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T12:16:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6jq2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae1b0abfe7d118fce3906b0fb8609b89bb5a7d9a72e77c989248a37fbcaca589\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ae1b0abfe7d118fce3906b0fb8609b89bb5a7d9a72e77c989248a37fbcaca589\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T12:16:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T12:16:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\
"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6jq2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T12:16:01Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-k7kbl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:52Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:52 crc kubenswrapper[4689]: I1210 12:16:52.697278 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:16:52 crc kubenswrapper[4689]: I1210 12:16:52.697320 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:16:52 crc kubenswrapper[4689]: I1210 12:16:52.697336 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:16:52 crc kubenswrapper[4689]: I1210 12:16:52.697361 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:16:52 crc kubenswrapper[4689]: I1210 12:16:52.697379 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:52Z","lastTransitionTime":"2025-12-10T12:16:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 12:16:52 crc kubenswrapper[4689]: I1210 12:16:52.722098 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-d5s24" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b434fe8e-c4c2-4979-a9b6-8561523c2d9d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4dd21d5fefe31117dd9f13896400256a11840488886f8455dc177c0389e942a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:16:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgcnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://69f1e8960af0a51f61c14bd54dc3d3be7189911242699061b6a5e265bf23da21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:16:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgcnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\
":\\\"cri-o://6523d85b2b13f0025f328e66fc54d988a262d66575656c50d792bebc6a422301\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:16:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgcnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4faf578e1c2dcab798cd856befbd2dfc061f7e003ead2304d437041d5ee1dd07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:16:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgcnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eebe28cf16767ec011f1e571cdb5f6babf931f4444e92fccf2f1bb5aa5deeec9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:16:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgcnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://17d173f3388ef7471022163cbdf3e0a2454e42b1ed621b15512414504b2d9723\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:16:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgcnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f88a12855fe2301c439492cf5773c7088e33a7d6bcca5354ebcfd8d7a64bcf9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f88a12855fe2301c439492cf5773c7088e33a7d6bcca5354ebcfd8d7a64bcf9\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-10T12:16:26Z\\\",\\\"message\\\":\\\"all/v1/apis/informers/externalversions/factory.go:140\\\\nI1210 12:16:26.477038 6326 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1210 12:16:26.477070 6326 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1210 12:16:26.477096 6326 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1210 12:16:26.477127 6326 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1210 12:16:26.477136 6326 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1210 12:16:26.477156 6326 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1210 12:16:26.477157 6326 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1210 12:16:26.477180 6326 handler.go:208] Removed *v1.Node event handler 2\\\\nI1210 12:16:26.477212 6326 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1210 12:16:26.477246 6326 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1210 12:16:26.477297 6326 handler.go:208] Removed *v1.Node event handler 7\\\\nI1210 12:16:26.477299 6326 factory.go:656] Stopping watch factory\\\\nI1210 12:16:26.477310 6326 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1210 12:16:26.477326 6326 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1210 12:16:26.477355 6326 handler.go:208] Removed *v1.Namespace ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-10T12:16:25Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-d5s24_openshift-ovn-kubernetes(b434fe8e-c4c2-4979-a9b6-8561523c2d9d)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgcnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e28b779b8a8693084ab0997decc53b3a8eba61bef24ac90ff251d915f086568c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:16:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgcnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5b9a4f871b33ab8d7dbb8148e8a118858ae0d529510a8aebda0a3d9db97c137\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b5b9a4f871b33ab8d7dbb8148e8a118858ae0d529510a8aebda0a3d9db97c137\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T12:16:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T12:16:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgcnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T12:16:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-d5s24\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:52Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:52 crc kubenswrapper[4689]: I1210 12:16:52.737753 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06f8726511d5ea564b457eeb6de5b75275d78ee946595d8fbe37e575f3a1b4ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://879d2e122b3f677f755d79b93cd0abed60360a7b58bd685fbc229157c3358bbf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17
b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:52Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:52 crc kubenswrapper[4689]: I1210 12:16:52.752885 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:57Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:52Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:52 crc kubenswrapper[4689]: I1210 12:16:52.766452 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-6ffqh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7908b49b-e3d2-4d30-95e0-467f5542d445\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34ba576e7b07f1a070450f2a750ed7f9e1ed7b3023f13340ffadbdbbc2f564bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:16:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8t2fq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5081b88bdb2e39135bca6cf70283aeb3f9541d28ab7336c5fa9d17f28496f7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2
099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:16:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8t2fq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T12:16:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-6ffqh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:52Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:52 crc kubenswrapper[4689]: I1210 12:16:52.801412 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:16:52 crc kubenswrapper[4689]: I1210 12:16:52.801478 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:16:52 crc kubenswrapper[4689]: I1210 12:16:52.801497 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:16:52 crc kubenswrapper[4689]: I1210 12:16:52.801530 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:16:52 crc kubenswrapper[4689]: I1210 12:16:52.801549 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:52Z","lastTransitionTime":"2025-12-10T12:16:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 12:16:52 crc kubenswrapper[4689]: I1210 12:16:52.905497 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:16:52 crc kubenswrapper[4689]: I1210 12:16:52.905553 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:16:52 crc kubenswrapper[4689]: I1210 12:16:52.905570 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:16:52 crc kubenswrapper[4689]: I1210 12:16:52.905593 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:16:52 crc kubenswrapper[4689]: I1210 12:16:52.905610 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:52Z","lastTransitionTime":"2025-12-10T12:16:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 12:16:53 crc kubenswrapper[4689]: I1210 12:16:53.007956 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:16:53 crc kubenswrapper[4689]: I1210 12:16:53.008080 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:16:53 crc kubenswrapper[4689]: I1210 12:16:53.008109 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:16:53 crc kubenswrapper[4689]: I1210 12:16:53.008143 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:16:53 crc kubenswrapper[4689]: I1210 12:16:53.008165 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:53Z","lastTransitionTime":"2025-12-10T12:16:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 12:16:53 crc kubenswrapper[4689]: I1210 12:16:53.110268 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:16:53 crc kubenswrapper[4689]: I1210 12:16:53.110305 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:16:53 crc kubenswrapper[4689]: I1210 12:16:53.110317 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:16:53 crc kubenswrapper[4689]: I1210 12:16:53.110332 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:16:53 crc kubenswrapper[4689]: I1210 12:16:53.110341 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:53Z","lastTransitionTime":"2025-12-10T12:16:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 12:16:53 crc kubenswrapper[4689]: I1210 12:16:53.213666 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:16:53 crc kubenswrapper[4689]: I1210 12:16:53.213718 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:16:53 crc kubenswrapper[4689]: I1210 12:16:53.213734 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:16:53 crc kubenswrapper[4689]: I1210 12:16:53.213750 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:16:53 crc kubenswrapper[4689]: I1210 12:16:53.213761 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:53Z","lastTransitionTime":"2025-12-10T12:16:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 12:16:53 crc kubenswrapper[4689]: I1210 12:16:53.238059 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:16:53 crc kubenswrapper[4689]: I1210 12:16:53.238141 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:16:53 crc kubenswrapper[4689]: I1210 12:16:53.238166 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:16:53 crc kubenswrapper[4689]: I1210 12:16:53.238195 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:16:53 crc kubenswrapper[4689]: I1210 12:16:53.238216 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:53Z","lastTransitionTime":"2025-12-10T12:16:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 12:16:53 crc kubenswrapper[4689]: E1210 12:16:53.258368 4689 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T12:16:53Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T12:16:53Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:53Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T12:16:53Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T12:16:53Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:53Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"488f3058-98a0-4e39-b016-529d0c992401\\\",\\\"systemUUID\\\":\\\"41a6821f-a04f-4a5d-a8e5-790b0745d6fd\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:53Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:53 crc kubenswrapper[4689]: I1210 12:16:53.263171 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:16:53 crc kubenswrapper[4689]: I1210 12:16:53.263214 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 10 12:16:53 crc kubenswrapper[4689]: I1210 12:16:53.263225 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:16:53 crc kubenswrapper[4689]: I1210 12:16:53.263243 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:16:53 crc kubenswrapper[4689]: I1210 12:16:53.263255 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:53Z","lastTransitionTime":"2025-12-10T12:16:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 12:16:53 crc kubenswrapper[4689]: E1210 12:16:53.280559 4689 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T12:16:53Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T12:16:53Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:53Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T12:16:53Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T12:16:53Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:53Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"488f3058-98a0-4e39-b016-529d0c992401\\\",\\\"systemUUID\\\":\\\"41a6821f-a04f-4a5d-a8e5-790b0745d6fd\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:53Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:53 crc kubenswrapper[4689]: I1210 12:16:53.283982 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:16:53 crc kubenswrapper[4689]: I1210 12:16:53.284008 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
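The err string in these records carries the kubelet's node-status patch as a backslash-escaped JSON blob, which is hard to read in place. Below is a minimal Python sketch for pulling the patch out of a captured record and pretty-printing its conditions; the regular expression and the escape-peeling loop are assumptions about this capture's quoting, not a general journald parser.

# decode_patch.py - extract and pretty-print the node-status patch that
# is embedded, backslash-escaped, in a "failed to patch status" record.
# The regex and the escape-peeling loop are assumptions about how this
# capture quotes the payload, not a general journald parser.
import json
import re
import sys

def extract_patch(record: str) -> dict:
    # The payload sits between 'failed to patch status "' and '" for node',
    # with the quotes escaped one or more times (\" or \\\") in the capture.
    m = re.search(r'failed to patch status \\*"(.*?)\\*" for node', record)
    if not m:
        raise ValueError("no status patch found in record")
    raw = m.group(1)
    for _ in range(4):  # peel escape layers until the JSON parses
        try:
            return json.loads(raw)
        except json.JSONDecodeError:
            raw = raw.encode("utf-8").decode("unicode_escape")
    raise ValueError("payload did not decode to JSON")

if __name__ == "__main__":
    patch = extract_patch(sys.stdin.read())
    # The interesting part: which node conditions the kubelet is setting.
    print(json.dumps(patch["status"]["conditions"], indent=2))

Fed one of the "Error updating node status" records on stdin, this prints the four conditions the kubelet is trying to set, including the Ready=False entry quoted above.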
event="NodeHasNoDiskPressure" Dec 10 12:16:53 crc kubenswrapper[4689]: I1210 12:16:53.284016 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:16:53 crc kubenswrapper[4689]: I1210 12:16:53.284030 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:16:53 crc kubenswrapper[4689]: I1210 12:16:53.284039 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:53Z","lastTransitionTime":"2025-12-10T12:16:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 12:16:53 crc kubenswrapper[4689]: E1210 12:16:53.296743 4689 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T12:16:53Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T12:16:53Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:53Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T12:16:53Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T12:16:53Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:53Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"488f3058-98a0-4e39-b016-529d0c992401\\\",\\\"systemUUID\\\":\\\"41a6821f-a04f-4a5d-a8e5-790b0745d6fd\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:53Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:53 crc kubenswrapper[4689]: I1210 12:16:53.300410 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:16:53 crc kubenswrapper[4689]: I1210 12:16:53.300473 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
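Every attempt fails the same way: the node.network-node-identity.openshift.io webhook at https://127.0.0.1:9743 presents a certificate that expired on 2025-08-24, long before the clock time in these records. One way to confirm that from the node is to read the endpoint's certificate validity window directly. A minimal sketch, assuming the third-party cryptography package (version 42 or newer for the *_utc accessors); verification is deliberately disabled so the expired certificate can still be inspected.

# cert_check.py - print the validity window of the certificate served
# at the failing webhook endpoint. Assumes the third-party
# 'cryptography' package: pip install "cryptography>=42"
import socket
import ssl
from datetime import datetime, timezone

from cryptography import x509

def cert_validity(host: str, port: int) -> tuple[datetime, datetime]:
    # Verification is disabled on purpose: a strict handshake would
    # abort on the expired certificate before we could read it.
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
    ctx.check_hostname = False
    ctx.verify_mode = ssl.CERT_NONE
    with socket.create_connection((host, port), timeout=5) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            der = tls.getpeercert(binary_form=True)  # raw DER under CERT_NONE
    cert = x509.load_der_x509_certificate(der)
    return cert.not_valid_before_utc, cert.not_valid_after_utc

if __name__ == "__main__":
    host, port = "127.0.0.1", 9743  # endpoint from the webhook error above
    not_before, not_after = cert_validity(host, port)
    now = datetime.now(timezone.utc)
    print(f"notBefore={not_before:%Y-%m-%dT%H:%M:%SZ} notAfter={not_after:%Y-%m-%dT%H:%M:%SZ}")
    print("certificate has EXPIRED" if now > not_after else "certificate is valid")

Against this endpoint it should report notAfter=2025-08-24T17:21:41Z, matching the x509 error text in the records.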
event="NodeHasNoDiskPressure" Dec 10 12:16:53 crc kubenswrapper[4689]: I1210 12:16:53.300494 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:16:53 crc kubenswrapper[4689]: I1210 12:16:53.300520 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:16:53 crc kubenswrapper[4689]: I1210 12:16:53.300539 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:53Z","lastTransitionTime":"2025-12-10T12:16:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 12:16:53 crc kubenswrapper[4689]: E1210 12:16:53.320451 4689 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T12:16:53Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T12:16:53Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:53Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T12:16:53Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T12:16:53Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:53Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"488f3058-98a0-4e39-b016-529d0c992401\\\",\\\"systemUUID\\\":\\\"41a6821f-a04f-4a5d-a8e5-790b0745d6fd\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:53Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:53 crc kubenswrapper[4689]: I1210 12:16:53.324844 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:16:53 crc kubenswrapper[4689]: I1210 12:16:53.324884 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 10 12:16:53 crc kubenswrapper[4689]: I1210 12:16:53.324892 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:16:53 crc kubenswrapper[4689]: I1210 12:16:53.324907 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:16:53 crc kubenswrapper[4689]: I1210 12:16:53.324918 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:53Z","lastTransitionTime":"2025-12-10T12:16:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 12:16:53 crc kubenswrapper[4689]: E1210 12:16:53.338361 4689 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T12:16:53Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T12:16:53Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:53Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T12:16:53Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T12:16:53Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:53Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"488f3058-98a0-4e39-b016-529d0c992401\\\",\\\"systemUUID\\\":\\\"41a6821f-a04f-4a5d-a8e5-790b0745d6fd\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:53Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:53 crc kubenswrapper[4689]: E1210 12:16:53.338527 4689 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Dec 10 12:16:53 crc kubenswrapper[4689]: I1210 12:16:53.340867 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
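The 12:16:53.338527 "Unable to update node status" record closes the cycle: the kubelet attempts the status patch a fixed number of times per sync (five in the upstream kubelet sources, matching the failed attempts logged above) and then gives up until the next sync. A generic sketch of that give-up-after-N pattern; the names are illustrative, not kubelet's actual API.

# retry_sketch.py - the give-up-after-N-attempts pattern behind the
# paired "will retry" / "exceeds retry count" messages above.
# Illustrative names only; this is not kubelet's actual API.
from typing import Callable

NODE_STATUS_UPDATE_RETRY = 5  # upstream kubelet retries 5 times per sync

def update_node_status(try_patch: Callable[[], None]) -> None:
    for _ in range(NODE_STATUS_UPDATE_RETRY):
        try:
            try_patch()
            return  # patched successfully; nothing more to do
        except OSError as err:  # e.g. the TLS failure seen in this log
            print(f'"Error updating node status, will retry" err="{err}"')
    # All attempts failed; surface the same terminal error as the log.
    raise RuntimeError("update node status exceeds retry count")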
event="NodeHasSufficientMemory" Dec 10 12:16:53 crc kubenswrapper[4689]: I1210 12:16:53.340953 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:16:53 crc kubenswrapper[4689]: I1210 12:16:53.341014 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:16:53 crc kubenswrapper[4689]: I1210 12:16:53.341051 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:16:53 crc kubenswrapper[4689]: I1210 12:16:53.341074 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:53Z","lastTransitionTime":"2025-12-10T12:16:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 12:16:53 crc kubenswrapper[4689]: I1210 12:16:53.443423 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:16:53 crc kubenswrapper[4689]: I1210 12:16:53.443474 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:16:53 crc kubenswrapper[4689]: I1210 12:16:53.443490 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:16:53 crc kubenswrapper[4689]: I1210 12:16:53.443513 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:16:53 crc kubenswrapper[4689]: I1210 12:16:53.443530 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:53Z","lastTransitionTime":"2025-12-10T12:16:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 12:16:53 crc kubenswrapper[4689]: I1210 12:16:53.497918 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 10 12:16:53 crc kubenswrapper[4689]: I1210 12:16:53.497926 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2h8hs" Dec 10 12:16:53 crc kubenswrapper[4689]: I1210 12:16:53.498073 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 10 12:16:53 crc kubenswrapper[4689]: E1210 12:16:53.498240 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 10 12:16:53 crc kubenswrapper[4689]: E1210 12:16:53.498351 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2h8hs" podUID="3a7c472c-c2dd-4a9d-aefb-48ac5cc196f8" Dec 10 12:16:53 crc kubenswrapper[4689]: E1210 12:16:53.498912 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 10 12:16:53 crc kubenswrapper[4689]: I1210 12:16:53.499482 4689 scope.go:117] "RemoveContainer" containerID="4f88a12855fe2301c439492cf5773c7088e33a7d6bcca5354ebcfd8d7a64bcf9" Dec 10 12:16:53 crc kubenswrapper[4689]: I1210 12:16:53.545482 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:16:53 crc kubenswrapper[4689]: I1210 12:16:53.545550 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:16:53 crc kubenswrapper[4689]: I1210 12:16:53.545569 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:16:53 crc kubenswrapper[4689]: I1210 12:16:53.545593 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:16:53 crc kubenswrapper[4689]: I1210 12:16:53.545610 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:53Z","lastTransitionTime":"2025-12-10T12:16:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 12:16:53 crc kubenswrapper[4689]: I1210 12:16:53.648108 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:16:53 crc kubenswrapper[4689]: I1210 12:16:53.648140 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:16:53 crc kubenswrapper[4689]: I1210 12:16:53.648151 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:16:53 crc kubenswrapper[4689]: I1210 12:16:53.648167 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:16:53 crc kubenswrapper[4689]: I1210 12:16:53.648178 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:53Z","lastTransitionTime":"2025-12-10T12:16:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 12:16:53 crc kubenswrapper[4689]: I1210 12:16:53.750813 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:16:53 crc kubenswrapper[4689]: I1210 12:16:53.750888 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:16:53 crc kubenswrapper[4689]: I1210 12:16:53.750912 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:16:53 crc kubenswrapper[4689]: I1210 12:16:53.750941 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:16:53 crc kubenswrapper[4689]: I1210 12:16:53.751003 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:53Z","lastTransitionTime":"2025-12-10T12:16:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 12:16:53 crc kubenswrapper[4689]: I1210 12:16:53.854509 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:16:53 crc kubenswrapper[4689]: I1210 12:16:53.854572 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:16:53 crc kubenswrapper[4689]: I1210 12:16:53.854593 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:16:53 crc kubenswrapper[4689]: I1210 12:16:53.854619 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:16:53 crc kubenswrapper[4689]: I1210 12:16:53.854637 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:53Z","lastTransitionTime":"2025-12-10T12:16:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 12:16:53 crc kubenswrapper[4689]: I1210 12:16:53.957956 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:16:53 crc kubenswrapper[4689]: I1210 12:16:53.958097 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:16:53 crc kubenswrapper[4689]: I1210 12:16:53.958595 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:16:53 crc kubenswrapper[4689]: I1210 12:16:53.958667 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:16:53 crc kubenswrapper[4689]: I1210 12:16:53.958890 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:53Z","lastTransitionTime":"2025-12-10T12:16:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 12:16:54 crc kubenswrapper[4689]: I1210 12:16:54.062586 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:16:54 crc kubenswrapper[4689]: I1210 12:16:54.062668 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:16:54 crc kubenswrapper[4689]: I1210 12:16:54.062693 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:16:54 crc kubenswrapper[4689]: I1210 12:16:54.062727 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:16:54 crc kubenswrapper[4689]: I1210 12:16:54.062752 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:54Z","lastTransitionTime":"2025-12-10T12:16:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 12:16:54 crc kubenswrapper[4689]: I1210 12:16:54.166295 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:16:54 crc kubenswrapper[4689]: I1210 12:16:54.166347 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:16:54 crc kubenswrapper[4689]: I1210 12:16:54.166363 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:16:54 crc kubenswrapper[4689]: I1210 12:16:54.166385 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:16:54 crc kubenswrapper[4689]: I1210 12:16:54.166402 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:54Z","lastTransitionTime":"2025-12-10T12:16:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 12:16:54 crc kubenswrapper[4689]: I1210 12:16:54.269086 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:16:54 crc kubenswrapper[4689]: I1210 12:16:54.269141 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:16:54 crc kubenswrapper[4689]: I1210 12:16:54.269162 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:16:54 crc kubenswrapper[4689]: I1210 12:16:54.269189 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:16:54 crc kubenswrapper[4689]: I1210 12:16:54.269211 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:54Z","lastTransitionTime":"2025-12-10T12:16:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 12:16:54 crc kubenswrapper[4689]: I1210 12:16:54.371962 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:16:54 crc kubenswrapper[4689]: I1210 12:16:54.372347 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:16:54 crc kubenswrapper[4689]: I1210 12:16:54.372480 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:16:54 crc kubenswrapper[4689]: I1210 12:16:54.372608 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:16:54 crc kubenswrapper[4689]: I1210 12:16:54.372730 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:54Z","lastTransitionTime":"2025-12-10T12:16:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 12:16:54 crc kubenswrapper[4689]: I1210 12:16:54.476082 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:16:54 crc kubenswrapper[4689]: I1210 12:16:54.476152 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:16:54 crc kubenswrapper[4689]: I1210 12:16:54.476173 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:16:54 crc kubenswrapper[4689]: I1210 12:16:54.476196 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:16:54 crc kubenswrapper[4689]: I1210 12:16:54.476213 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:54Z","lastTransitionTime":"2025-12-10T12:16:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 12:16:54 crc kubenswrapper[4689]: I1210 12:16:54.498279 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 10 12:16:54 crc kubenswrapper[4689]: E1210 12:16:54.498406 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 10 12:16:54 crc kubenswrapper[4689]: I1210 12:16:54.579096 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:16:54 crc kubenswrapper[4689]: I1210 12:16:54.579144 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:16:54 crc kubenswrapper[4689]: I1210 12:16:54.579160 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:16:54 crc kubenswrapper[4689]: I1210 12:16:54.579182 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:16:54 crc kubenswrapper[4689]: I1210 12:16:54.579199 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:54Z","lastTransitionTime":"2025-12-10T12:16:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 12:16:54 crc kubenswrapper[4689]: I1210 12:16:54.682151 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:16:54 crc kubenswrapper[4689]: I1210 12:16:54.682487 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:16:54 crc kubenswrapper[4689]: I1210 12:16:54.682614 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:16:54 crc kubenswrapper[4689]: I1210 12:16:54.682730 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:16:54 crc kubenswrapper[4689]: I1210 12:16:54.682810 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:54Z","lastTransitionTime":"2025-12-10T12:16:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 12:16:54 crc kubenswrapper[4689]: I1210 12:16:54.785642 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:16:54 crc kubenswrapper[4689]: I1210 12:16:54.785754 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:16:54 crc kubenswrapper[4689]: I1210 12:16:54.785775 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:16:54 crc kubenswrapper[4689]: I1210 12:16:54.785845 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:16:54 crc kubenswrapper[4689]: I1210 12:16:54.785865 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:54Z","lastTransitionTime":"2025-12-10T12:16:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 12:16:54 crc kubenswrapper[4689]: I1210 12:16:54.888616 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:16:54 crc kubenswrapper[4689]: I1210 12:16:54.888663 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:16:54 crc kubenswrapper[4689]: I1210 12:16:54.888672 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:16:54 crc kubenswrapper[4689]: I1210 12:16:54.888686 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:16:54 crc kubenswrapper[4689]: I1210 12:16:54.888695 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:54Z","lastTransitionTime":"2025-12-10T12:16:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 12:16:54 crc kubenswrapper[4689]: I1210 12:16:54.990827 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:16:54 crc kubenswrapper[4689]: I1210 12:16:54.990863 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:16:54 crc kubenswrapper[4689]: I1210 12:16:54.990872 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:16:54 crc kubenswrapper[4689]: I1210 12:16:54.990886 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:16:54 crc kubenswrapper[4689]: I1210 12:16:54.990895 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:54Z","lastTransitionTime":"2025-12-10T12:16:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 12:16:55 crc kubenswrapper[4689]: I1210 12:16:55.093172 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:16:55 crc kubenswrapper[4689]: I1210 12:16:55.093201 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:16:55 crc kubenswrapper[4689]: I1210 12:16:55.093208 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:16:55 crc kubenswrapper[4689]: I1210 12:16:55.093221 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:16:55 crc kubenswrapper[4689]: I1210 12:16:55.093230 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:55Z","lastTransitionTime":"2025-12-10T12:16:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 12:16:55 crc kubenswrapper[4689]: I1210 12:16:55.195291 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:16:55 crc kubenswrapper[4689]: I1210 12:16:55.195324 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:16:55 crc kubenswrapper[4689]: I1210 12:16:55.195334 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:16:55 crc kubenswrapper[4689]: I1210 12:16:55.195362 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:16:55 crc kubenswrapper[4689]: I1210 12:16:55.195375 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:55Z","lastTransitionTime":"2025-12-10T12:16:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 12:16:55 crc kubenswrapper[4689]: I1210 12:16:55.297860 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:16:55 crc kubenswrapper[4689]: I1210 12:16:55.297895 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:16:55 crc kubenswrapper[4689]: I1210 12:16:55.297924 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:16:55 crc kubenswrapper[4689]: I1210 12:16:55.297939 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:16:55 crc kubenswrapper[4689]: I1210 12:16:55.297953 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:55Z","lastTransitionTime":"2025-12-10T12:16:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 12:16:55 crc kubenswrapper[4689]: I1210 12:16:55.327255 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-d5s24_b434fe8e-c4c2-4979-a9b6-8561523c2d9d/ovnkube-controller/2.log" Dec 10 12:16:55 crc kubenswrapper[4689]: I1210 12:16:55.329704 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-d5s24" event={"ID":"b434fe8e-c4c2-4979-a9b6-8561523c2d9d","Type":"ContainerStarted","Data":"bb00374f8054c3041d91465c99271185be6fdb258a541ba83aed5bb1059178e1"} Dec 10 12:16:55 crc kubenswrapper[4689]: I1210 12:16:55.330169 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-d5s24" Dec 10 12:16:55 crc kubenswrapper[4689]: I1210 12:16:55.347122 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-k7kbl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bf732b59-88ab-4673-9d0c-e479b9138b30\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6885c72d7b303bd0ad949b6b9f582f3c1a503d3ac4d1871ce164351cc5231233\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:16:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6jq2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b32c1925876e067185b141b3e107d582b5ae3b422965cded935fc67935907244\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b32c1925876e067185b141b3e107d582b5ae3b422965cded935fc67935907244\\\",\\\"exitCode\\
\":0,\\\"finishedAt\\\":\\\"2025-12-10T12:16:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T12:16:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6jq2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://56edfde6c25c67ed13c859224563601ca60ff86122c0ccfd0eafd009f56c2c6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://56edfde6c25c67ed13c859224563601ca60ff86122c0ccfd0eafd009f56c2c6a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T12:16:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T12:16:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6jq2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6972bdb1edc7d0216e984f43dd93319f5fbe91dbbf251fddcab234566c12694\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6972bdb1edc7d0216e984f43dd93319f5fbe91dbbf251fddcab234566c12694\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T12:16:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T12:16:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6jq2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://92ed0a4af61a7d20d96a8628e34b3b07cf93fb88bb8c67a2db58c140977d4b8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-
dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92ed0a4af61a7d20d96a8628e34b3b07cf93fb88bb8c67a2db58c140977d4b8f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T12:16:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T12:16:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6jq2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a00343d8419f3bd26ca5eebc37892fa34c893db822a472b2d16b79b9a886130\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3a00343d8419f3bd26ca5eebc37892fa34c893db822a472b2d16b79b9a886130\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T12:16:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T12:16:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6jq2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae1b0abfe7d118fce3906b0fb8609b89bb5a7d9a72e77c989248a37fbcaca589\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ae1b0abfe7d118fce3906b0fb8609b89bb5a7d9a72e77c989248a37fbcaca589\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T12:16:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T12:16:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6jq2j\\\",\\\"readOnly\\\":true,\\\"r
ecursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T12:16:01Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-k7kbl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:55Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:55 crc kubenswrapper[4689]: I1210 12:16:55.373199 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-d5s24" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b434fe8e-c4c2-4979-a9b6-8561523c2d9d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4dd21d5fefe31117dd9f13896400256a11840488886f8455dc177c0389e942a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:16:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgcnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://69f1e8960af0a51f61c14bd54dc3d3be7189911242699061b6a5e265bf23da21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:16:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgcnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6523d85b2b13f0025f328e66fc54d988a262d66575656c50d792bebc6a422301\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:16:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgcnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4faf578e1c2dcab798cd856befbd2dfc061f7e003ead2304d437041d5ee1dd07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:16:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgcnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eebe28cf16767ec011f1e571cdb5f6babf931f4444e92fccf2f1bb5aa5deeec9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:16:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgcnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://17d173f3388ef7471022163cbdf3e0a2454e42b1ed621b15512414504b2d9723\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:16:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgcnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb00374f8054c3041d91465c99271185be6fdb25
8a541ba83aed5bb1059178e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f88a12855fe2301c439492cf5773c7088e33a7d6bcca5354ebcfd8d7a64bcf9\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-10T12:16:26Z\\\",\\\"message\\\":\\\"all/v1/apis/informers/externalversions/factory.go:140\\\\nI1210 12:16:26.477038 6326 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1210 12:16:26.477070 6326 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1210 12:16:26.477096 6326 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1210 12:16:26.477127 6326 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1210 12:16:26.477136 6326 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1210 12:16:26.477156 6326 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1210 12:16:26.477157 6326 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1210 12:16:26.477180 6326 handler.go:208] Removed *v1.Node event handler 2\\\\nI1210 12:16:26.477212 6326 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1210 12:16:26.477246 6326 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1210 12:16:26.477297 6326 handler.go:208] Removed *v1.Node event handler 7\\\\nI1210 12:16:26.477299 6326 factory.go:656] Stopping watch factory\\\\nI1210 12:16:26.477310 6326 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1210 12:16:26.477326 6326 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1210 12:16:26.477355 6326 handler.go:208] Removed *v1.Namespace 
ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-10T12:16:25Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:16:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgcnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e28b779b8a8693084ab0997decc53b3a8eba61bef24ac90ff251d915f086568c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:16:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgcnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"con
tainerID\\\":\\\"cri-o://b5b9a4f871b33ab8d7dbb8148e8a118858ae0d529510a8aebda0a3d9db97c137\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b5b9a4f871b33ab8d7dbb8148e8a118858ae0d529510a8aebda0a3d9db97c137\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T12:16:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T12:16:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgcnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T12:16:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-d5s24\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:55Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:55 crc kubenswrapper[4689]: I1210 12:16:55.384591 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"948e2421-6bdf-45d9-b484-3d7cfcbeff5f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73fb5dc9c58a2a2027866853aecf404bcc2babdb9e5bbd3cf0efcb16f38ba351\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e167a00c3a4e40154ca58094cba41a896104fd81044158589ed4d7a06dba9e7d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://394f62bb6c4fa6d9b63f8404087ff96594e2e38240c6c7886ad71fbb7863681f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d3818f3ff6d23a4b5f19b82c3ce7c6a9a0275f981966b78d0f1d5977acfc5da\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cc1b41d65298f8986fa477bbe4462be91f61b46be4420fac49fe9b1cff91d4d3\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-10T12:15:57Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1210 12:15:46.775770 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1210 12:15:46.777515 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-772356655/tls.crt::/tmp/serving-cert-772356655/tls.key\\\\\\\"\\\\nI1210 12:15:55.300683 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1210 12:15:55.311834 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1210 12:15:55.311870 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1210 12:15:55.311907 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1210 12:15:55.311937 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1210 12:15:55.334823 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1210 12:15:55.334877 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1210 12:15:55.334889 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1210 12:15:55.334913 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1210 12:15:55.334923 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1210 12:15:55.334932 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1210 12:15:55.334948 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1210 12:15:55.335346 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1210 12:15:55.337241 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-10T12:15:36Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://63c6b92fa349dafbaaa4e2700676f8c1c53627fc703dba25853c7457bf7eb439\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:34Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9e2628add435f4ba7d9b92d1f4e9cf619c37e8dc954d6fce13ef72581ae775e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9e2628add435f4ba7d9b92d1f4e9cf619c37e8dc954d6fce13ef72581ae775e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T12:15:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T12:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T12:15:32Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:55Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:55 crc kubenswrapper[4689]: I1210 12:16:55.395575 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"18bf8be2-3022-4c8b-a062-e12ddac113a8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://44524b486bd4c6b69eb1219605d0545eaa129605484dfbca87ca741ce71c2991\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://604acbb51bb890eb420955051c89aa5a8f2005ce63f1fe39c09c35dc3c8b4470\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d4addfb661bd4ba43817e6e5db29f30589839e5afe46a8f271e43c98920efdc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a68967b286ccc9c486e05b03201574bfa811ee68b0cbcbb2b0b1a379f5c1bab\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T12:15:32Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:55Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:55 crc kubenswrapper[4689]: I1210 12:16:55.400220 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:16:55 crc kubenswrapper[4689]: I1210 12:16:55.400257 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:16:55 crc kubenswrapper[4689]: I1210 12:16:55.400270 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:16:55 crc kubenswrapper[4689]: I1210 12:16:55.400287 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:16:55 crc kubenswrapper[4689]: I1210 12:16:55.400301 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:55Z","lastTransitionTime":"2025-12-10T12:16:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 12:16:55 crc kubenswrapper[4689]: I1210 12:16:55.406612 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:57Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:55Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:55 crc kubenswrapper[4689]: I1210 12:16:55.418620 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-db6zk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a41ebdcd-910f-4669-992d-296e1a92162b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://21eff13bf08e5631078b18a6e63eea07f7ae0881f4d340b92273d47e443fc71c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:16:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68btn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ffc5e2e1cb81aa530273fc5c44b4eb25b760dfffa203e312280e7a1c0150505\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:16:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68btn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T12:16:01Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-db6zk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:55Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:55 crc kubenswrapper[4689]: I1210 12:16:55.428872 4689 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06f8726511d5ea564b457eeb6de5b75275d78ee946595d8fbe37e575f3a1b4ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://879d2e122b3f677f755d79b93cd0abed60360a7b58bd685fbc229157c3358bbf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:55Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:55 crc kubenswrapper[4689]: I1210 12:16:55.444515 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:57Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:55Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:55 crc kubenswrapper[4689]: I1210 12:16:55.457355 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-6ffqh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7908b49b-e3d2-4d30-95e0-467f5542d445\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34ba576e7b07f1a070450f2a750ed7f9e1ed7b3023f13340ffadbdbbc2f564bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:16:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8t2fq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5081b88bdb2e39135bca6cf70283aeb3f9541d28ab7336c5fa9d17f28496f7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:16:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8t2fq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T12:16:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-6ffqh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:55Z is after 2025-08-24T17:21:41Z" Dec 10 
12:16:55 crc kubenswrapper[4689]: I1210 12:16:55.471050 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81ec49dd709fb88fdd4a831be61a82471be37ad10b0dca1cb0e34468ef5e0e3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:16:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:55Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:55 crc kubenswrapper[4689]: I1210 12:16:55.481755 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-77s7q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9f25cd73-d88f-4d52-93b6-483589dc4ac4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8ffa3e3c20704e59130d99f82bf7c73982dadfcd06b8b666265324aeb9115ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:16:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kwnnt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T12:16:01Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-77s7q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:55Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:55 crc kubenswrapper[4689]: I1210 12:16:55.491432 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-dk9hq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4aff4b91-a4af-46bd-93ba-a7c1356fc498\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e400561fd4aa4adc6f0733a71b55aff1a6163a71bf34147aeeb41f95001d8222\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:16:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c2rw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T12:16:04Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-dk9hq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:55Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:55 crc kubenswrapper[4689]: I1210 12:16:55.497595 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 10 12:16:55 crc kubenswrapper[4689]: I1210 12:16:55.497645 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 10 12:16:55 crc kubenswrapper[4689]: I1210 12:16:55.497645 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2h8hs" Dec 10 12:16:55 crc kubenswrapper[4689]: E1210 12:16:55.497735 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 10 12:16:55 crc kubenswrapper[4689]: E1210 12:16:55.497885 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2h8hs" podUID="3a7c472c-c2dd-4a9d-aefb-48ac5cc196f8" Dec 10 12:16:55 crc kubenswrapper[4689]: E1210 12:16:55.497953 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 10 12:16:55 crc kubenswrapper[4689]: I1210 12:16:55.502136 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:16:55 crc kubenswrapper[4689]: I1210 12:16:55.502165 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:16:55 crc kubenswrapper[4689]: I1210 12:16:55.502174 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:16:55 crc kubenswrapper[4689]: I1210 12:16:55.502189 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:16:55 crc kubenswrapper[4689]: I1210 12:16:55.502199 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:55Z","lastTransitionTime":"2025-12-10T12:16:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 12:16:55 crc kubenswrapper[4689]: I1210 12:16:55.502391 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-2h8hs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a7c472c-c2dd-4a9d-aefb-48ac5cc196f8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kcnt5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kcnt5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T12:16:15Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-2h8hs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:55Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:55 crc kubenswrapper[4689]: I1210 12:16:55.514236 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f310ccc9-5093-44fe-8271-32ffed1e4596\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be1e74c974ed4454098265113e97e5d5d27b2ddb6ff5d89cb891c1686290f8f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://14c34a55854008d99289ec6ff7df4ab7b8c60658569094a88e6740ceb28714d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://66af8611123a29e82941af3a1facb45f06c358ffb277bd037af034ae97364d56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b2e1ebfdbacde06d36ebe61a04c2977a980def50e80715de5f8a9a292e661915\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b2e1ebfdbacde06d36ebe61a04c2977a980def50e80715de5f8a9a292e661915\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T12:15:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T12:15:33Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T12:15:32Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:55Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:55 crc kubenswrapper[4689]: I1210 12:16:55.526724 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef7248d1db79267842514a13623e9e08f8e412e4af33a70b4fceafe7a9e43392\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:55Z is after 
2025-08-24T17:21:41Z" Dec 10 12:16:55 crc kubenswrapper[4689]: I1210 12:16:55.537108 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:55Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:55 crc kubenswrapper[4689]: I1210 12:16:55.548775 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-r6wmt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3713b4f8-2ee3-4078-859a-dca17076f9a6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60b9e7f27ed577b35c0ecc9f2f205e63e25d0fc8c22e4e49e13cd395ff14a83e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41b7cefca05f1180d687337ba2a408d0db12aad394a7e9619e004758e5d79ca0\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-10T12:16:47Z\\\",\\\"message\\\":\\\"2025-12-10T12:16:02+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_3a34b2bc-7e9b-4e1e-875d-36e662eb6643\\\\n2025-12-10T12:16:02+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_3a34b2bc-7e9b-4e1e-875d-36e662eb6643 to /host/opt/cni/bin/\\\\n2025-12-10T12:16:02Z [verbose] multus-daemon started\\\\n2025-12-10T12:16:02Z [verbose] Readiness Indicator file check\\\\n2025-12-10T12:16:47Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-10T12:16:02Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:16:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lmrn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T12:16:01Z\\\"}}\" for pod \"openshift-multus\"/\"multus-r6wmt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:55Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:55 crc kubenswrapper[4689]: I1210 12:16:55.605227 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:16:55 crc kubenswrapper[4689]: I1210 12:16:55.605265 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:16:55 crc kubenswrapper[4689]: I1210 12:16:55.605273 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:16:55 crc kubenswrapper[4689]: I1210 12:16:55.605285 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:16:55 crc kubenswrapper[4689]: I1210 12:16:55.605295 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:55Z","lastTransitionTime":"2025-12-10T12:16:55Z","reason":"KubeletNotReady","message":"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 12:16:55 crc kubenswrapper[4689]: I1210 12:16:55.707925 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:16:55 crc kubenswrapper[4689]: I1210 12:16:55.708021 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:16:55 crc kubenswrapper[4689]: I1210 12:16:55.708046 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:16:55 crc kubenswrapper[4689]: I1210 12:16:55.708072 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:16:55 crc kubenswrapper[4689]: I1210 12:16:55.708088 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:55Z","lastTransitionTime":"2025-12-10T12:16:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 12:16:55 crc kubenswrapper[4689]: I1210 12:16:55.816435 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:16:55 crc kubenswrapper[4689]: I1210 12:16:55.816634 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:16:55 crc kubenswrapper[4689]: I1210 12:16:55.816667 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:16:55 crc kubenswrapper[4689]: I1210 12:16:55.816694 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:16:55 crc kubenswrapper[4689]: I1210 12:16:55.816715 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:55Z","lastTransitionTime":"2025-12-10T12:16:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 12:16:55 crc kubenswrapper[4689]: I1210 12:16:55.920191 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:16:55 crc kubenswrapper[4689]: I1210 12:16:55.920286 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:16:55 crc kubenswrapper[4689]: I1210 12:16:55.920312 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:16:55 crc kubenswrapper[4689]: I1210 12:16:55.920340 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:16:55 crc kubenswrapper[4689]: I1210 12:16:55.920362 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:55Z","lastTransitionTime":"2025-12-10T12:16:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 12:16:56 crc kubenswrapper[4689]: I1210 12:16:56.022933 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:16:56 crc kubenswrapper[4689]: I1210 12:16:56.023001 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:16:56 crc kubenswrapper[4689]: I1210 12:16:56.023015 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:16:56 crc kubenswrapper[4689]: I1210 12:16:56.023034 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:16:56 crc kubenswrapper[4689]: I1210 12:16:56.023048 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:56Z","lastTransitionTime":"2025-12-10T12:16:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 12:16:56 crc kubenswrapper[4689]: I1210 12:16:56.126243 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:16:56 crc kubenswrapper[4689]: I1210 12:16:56.126315 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:16:56 crc kubenswrapper[4689]: I1210 12:16:56.126338 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:16:56 crc kubenswrapper[4689]: I1210 12:16:56.126368 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:16:56 crc kubenswrapper[4689]: I1210 12:16:56.126390 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:56Z","lastTransitionTime":"2025-12-10T12:16:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Dec 10 12:16:56 crc kubenswrapper[4689]: I1210 12:16:56.229634 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 10 12:16:56 crc kubenswrapper[4689]: I1210 12:16:56.229716 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 10 12:16:56 crc kubenswrapper[4689]: I1210 12:16:56.229734 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 10 12:16:56 crc kubenswrapper[4689]: I1210 12:16:56.229757 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 10 12:16:56 crc kubenswrapper[4689]: I1210 12:16:56.229775 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:56Z","lastTransitionTime":"2025-12-10T12:16:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 10 12:16:56 crc kubenswrapper[4689]: I1210 12:16:56.333452 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 10 12:16:56 crc kubenswrapper[4689]: I1210 12:16:56.333510 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 10 12:16:56 crc kubenswrapper[4689]: I1210 12:16:56.333530 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 10 12:16:56 crc kubenswrapper[4689]: I1210 12:16:56.333558 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 10 12:16:56 crc kubenswrapper[4689]: I1210 12:16:56.333581 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:56Z","lastTransitionTime":"2025-12-10T12:16:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
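[Annotator's note: the CrashLoopBackOff entry just below reports "back-off 40s restarting failed container" for ovnkube-controller. Restart delays like this come from an exponential backoff; the sketch assumes the commonly cited kubelet defaults of a 10s initial delay doubling up to a 5m cap, which are assumptions here and not read from this cluster's configuration.]

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        // Assumed defaults: 10s initial delay, doubled per failed restart,
        // capped at 5m. The logged "back-off 40s" matches the third restart.
        delay := 10 * time.Second
        maxDelay := 5 * time.Minute
        for restart := 1; restart <= 6; restart++ {
            fmt.Printf("restart #%d: back-off %s\n", restart, delay)
            delay *= 2
            if delay > maxDelay {
                delay = maxDelay
            }
        }
    }

[Under these assumed parameters the delays run 10s, 20s, 40s, 1m20s, 2m40s, 5m, which is consistent with ovnkube-controller having already failed a few times before this point in the log.]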
Dec 10 12:16:56 crc kubenswrapper[4689]: I1210 12:16:56.336962 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-d5s24_b434fe8e-c4c2-4979-a9b6-8561523c2d9d/ovnkube-controller/3.log"
Dec 10 12:16:56 crc kubenswrapper[4689]: I1210 12:16:56.337909 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-d5s24_b434fe8e-c4c2-4979-a9b6-8561523c2d9d/ovnkube-controller/2.log"
Dec 10 12:16:56 crc kubenswrapper[4689]: I1210 12:16:56.342828 4689 generic.go:334] "Generic (PLEG): container finished" podID="b434fe8e-c4c2-4979-a9b6-8561523c2d9d" containerID="bb00374f8054c3041d91465c99271185be6fdb258a541ba83aed5bb1059178e1" exitCode=1
Dec 10 12:16:56 crc kubenswrapper[4689]: I1210 12:16:56.342891 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-d5s24" event={"ID":"b434fe8e-c4c2-4979-a9b6-8561523c2d9d","Type":"ContainerDied","Data":"bb00374f8054c3041d91465c99271185be6fdb258a541ba83aed5bb1059178e1"}
Dec 10 12:16:56 crc kubenswrapper[4689]: I1210 12:16:56.342956 4689 scope.go:117] "RemoveContainer" containerID="4f88a12855fe2301c439492cf5773c7088e33a7d6bcca5354ebcfd8d7a64bcf9"
Dec 10 12:16:56 crc kubenswrapper[4689]: I1210 12:16:56.344087 4689 scope.go:117] "RemoveContainer" containerID="bb00374f8054c3041d91465c99271185be6fdb258a541ba83aed5bb1059178e1"
Dec 10 12:16:56 crc kubenswrapper[4689]: E1210 12:16:56.344338 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-d5s24_openshift-ovn-kubernetes(b434fe8e-c4c2-4979-a9b6-8561523c2d9d)\"" pod="openshift-ovn-kubernetes/ovnkube-node-d5s24" podUID="b434fe8e-c4c2-4979-a9b6-8561523c2d9d"
Dec 10 12:16:56 crc kubenswrapper[4689]: I1210 12:16:56.367487 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06f8726511d5ea564b457eeb6de5b75275d78ee946595d8fbe37e575f3a1b4ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://879d2e122b3f677f755d79b93cd0abed60360a7b58bd685fbc229157c3358bbf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:56Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:56 crc kubenswrapper[4689]: I1210 12:16:56.386845 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:57Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:56Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:56 crc kubenswrapper[4689]: I1210 12:16:56.405285 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-6ffqh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7908b49b-e3d2-4d30-95e0-467f5542d445\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34ba576e7b07f1a070450f2a750ed7f9e1ed7b3023f13340ffadbdbbc2f564bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:16:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8t2fq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5081b88bdb2e39135bca6cf70283aeb3f9541d28ab7336c5fa9d17f28496f7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:16:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8t2fq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T12:16:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-6ffqh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:56Z is after 2025-08-24T17:21:41Z" Dec 10 
12:16:56 crc kubenswrapper[4689]: I1210 12:16:56.424615 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81ec49dd709fb88fdd4a831be61a82471be37ad10b0dca1cb0e34468ef5e0e3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:16:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:56Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:56 crc kubenswrapper[4689]: I1210 12:16:56.436781 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:16:56 crc kubenswrapper[4689]: I1210 12:16:56.436872 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:16:56 crc kubenswrapper[4689]: I1210 12:16:56.436897 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:16:56 crc kubenswrapper[4689]: I1210 12:16:56.437402 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:16:56 crc kubenswrapper[4689]: I1210 12:16:56.437680 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:56Z","lastTransitionTime":"2025-12-10T12:16:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 12:16:56 crc kubenswrapper[4689]: I1210 12:16:56.441601 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-77s7q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9f25cd73-d88f-4d52-93b6-483589dc4ac4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8ffa3e3c20704e59130d99f82bf7c73982dadfcd06b8b666265324aeb9115ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:16:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kwnnt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T12:16:01Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-77s7q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:56Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:56 crc kubenswrapper[4689]: I1210 12:16:56.456337 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f310ccc9-5093-44fe-8271-32ffed1e4596\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be1e74c974ed4454098265113e97e5d5d27b2ddb6ff5d89cb891c1686290f8f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://14c34a55854008d99289ec6ff7df4ab7b8c60658569094a88e6740ceb28714d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://66af8611123a29e82941af3a1facb45f06c358ffb277bd037af034ae97364d56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b2e1ebfdbacde06d36ebe61a04c2977a980def50e80715de5f8a9a292e661915\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b2e1ebfdbacde06d36ebe61a04c2977a980def50e80715de5f8a9a292e661915\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T12:15:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T12:15:33Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T12:15:32Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:56Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:56 crc kubenswrapper[4689]: I1210 12:16:56.472379 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef7248d1db79267842514a13623e9e08f8e412e4af33a70b4fceafe7a9e43392\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:56Z is after 
2025-08-24T17:21:41Z" Dec 10 12:16:56 crc kubenswrapper[4689]: I1210 12:16:56.488708 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:56Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:56 crc kubenswrapper[4689]: I1210 12:16:56.497356 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 10 12:16:56 crc kubenswrapper[4689]: E1210 12:16:56.497543 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 10 12:16:56 crc kubenswrapper[4689]: I1210 12:16:56.502013 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-r6wmt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3713b4f8-2ee3-4078-859a-dca17076f9a6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60b9e7f27ed577b35c0ecc9f2f205e63e25d0fc8c22e4e49e13cd395ff14a83e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41b7cefca05f1180d687337ba2a408d0db12aad394a7e9619e004758e5d79ca0\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-10T12:16:47Z\\\",\\\"message\\\":\\\"2025-12-10T12:16:02+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_3a34b2bc-7e9b-4e1e-875d-36e662eb6643\\\\n2025-12-10T12:16:02+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_3a34b2bc-7e9b-4e1e-875d-36e662eb6643 to /host/opt/cni/bin/\\\\n2025-12-10T12:16:02Z [verbose] multus-daemon started\\\\n2025-12-10T12:16:02Z [verbose] Readiness Indicator file check\\\\n2025-12-10T12:16:47Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-10T12:16:02Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:16:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lmrn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T12:16:01Z\\\"}}\" for pod \"openshift-multus\"/\"multus-r6wmt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:56Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:56 crc kubenswrapper[4689]: I1210 12:16:56.514331 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-dk9hq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4aff4b91-a4af-46bd-93ba-a7c1356fc498\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e400561fd4aa4adc6f0733a71b55aff1a6163a71bf34147aeeb41f95001d8222\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:16:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c2rw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T12:16:04Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-dk9hq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:56Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:56 crc kubenswrapper[4689]: I1210 12:16:56.527949 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-2h8hs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a7c472c-c2dd-4a9d-aefb-48ac5cc196f8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kcnt5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kcnt5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T12:16:15Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-2h8hs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:56Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:56 crc kubenswrapper[4689]: I1210 12:16:56.540477 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:16:56 crc kubenswrapper[4689]: I1210 12:16:56.540508 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:16:56 crc kubenswrapper[4689]: I1210 12:16:56.540518 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:16:56 crc kubenswrapper[4689]: I1210 12:16:56.540531 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:16:56 crc kubenswrapper[4689]: I1210 12:16:56.540542 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:56Z","lastTransitionTime":"2025-12-10T12:16:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 12:16:56 crc kubenswrapper[4689]: I1210 12:16:56.548096 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"948e2421-6bdf-45d9-b484-3d7cfcbeff5f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73fb5dc9c58a2a2027866853aecf404bcc2babdb9e5bbd3cf0efcb16f38ba351\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e167a00c3a4e40154ca58094cba41a896104fd81044158589ed4d7a06dba9e7d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://394f62bb6c4fa6d9b63f8404087ff96594e2e38240c6c7886ad71fbb7863681f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/ku
bernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d3818f3ff6d23a4b5f19b82c3ce7c6a9a0275f981966b78d0f1d5977acfc5da\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cc1b41d65298f8986fa477bbe4462be91f61b46be4420fac49fe9b1cff91d4d3\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-10T12:15:57Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1210 12:15:46.775770 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1210 12:15:46.777515 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-772356655/tls.crt::/tmp/serving-cert-772356655/tls.key\\\\\\\"\\\\nI1210 12:15:55.300683 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1210 12:15:55.311834 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1210 12:15:55.311870 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1210 12:15:55.311907 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1210 12:15:55.311937 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1210 12:15:55.334823 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1210 12:15:55.334877 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1210 12:15:55.334889 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1210 12:15:55.334913 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1210 12:15:55.334923 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1210 12:15:55.334932 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1210 12:15:55.334948 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1210 12:15:55.335346 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1210 12:15:55.337241 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-10T12:15:36Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://63c6b92fa349dafbaaa4e2700676f8c1c53627fc703dba25853c7457bf7eb439\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:34Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9e2628add435f4ba7d9b92d1f4e9cf619c37e8dc954d6fce13ef72581ae775e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9e2628add435f4ba7d9b92d1f4e9cf619c37e8dc954d6fce13ef72581ae775e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T12:15:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T12:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T12:15:32Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:56Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:56 crc kubenswrapper[4689]: I1210 12:16:56.563891 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"18bf8be2-3022-4c8b-a062-e12ddac113a8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://44524b486bd4c6b69eb1219605d0545eaa129605484dfbca87ca741ce71c2991\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://604acbb51bb890eb420955051c89aa5a8f2005ce63f1fe39c09c35dc3c8b4470\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d4addfb661bd4ba43817e6e5db29f30589839e5afe46a8f271e43c98920efdc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a68967b286ccc9c486e05b03201574bfa811ee68b0cbcbb2b0b1a379f5c1bab\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T12:15:32Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:56Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:56 crc kubenswrapper[4689]: I1210 12:16:56.577127 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:57Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:56Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:56 crc kubenswrapper[4689]: I1210 12:16:56.593795 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-db6zk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a41ebdcd-910f-4669-992d-296e1a92162b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://21eff13bf08e5631078b18a6e63eea07f7ae0881f4d340b92273d47e443fc71c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:16:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68btn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ffc5e2e1cb81aa530273fc5c44b4eb25b760dfffa203e312280e7a1c0150505\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\
\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:16:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68btn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T12:16:01Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-db6zk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:56Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:56 crc kubenswrapper[4689]: I1210 12:16:56.613147 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-k7kbl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bf732b59-88ab-4673-9d0c-e479b9138b30\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6885c72d7b303bd0ad949b6b9f582f3c1a503d3ac4d1871ce164351cc5231233\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:16:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6jq2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b32c1925876e067185b141b3e107d582b5ae3b422965cded935fc67935907244\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\
"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b32c1925876e067185b141b3e107d582b5ae3b422965cded935fc67935907244\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T12:16:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T12:16:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6jq2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://56edfde6c25c67ed13c859224563601ca60ff86122c0ccfd0eafd009f56c2c6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://56edfde6c25c67ed13c859224563601ca60ff86122c0ccfd0eafd009f56c2c6a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T12:16:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T12:16:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6jq2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6972bdb1edc7d0216e984f43dd93319f5fbe91dbbf251fddcab234566c12694\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6972bdb1edc7d0216e984f43dd93319f5fbe91dbbf251fddcab234566c12694\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T12:16:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T12:16:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount
\\\",\\\"name\\\":\\\"kube-api-access-6jq2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://92ed0a4af61a7d20d96a8628e34b3b07cf93fb88bb8c67a2db58c140977d4b8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92ed0a4af61a7d20d96a8628e34b3b07cf93fb88bb8c67a2db58c140977d4b8f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T12:16:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T12:16:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6jq2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a00343d8419f3bd26ca5eebc37892fa34c893db822a472b2d16b79b9a886130\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3a00343d8419f3bd26ca5eebc37892fa34c893db822a472b2d16b79b9a886130\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T12:16:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T12:16:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6jq2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae1b0abfe7d118fce3906b0fb8609b89bb5a7d9a72e77c989248a37fbcaca589\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ae1b0abfe7d118fce3906b0fb8609b89bb5a7d9a72e77c989248a37fbcaca589\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T12:16:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T12:16:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\
"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6jq2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T12:16:01Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-k7kbl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:56Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:56 crc kubenswrapper[4689]: I1210 12:16:56.633464 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-d5s24" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b434fe8e-c4c2-4979-a9b6-8561523c2d9d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4dd21d5fefe31117dd9f13896400256a11840488886f8455dc177c0389e942a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:16:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgcnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://69f1e8960af0a51f61c14bd54dc3d3be7189911242699061b6a5e265bf23da21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:16:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgcnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6523d85b2b13f0025f328e66fc54d988a262d66575656c50d792bebc6a422301\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:16:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgcnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4faf578e1c2dcab798cd856befbd2dfc061f7e003ead2304d437041d5ee1dd07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:16:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgcnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eebe28cf16767ec011f1e571cdb5f6babf931f4444e92fccf2f1bb5aa5deeec9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:16:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgcnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://17d173f3388ef7471022163cbdf3e0a2454e42b1ed621b15512414504b2d9723\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:16:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgcnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb00374f8054c3041d91465c99271185be6fdb25
8a541ba83aed5bb1059178e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f88a12855fe2301c439492cf5773c7088e33a7d6bcca5354ebcfd8d7a64bcf9\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-10T12:16:26Z\\\",\\\"message\\\":\\\"all/v1/apis/informers/externalversions/factory.go:140\\\\nI1210 12:16:26.477038 6326 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1210 12:16:26.477070 6326 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1210 12:16:26.477096 6326 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1210 12:16:26.477127 6326 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1210 12:16:26.477136 6326 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1210 12:16:26.477156 6326 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1210 12:16:26.477157 6326 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1210 12:16:26.477180 6326 handler.go:208] Removed *v1.Node event handler 2\\\\nI1210 12:16:26.477212 6326 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1210 12:16:26.477246 6326 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1210 12:16:26.477297 6326 handler.go:208] Removed *v1.Node event handler 7\\\\nI1210 12:16:26.477299 6326 factory.go:656] Stopping watch factory\\\\nI1210 12:16:26.477310 6326 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1210 12:16:26.477326 6326 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1210 12:16:26.477355 6326 handler.go:208] Removed *v1.Namespace ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-10T12:16:25Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb00374f8054c3041d91465c99271185be6fdb258a541ba83aed5bb1059178e1\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-10T12:16:55Z\\\",\\\"message\\\":\\\"ent handler 7 for removal\\\\nI1210 12:16:55.463132 6753 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI1210 12:16:55.463140 6753 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI1210 12:16:55.463153 6753 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1210 12:16:55.463159 6753 handler.go:208] Removed *v1.Pod event handler 6\\\\nI1210 12:16:55.463168 6753 handler.go:208] Removed *v1.Node event handler 2\\\\nI1210 12:16:55.463179 6753 handler.go:208] Removed *v1.Node event handler 7\\\\nI1210 12:16:55.463187 6753 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1210 12:16:55.463206 6753 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1210 12:16:55.463370 6753 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1210 12:16:55.463655 6753 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1210 12:16:55.463685 6753 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1210 12:16:55.463715 6753 factory.go:656] Stopping watch factory\\\\nI1210 
12:16:55.463727 6753 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1210 12:16:55.463735 6753 handler.go:208] Removed *v1.EgressFirewall ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-10T12:16:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgcnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e28b779b8a8693084ab0997decc53b3a8eba61bef24ac90ff251d915f086568c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:16:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgcnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5b9a4f
871b33ab8d7dbb8148e8a118858ae0d529510a8aebda0a3d9db97c137\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b5b9a4f871b33ab8d7dbb8148e8a118858ae0d529510a8aebda0a3d9db97c137\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T12:16:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T12:16:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgcnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T12:16:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-d5s24\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:56Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:56 crc kubenswrapper[4689]: I1210 12:16:56.643586 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:16:56 crc kubenswrapper[4689]: I1210 12:16:56.643637 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:16:56 crc kubenswrapper[4689]: I1210 12:16:56.643646 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:16:56 crc kubenswrapper[4689]: I1210 12:16:56.643660 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:16:56 crc kubenswrapper[4689]: I1210 12:16:56.643670 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:56Z","lastTransitionTime":"2025-12-10T12:16:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 12:16:56 crc kubenswrapper[4689]: I1210 12:16:56.746764 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:16:56 crc kubenswrapper[4689]: I1210 12:16:56.746827 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:16:56 crc kubenswrapper[4689]: I1210 12:16:56.746844 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:16:56 crc kubenswrapper[4689]: I1210 12:16:56.746870 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:16:56 crc kubenswrapper[4689]: I1210 12:16:56.746888 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:56Z","lastTransitionTime":"2025-12-10T12:16:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 12:16:56 crc kubenswrapper[4689]: I1210 12:16:56.850244 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:16:56 crc kubenswrapper[4689]: I1210 12:16:56.850326 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:16:56 crc kubenswrapper[4689]: I1210 12:16:56.850344 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:16:56 crc kubenswrapper[4689]: I1210 12:16:56.850372 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:16:56 crc kubenswrapper[4689]: I1210 12:16:56.850391 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:56Z","lastTransitionTime":"2025-12-10T12:16:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 12:16:56 crc kubenswrapper[4689]: I1210 12:16:56.953488 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:16:56 crc kubenswrapper[4689]: I1210 12:16:56.953553 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:16:56 crc kubenswrapper[4689]: I1210 12:16:56.953573 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:16:56 crc kubenswrapper[4689]: I1210 12:16:56.953603 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:16:56 crc kubenswrapper[4689]: I1210 12:16:56.953626 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:56Z","lastTransitionTime":"2025-12-10T12:16:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 12:16:57 crc kubenswrapper[4689]: I1210 12:16:57.057364 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:16:57 crc kubenswrapper[4689]: I1210 12:16:57.057439 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:16:57 crc kubenswrapper[4689]: I1210 12:16:57.057456 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:16:57 crc kubenswrapper[4689]: I1210 12:16:57.057480 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:16:57 crc kubenswrapper[4689]: I1210 12:16:57.057496 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:57Z","lastTransitionTime":"2025-12-10T12:16:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 12:16:57 crc kubenswrapper[4689]: I1210 12:16:57.161142 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:16:57 crc kubenswrapper[4689]: I1210 12:16:57.161219 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:16:57 crc kubenswrapper[4689]: I1210 12:16:57.161240 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:16:57 crc kubenswrapper[4689]: I1210 12:16:57.161274 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:16:57 crc kubenswrapper[4689]: I1210 12:16:57.161295 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:57Z","lastTransitionTime":"2025-12-10T12:16:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 12:16:57 crc kubenswrapper[4689]: I1210 12:16:57.264037 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:16:57 crc kubenswrapper[4689]: I1210 12:16:57.264116 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:16:57 crc kubenswrapper[4689]: I1210 12:16:57.264140 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:16:57 crc kubenswrapper[4689]: I1210 12:16:57.264170 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:16:57 crc kubenswrapper[4689]: I1210 12:16:57.264193 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:57Z","lastTransitionTime":"2025-12-10T12:16:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 12:16:57 crc kubenswrapper[4689]: I1210 12:16:57.350394 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-d5s24_b434fe8e-c4c2-4979-a9b6-8561523c2d9d/ovnkube-controller/3.log" Dec 10 12:16:57 crc kubenswrapper[4689]: I1210 12:16:57.361286 4689 scope.go:117] "RemoveContainer" containerID="bb00374f8054c3041d91465c99271185be6fdb258a541ba83aed5bb1059178e1" Dec 10 12:16:57 crc kubenswrapper[4689]: E1210 12:16:57.361540 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-d5s24_openshift-ovn-kubernetes(b434fe8e-c4c2-4979-a9b6-8561523c2d9d)\"" pod="openshift-ovn-kubernetes/ovnkube-node-d5s24" podUID="b434fe8e-c4c2-4979-a9b6-8561523c2d9d" Dec 10 12:16:57 crc kubenswrapper[4689]: I1210 12:16:57.367073 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:16:57 crc kubenswrapper[4689]: I1210 12:16:57.367151 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:16:57 crc kubenswrapper[4689]: I1210 12:16:57.367162 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:16:57 crc kubenswrapper[4689]: I1210 12:16:57.367180 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:16:57 crc kubenswrapper[4689]: I1210 12:16:57.367193 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:57Z","lastTransitionTime":"2025-12-10T12:16:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 12:16:57 crc kubenswrapper[4689]: I1210 12:16:57.379131 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f310ccc9-5093-44fe-8271-32ffed1e4596\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be1e74c974ed4454098265113e97e5d5d27b2ddb6ff5d89cb891c1686290f8f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://14c34a55854008d99289ec6ff7df4ab7b8c60658569094a88e6740ceb28714d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://66af8611123a29e82941af3a1facb45f06c358ffb277bd037af034ae97364d56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"
cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b2e1ebfdbacde06d36ebe61a04c2977a980def50e80715de5f8a9a292e661915\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b2e1ebfdbacde06d36ebe61a04c2977a980def50e80715de5f8a9a292e661915\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T12:15:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T12:15:33Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T12:15:32Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:57Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:57 crc kubenswrapper[4689]: I1210 12:16:57.399752 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef7248d1db79267842514a13623e9e08f8e412e4af33a70b4fceafe7a9e43392\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:57Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:57 crc kubenswrapper[4689]: I1210 12:16:57.418292 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:57Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:57 crc kubenswrapper[4689]: I1210 12:16:57.437293 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-r6wmt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3713b4f8-2ee3-4078-859a-dca17076f9a6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60b9e7f27ed577b35c0ecc9f2f205e63e25d0fc8c22e4e49e13cd395ff14a83e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41b7cefca05f1180d687337ba2a408d0db12aad394a7e9619e004758e5d79ca0\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-10T12:16:47Z\\\",\\\"message\\\":\\\"2025-12-10T12:16:02+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_3a34b2bc-7e9b-4e1e-875d-36e662eb6643\\\\n2025-12-10T12:16:02+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_3a34b2bc-7e9b-4e1e-875d-36e662eb6643 to /host/opt/cni/bin/\\\\n2025-12-10T12:16:02Z [verbose] multus-daemon started\\\\n2025-12-10T12:16:02Z [verbose] Readiness Indicator file check\\\\n2025-12-10T12:16:47Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-10T12:16:02Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:16:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lmrn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T12:16:01Z\\\"}}\" for pod \"openshift-multus\"/\"multus-r6wmt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:57Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:57 crc kubenswrapper[4689]: I1210 12:16:57.452588 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-dk9hq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4aff4b91-a4af-46bd-93ba-a7c1356fc498\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e400561fd4aa4adc6f0733a71b55aff1a6163a71bf34147aeeb41f95001d8222\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:16:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c2rw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T12:16:04Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-dk9hq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:57Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:57 crc kubenswrapper[4689]: I1210 12:16:57.469257 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-2h8hs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a7c472c-c2dd-4a9d-aefb-48ac5cc196f8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kcnt5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kcnt5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T12:16:15Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-2h8hs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:57Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:57 crc kubenswrapper[4689]: I1210 12:16:57.470421 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:16:57 crc kubenswrapper[4689]: I1210 12:16:57.470492 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:16:57 crc kubenswrapper[4689]: I1210 12:16:57.470514 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:16:57 crc kubenswrapper[4689]: I1210 12:16:57.470545 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:16:57 crc kubenswrapper[4689]: I1210 12:16:57.470634 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:57Z","lastTransitionTime":"2025-12-10T12:16:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 12:16:57 crc kubenswrapper[4689]: I1210 12:16:57.493467 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"948e2421-6bdf-45d9-b484-3d7cfcbeff5f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73fb5dc9c58a2a2027866853aecf404bcc2babdb9e5bbd3cf0efcb16f38ba351\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e167a00c3a4e40154ca58094cba41a896104fd81044158589ed4d7a06dba9e7d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://394f62bb6c4fa6d9b63f8404087ff96594e2e38240c6c7886ad71fbb7863681f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/ku
bernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d3818f3ff6d23a4b5f19b82c3ce7c6a9a0275f981966b78d0f1d5977acfc5da\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cc1b41d65298f8986fa477bbe4462be91f61b46be4420fac49fe9b1cff91d4d3\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-10T12:15:57Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1210 12:15:46.775770 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1210 12:15:46.777515 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-772356655/tls.crt::/tmp/serving-cert-772356655/tls.key\\\\\\\"\\\\nI1210 12:15:55.300683 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1210 12:15:55.311834 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1210 12:15:55.311870 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1210 12:15:55.311907 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1210 12:15:55.311937 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1210 12:15:55.334823 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1210 12:15:55.334877 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1210 12:15:55.334889 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1210 12:15:55.334913 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1210 12:15:55.334923 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1210 12:15:55.334932 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1210 12:15:55.334948 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1210 12:15:55.335346 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1210 12:15:55.337241 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-10T12:15:36Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://63c6b92fa349dafbaaa4e2700676f8c1c53627fc703dba25853c7457bf7eb439\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:34Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9e2628add435f4ba7d9b92d1f4e9cf619c37e8dc954d6fce13ef72581ae775e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9e2628add435f4ba7d9b92d1f4e9cf619c37e8dc954d6fce13ef72581ae775e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T12:15:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T12:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T12:15:32Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:57Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:57 crc kubenswrapper[4689]: I1210 12:16:57.497202 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 10 12:16:57 crc kubenswrapper[4689]: I1210 12:16:57.497224 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 10 12:16:57 crc kubenswrapper[4689]: I1210 12:16:57.497210 4689 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-2h8hs" Dec 10 12:16:57 crc kubenswrapper[4689]: E1210 12:16:57.497368 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 10 12:16:57 crc kubenswrapper[4689]: E1210 12:16:57.497522 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 10 12:16:57 crc kubenswrapper[4689]: E1210 12:16:57.497694 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2h8hs" podUID="3a7c472c-c2dd-4a9d-aefb-48ac5cc196f8" Dec 10 12:16:57 crc kubenswrapper[4689]: I1210 12:16:57.511910 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"18bf8be2-3022-4c8b-a062-e12ddac113a8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://44524b486bd4c6b69eb1219605d0545eaa129605484dfbca87ca741ce71c2991\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://604acbb51bb890eb420955051c89aa5a8f2005ce63f1fe39c09c35dc3c8b4470\\\",\\\"image\\\":\\\"quay.io/openshift-release
-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d4addfb661bd4ba43817e6e5db29f30589839e5afe46a8f271e43c98920efdc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a68967b286ccc9c486e05b03201574bfa811ee68b0cbcbb2b0b1a379f5c1bab\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T12:15:32Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:57Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:57 crc kubenswrapper[4689]: I1210 12:16:57.532562 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:57Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:57Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:57 crc kubenswrapper[4689]: I1210 12:16:57.544748 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-db6zk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a41ebdcd-910f-4669-992d-296e1a92162b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://21eff13bf08e5631078b18a6e63eea07f7ae0881f4d340b92273d47e443fc71c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:16:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68btn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ffc5e2e1cb81aa530273fc5c44b4eb25b760dfffa203e312280e7a1c0150505\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:16:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68btn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T12:16:01Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-db6zk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:57Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:57 crc kubenswrapper[4689]: I1210 12:16:57.569145 4689 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-additional-cni-plugins-k7kbl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bf732b59-88ab-4673-9d0c-e479b9138b30\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6885c72d7b303bd0ad949b6b9f582f3c1a503d3ac4d1871ce164351cc5231233\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:16:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6jq2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b32c1925876e067185b141b3e107d582b5ae3b422965cded935fc67935907244\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b32c1925876e067185b141b3e107d582b5ae3b422965cded935fc67935907244\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T12:16:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T12:16:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6jq2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://56edfde6c25c67ed13c859224563601ca60ff86122c0ccfd0eafd009f56c2c6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2c
c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://56edfde6c25c67ed13c859224563601ca60ff86122c0ccfd0eafd009f56c2c6a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T12:16:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T12:16:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6jq2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6972bdb1edc7d0216e984f43dd93319f5fbe91dbbf251fddcab234566c12694\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6972bdb1edc7d0216e984f43dd93319f5fbe91dbbf251fddcab234566c12694\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T12:16:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T12:16:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6jq2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://92ed0a4af61a7d20d96a8628e34b3b07cf93fb88bb8c67a2db58c140977d4b8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92ed0a4af61a7d20d96a8628e34b3b07cf93fb88bb8c67a2db58c140977d4b8f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T12:16:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T12:16:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-re
lease\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6jq2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a00343d8419f3bd26ca5eebc37892fa34c893db822a472b2d16b79b9a886130\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3a00343d8419f3bd26ca5eebc37892fa34c893db822a472b2d16b79b9a886130\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T12:16:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T12:16:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6jq2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae1b0abfe7d118fce3906b0fb8609b89bb5a7d9a72e77c989248a37fbcaca589\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ae1b0abfe7d118fce3906b0fb8609b89bb5a7d9a72e77c989248a37fbcaca589\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T12:16:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T12:16:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6jq2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T12:16:01Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-k7kbl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:57Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:57 crc kubenswrapper[4689]: I1210 12:16:57.574466 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:16:57 crc kubenswrapper[4689]: I1210 12:16:57.574511 4689 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:16:57 crc kubenswrapper[4689]: I1210 12:16:57.574523 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:16:57 crc kubenswrapper[4689]: I1210 12:16:57.574542 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:16:57 crc kubenswrapper[4689]: I1210 12:16:57.574556 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:57Z","lastTransitionTime":"2025-12-10T12:16:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 12:16:57 crc kubenswrapper[4689]: I1210 12:16:57.590349 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-d5s24" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b434fe8e-c4c2-4979-a9b6-8561523c2d9d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4dd21d5fefe31117dd9f13896400256a11840488886f8455dc177c0389e942a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:16:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgcnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://69f1e8960af0a51f61c14bd54dc3d3be7189911242699061b6a5e265bf23da21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:16:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgcnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6523d85b2b13f0025f328e66fc54d988a262d66575656c50d792bebc6a422301\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:16:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgcnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4faf578e1c2dcab798cd856befbd2dfc061f7e003ead2304d437041d5ee1dd07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:16:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgcnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eebe28cf16767ec011f1e571cdb5f6babf931f4444e92fccf2f1bb5aa5deeec9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:16:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgcnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://17d173f3388ef7471022163cbdf3e0a2454e42b1ed621b15512414504b2d9723\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:16:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgcnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb00374f8054c3041d91465c99271185be6fdb25
8a541ba83aed5bb1059178e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb00374f8054c3041d91465c99271185be6fdb258a541ba83aed5bb1059178e1\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-10T12:16:55Z\\\",\\\"message\\\":\\\"ent handler 7 for removal\\\\nI1210 12:16:55.463132 6753 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI1210 12:16:55.463140 6753 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI1210 12:16:55.463153 6753 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1210 12:16:55.463159 6753 handler.go:208] Removed *v1.Pod event handler 6\\\\nI1210 12:16:55.463168 6753 handler.go:208] Removed *v1.Node event handler 2\\\\nI1210 12:16:55.463179 6753 handler.go:208] Removed *v1.Node event handler 7\\\\nI1210 12:16:55.463187 6753 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1210 12:16:55.463206 6753 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1210 12:16:55.463370 6753 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1210 12:16:55.463655 6753 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1210 12:16:55.463685 6753 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1210 12:16:55.463715 6753 factory.go:656] Stopping watch factory\\\\nI1210 12:16:55.463727 6753 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1210 12:16:55.463735 6753 handler.go:208] Removed *v1.EgressFirewall ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-10T12:16:54Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-d5s24_openshift-ovn-kubernetes(b434fe8e-c4c2-4979-a9b6-8561523c2d9d)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgcnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e28b779b8a8693084ab0997decc53b3a8eba61bef24ac90ff251d915f086568c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:16:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgcnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5b9a4f871b33ab8d7dbb8148e8a118858ae0d529510a8aebda0a3d9db97c137\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b5b9a4f871b33ab8d7dbb8148e8a118858ae0d529510a8aebda0a3d9db97c137\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T12:16:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T12:16:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgcnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T12:16:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-d5s24\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:57Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:57 crc kubenswrapper[4689]: I1210 12:16:57.610770 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:57Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:57Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:57 crc kubenswrapper[4689]: I1210 12:16:57.629398 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-6ffqh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7908b49b-e3d2-4d30-95e0-467f5542d445\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34ba576e7b07f1a070450f2a750ed7f9e1ed7b3023f13340ffadbdbbc2f564bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:16:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8t2fq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5081b88bdb2e39135bca6cf70283aeb3f9541d28ab7336c5fa9d17f28496f7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2
099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:16:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8t2fq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T12:16:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-6ffqh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:57Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:57 crc kubenswrapper[4689]: I1210 12:16:57.657757 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06f8726511d5ea564b457eeb6de5b75275d78ee946595d8fbe37e575f3a1b4ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://879d2e122b3f677f755d79b93cd0abed60360a7b58bd685fbc229157c3358bbf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\
\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:57Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:57 crc kubenswrapper[4689]: I1210 12:16:57.670568 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-77s7q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9f25cd73-d88f-4d52-93b6-483589dc4ac4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8ffa3e3c20704e59130d99f82bf7c73982dadfcd06b8b666265324aeb9115ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:16:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kwnnt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T12:16:01Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-77s7q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:16:57Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:57 crc kubenswrapper[4689]: I1210 12:16:57.677368 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:16:57 crc kubenswrapper[4689]: I1210 12:16:57.677418 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:16:57 crc kubenswrapper[4689]: I1210 12:16:57.677433 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:16:57 crc kubenswrapper[4689]: I1210 12:16:57.677457 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:16:57 crc kubenswrapper[4689]: I1210 12:16:57.677474 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:57Z","lastTransitionTime":"2025-12-10T12:16:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 12:16:57 crc kubenswrapper[4689]: I1210 12:16:57.683830 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81ec49dd709fb88fdd4a831be61a82471be37ad10b0dca1cb0e34468ef5e0e3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:16:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current 
time 2025-12-10T12:16:57Z is after 2025-08-24T17:21:41Z" Dec 10 12:16:57 crc kubenswrapper[4689]: I1210 12:16:57.779886 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:16:57 crc kubenswrapper[4689]: I1210 12:16:57.779935 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:16:57 crc kubenswrapper[4689]: I1210 12:16:57.779950 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:16:57 crc kubenswrapper[4689]: I1210 12:16:57.780009 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:16:57 crc kubenswrapper[4689]: I1210 12:16:57.780027 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:57Z","lastTransitionTime":"2025-12-10T12:16:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 12:16:57 crc kubenswrapper[4689]: I1210 12:16:57.882673 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:16:57 crc kubenswrapper[4689]: I1210 12:16:57.882722 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:16:57 crc kubenswrapper[4689]: I1210 12:16:57.882741 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:16:57 crc kubenswrapper[4689]: I1210 12:16:57.882762 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:16:57 crc kubenswrapper[4689]: I1210 12:16:57.882777 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:57Z","lastTransitionTime":"2025-12-10T12:16:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
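Every "Failed to update status for pod" entry above fails for the same reason: the kubelet's status patches must pass the pod.network-node-identity.openshift.io mutating webhook at https://127.0.0.1:9743/pod, and that webhook's serving certificate expired on 2025-08-24T17:21:41Z, months before the node's current clock of 2025-12-10T12:16:57Z. The rejection happens inside the ordinary Go TLS handshake when the client verifies the chain. A minimal sketch of the same x509 validity-window check, with a hypothetical certificate path, might look like this:

// certcheck.go - sketch of the NotBefore/NotAfter test behind
// "x509: certificate has expired or is not yet valid".
// The input path is a placeholder, not a path from this node.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
	"time"
)

func main() {
	data, err := os.ReadFile("webhook-serving-cert.pem") // hypothetical
	if err != nil {
		log.Fatal(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		log.Fatal("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		log.Fatal(err)
	}
	now := time.Now()
	switch {
	case now.After(cert.NotAfter):
		// Matches the log: "current time ... is after 2025-08-24T17:21:41Z".
		fmt.Printf("expired: current time %s is after %s\n",
			now.Format(time.RFC3339), cert.NotAfter.Format(time.RFC3339))
	case now.Before(cert.NotBefore):
		fmt.Printf("not yet valid until %s\n", cert.NotBefore.Format(time.RFC3339))
	default:
		fmt.Println("certificate is within its validity window")
	}
}

On the node itself the certificate in question is the one mounted at /etc/webhook-cert/ in the network-node-identity webhook container (its volumeMounts are logged above), so that secret is where one would look first.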
Has your network provider started?"} Dec 10 12:16:57 crc kubenswrapper[4689]: I1210 12:16:57.986309 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:16:57 crc kubenswrapper[4689]: I1210 12:16:57.986358 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:16:57 crc kubenswrapper[4689]: I1210 12:16:57.986369 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:16:57 crc kubenswrapper[4689]: I1210 12:16:57.986392 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:16:57 crc kubenswrapper[4689]: I1210 12:16:57.986410 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:57Z","lastTransitionTime":"2025-12-10T12:16:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 12:16:58 crc kubenswrapper[4689]: I1210 12:16:58.089912 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:16:58 crc kubenswrapper[4689]: I1210 12:16:58.090030 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:16:58 crc kubenswrapper[4689]: I1210 12:16:58.090050 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:16:58 crc kubenswrapper[4689]: I1210 12:16:58.090076 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:16:58 crc kubenswrapper[4689]: I1210 12:16:58.090094 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:58Z","lastTransitionTime":"2025-12-10T12:16:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 12:16:58 crc kubenswrapper[4689]: I1210 12:16:58.192862 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:16:58 crc kubenswrapper[4689]: I1210 12:16:58.192906 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:16:58 crc kubenswrapper[4689]: I1210 12:16:58.192916 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:16:58 crc kubenswrapper[4689]: I1210 12:16:58.192933 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:16:58 crc kubenswrapper[4689]: I1210 12:16:58.192943 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:58Z","lastTransitionTime":"2025-12-10T12:16:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 12:16:58 crc kubenswrapper[4689]: I1210 12:16:58.296226 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:16:58 crc kubenswrapper[4689]: I1210 12:16:58.296256 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:16:58 crc kubenswrapper[4689]: I1210 12:16:58.296266 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:16:58 crc kubenswrapper[4689]: I1210 12:16:58.296279 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:16:58 crc kubenswrapper[4689]: I1210 12:16:58.296289 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:58Z","lastTransitionTime":"2025-12-10T12:16:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 12:16:58 crc kubenswrapper[4689]: I1210 12:16:58.399063 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:16:58 crc kubenswrapper[4689]: I1210 12:16:58.399184 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:16:58 crc kubenswrapper[4689]: I1210 12:16:58.399196 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:16:58 crc kubenswrapper[4689]: I1210 12:16:58.399216 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:16:58 crc kubenswrapper[4689]: I1210 12:16:58.399229 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:58Z","lastTransitionTime":"2025-12-10T12:16:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 12:16:58 crc kubenswrapper[4689]: I1210 12:16:58.497331 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 10 12:16:58 crc kubenswrapper[4689]: E1210 12:16:58.497565 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
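From here the journal settles into a steady cadence: several times per second the kubelet re-records the same five node events and re-asserts Ready=False with reason KubeletNotReady, because the container runtime still reports NetworkReady=false with no CNI configuration file in /etc/kubernetes/cni/net.d/. The readiness probe is, at its core, a scan of that directory for a usable network config. A sketch of such a check follows; treating .conf, .conflist and .json as the accepted extensions is an assumption about libcni, not something taken from this log:

// cnicheck.go - sketch of the test behind "no CNI configuration file
// in /etc/kubernetes/cni/net.d/": readiness flips only when a config
// file appears in the directory the runtime watches.
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	dir := "/etc/kubernetes/cni/net.d" // directory named in the log
	entries, err := os.ReadDir(dir)
	if err != nil {
		fmt.Printf("cannot read %s: %v\n", dir, err)
		return
	}
	var confs []string
	for _, e := range entries {
		// Assumed extension list; libcni's actual filter may differ.
		switch filepath.Ext(e.Name()) {
		case ".conf", ".conflist", ".json":
			confs = append(confs, e.Name())
		}
	}
	if len(confs) == 0 {
		fmt.Println("NetworkReady=false: no CNI configuration file found")
		return
	}
	fmt.Println("CNI configs present:", confs)
}

The condition should clear once the crash-looping ovnkube-node pod from the top of this excerpt comes up and writes its configuration, which is presumably why the kubelet keeps retrying rather than escalating.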
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 10 12:16:58 crc kubenswrapper[4689]: I1210 12:16:58.501905 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:16:58 crc kubenswrapper[4689]: I1210 12:16:58.501951 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:16:58 crc kubenswrapper[4689]: I1210 12:16:58.501967 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:16:58 crc kubenswrapper[4689]: I1210 12:16:58.502004 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:16:58 crc kubenswrapper[4689]: I1210 12:16:58.502016 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:58Z","lastTransitionTime":"2025-12-10T12:16:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 12:16:58 crc kubenswrapper[4689]: I1210 12:16:58.605137 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:16:58 crc kubenswrapper[4689]: I1210 12:16:58.605197 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:16:58 crc kubenswrapper[4689]: I1210 12:16:58.605208 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:16:58 crc kubenswrapper[4689]: I1210 12:16:58.605230 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:16:58 crc kubenswrapper[4689]: I1210 12:16:58.605248 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:58Z","lastTransitionTime":"2025-12-10T12:16:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 12:16:58 crc kubenswrapper[4689]: I1210 12:16:58.707748 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:16:58 crc kubenswrapper[4689]: I1210 12:16:58.707807 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:16:58 crc kubenswrapper[4689]: I1210 12:16:58.707823 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:16:58 crc kubenswrapper[4689]: I1210 12:16:58.707845 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:16:58 crc kubenswrapper[4689]: I1210 12:16:58.707861 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:58Z","lastTransitionTime":"2025-12-10T12:16:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 12:16:58 crc kubenswrapper[4689]: I1210 12:16:58.810220 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:16:58 crc kubenswrapper[4689]: I1210 12:16:58.810275 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:16:58 crc kubenswrapper[4689]: I1210 12:16:58.810287 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:16:58 crc kubenswrapper[4689]: I1210 12:16:58.810301 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:16:58 crc kubenswrapper[4689]: I1210 12:16:58.810311 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:58Z","lastTransitionTime":"2025-12-10T12:16:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 12:16:58 crc kubenswrapper[4689]: I1210 12:16:58.912940 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:16:58 crc kubenswrapper[4689]: I1210 12:16:58.913008 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:16:58 crc kubenswrapper[4689]: I1210 12:16:58.913023 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:16:58 crc kubenswrapper[4689]: I1210 12:16:58.913042 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:16:58 crc kubenswrapper[4689]: I1210 12:16:58.913059 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:58Z","lastTransitionTime":"2025-12-10T12:16:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 12:16:59 crc kubenswrapper[4689]: I1210 12:16:59.015699 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:16:59 crc kubenswrapper[4689]: I1210 12:16:59.016025 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:16:59 crc kubenswrapper[4689]: I1210 12:16:59.016035 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:16:59 crc kubenswrapper[4689]: I1210 12:16:59.016050 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:16:59 crc kubenswrapper[4689]: I1210 12:16:59.016060 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:59Z","lastTransitionTime":"2025-12-10T12:16:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 12:16:59 crc kubenswrapper[4689]: I1210 12:16:59.118998 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:16:59 crc kubenswrapper[4689]: I1210 12:16:59.119043 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:16:59 crc kubenswrapper[4689]: I1210 12:16:59.119054 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:16:59 crc kubenswrapper[4689]: I1210 12:16:59.119069 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:16:59 crc kubenswrapper[4689]: I1210 12:16:59.119082 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:59Z","lastTransitionTime":"2025-12-10T12:16:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 12:16:59 crc kubenswrapper[4689]: I1210 12:16:59.221549 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:16:59 crc kubenswrapper[4689]: I1210 12:16:59.221594 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:16:59 crc kubenswrapper[4689]: I1210 12:16:59.221601 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:16:59 crc kubenswrapper[4689]: I1210 12:16:59.221614 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:16:59 crc kubenswrapper[4689]: I1210 12:16:59.221622 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:59Z","lastTransitionTime":"2025-12-10T12:16:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 12:16:59 crc kubenswrapper[4689]: I1210 12:16:59.323965 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:16:59 crc kubenswrapper[4689]: I1210 12:16:59.324023 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:16:59 crc kubenswrapper[4689]: I1210 12:16:59.324032 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:16:59 crc kubenswrapper[4689]: I1210 12:16:59.324049 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:16:59 crc kubenswrapper[4689]: I1210 12:16:59.324058 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:59Z","lastTransitionTime":"2025-12-10T12:16:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 12:16:59 crc kubenswrapper[4689]: I1210 12:16:59.427312 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:16:59 crc kubenswrapper[4689]: I1210 12:16:59.427367 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:16:59 crc kubenswrapper[4689]: I1210 12:16:59.427386 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:16:59 crc kubenswrapper[4689]: I1210 12:16:59.427409 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:16:59 crc kubenswrapper[4689]: I1210 12:16:59.427428 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:59Z","lastTransitionTime":"2025-12-10T12:16:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 12:16:59 crc kubenswrapper[4689]: I1210 12:16:59.497557 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 10 12:16:59 crc kubenswrapper[4689]: I1210 12:16:59.497641 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 10 12:16:59 crc kubenswrapper[4689]: E1210 12:16:59.497732 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 10 12:16:59 crc kubenswrapper[4689]: I1210 12:16:59.497663 4689 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-2h8hs" Dec 10 12:16:59 crc kubenswrapper[4689]: E1210 12:16:59.497830 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 10 12:16:59 crc kubenswrapper[4689]: E1210 12:16:59.497936 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2h8hs" podUID="3a7c472c-c2dd-4a9d-aefb-48ac5cc196f8" Dec 10 12:16:59 crc kubenswrapper[4689]: I1210 12:16:59.529506 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:16:59 crc kubenswrapper[4689]: I1210 12:16:59.529546 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:16:59 crc kubenswrapper[4689]: I1210 12:16:59.529555 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:16:59 crc kubenswrapper[4689]: I1210 12:16:59.529568 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:16:59 crc kubenswrapper[4689]: I1210 12:16:59.529577 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:59Z","lastTransitionTime":"2025-12-10T12:16:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 12:16:59 crc kubenswrapper[4689]: I1210 12:16:59.632847 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:16:59 crc kubenswrapper[4689]: I1210 12:16:59.632908 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:16:59 crc kubenswrapper[4689]: I1210 12:16:59.632924 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:16:59 crc kubenswrapper[4689]: I1210 12:16:59.632949 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:16:59 crc kubenswrapper[4689]: I1210 12:16:59.632966 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:59Z","lastTransitionTime":"2025-12-10T12:16:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 12:16:59 crc kubenswrapper[4689]: I1210 12:16:59.735389 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:16:59 crc kubenswrapper[4689]: I1210 12:16:59.735438 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:16:59 crc kubenswrapper[4689]: I1210 12:16:59.735461 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:16:59 crc kubenswrapper[4689]: I1210 12:16:59.735479 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:16:59 crc kubenswrapper[4689]: I1210 12:16:59.735490 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:59Z","lastTransitionTime":"2025-12-10T12:16:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 12:16:59 crc kubenswrapper[4689]: I1210 12:16:59.838136 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:16:59 crc kubenswrapper[4689]: I1210 12:16:59.838166 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:16:59 crc kubenswrapper[4689]: I1210 12:16:59.838175 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:16:59 crc kubenswrapper[4689]: I1210 12:16:59.838187 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:16:59 crc kubenswrapper[4689]: I1210 12:16:59.838195 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:59Z","lastTransitionTime":"2025-12-10T12:16:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 12:16:59 crc kubenswrapper[4689]: I1210 12:16:59.941516 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:16:59 crc kubenswrapper[4689]: I1210 12:16:59.941562 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:16:59 crc kubenswrapper[4689]: I1210 12:16:59.941577 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:16:59 crc kubenswrapper[4689]: I1210 12:16:59.941596 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:16:59 crc kubenswrapper[4689]: I1210 12:16:59.941608 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:16:59Z","lastTransitionTime":"2025-12-10T12:16:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 12:17:00 crc kubenswrapper[4689]: I1210 12:17:00.044306 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:17:00 crc kubenswrapper[4689]: I1210 12:17:00.044355 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:17:00 crc kubenswrapper[4689]: I1210 12:17:00.044366 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:17:00 crc kubenswrapper[4689]: I1210 12:17:00.044383 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:17:00 crc kubenswrapper[4689]: I1210 12:17:00.044396 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:17:00Z","lastTransitionTime":"2025-12-10T12:17:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 12:17:00 crc kubenswrapper[4689]: I1210 12:17:00.146633 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:17:00 crc kubenswrapper[4689]: I1210 12:17:00.146664 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:17:00 crc kubenswrapper[4689]: I1210 12:17:00.146673 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:17:00 crc kubenswrapper[4689]: I1210 12:17:00.146689 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:17:00 crc kubenswrapper[4689]: I1210 12:17:00.146699 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:17:00Z","lastTransitionTime":"2025-12-10T12:17:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 12:17:00 crc kubenswrapper[4689]: I1210 12:17:00.249092 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:17:00 crc kubenswrapper[4689]: I1210 12:17:00.249131 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:17:00 crc kubenswrapper[4689]: I1210 12:17:00.249142 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:17:00 crc kubenswrapper[4689]: I1210 12:17:00.249157 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:17:00 crc kubenswrapper[4689]: I1210 12:17:00.249166 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:17:00Z","lastTransitionTime":"2025-12-10T12:17:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 12:17:00 crc kubenswrapper[4689]: I1210 12:17:00.351398 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:17:00 crc kubenswrapper[4689]: I1210 12:17:00.351435 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:17:00 crc kubenswrapper[4689]: I1210 12:17:00.351446 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:17:00 crc kubenswrapper[4689]: I1210 12:17:00.351461 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:17:00 crc kubenswrapper[4689]: I1210 12:17:00.351472 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:17:00Z","lastTransitionTime":"2025-12-10T12:17:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 12:17:00 crc kubenswrapper[4689]: I1210 12:17:00.453340 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:17:00 crc kubenswrapper[4689]: I1210 12:17:00.453374 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:17:00 crc kubenswrapper[4689]: I1210 12:17:00.453383 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:17:00 crc kubenswrapper[4689]: I1210 12:17:00.453397 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:17:00 crc kubenswrapper[4689]: I1210 12:17:00.453407 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:17:00Z","lastTransitionTime":"2025-12-10T12:17:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 12:17:00 crc kubenswrapper[4689]: I1210 12:17:00.498084 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 10 12:17:00 crc kubenswrapper[4689]: E1210 12:17:00.498204 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
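The "No sandbox for pod can be found. Need to start a new one" / "Error syncing pod, skipping" pairs are the other half of the same loop: network-check-source, network-check-target, networking-console-plugin and network-metrics-daemon all need fresh sandboxes, but a sandbox for a cluster-network pod cannot be wired up until CNI is ready, so their sync is skipped and retried. Host-network pods are exempt, which is why node-resolver-77s7q and the OVN control-plane pods above run normally. A toy version of that gate; the types and the per-pod hostNetwork values are illustrative, not the kubelet's:

// syncgate.go - illustrative gate behind "network is not ready:
// container runtime network not ready": non-host-network pods are
// skipped while NetworkReady=false.
package main

import "fmt"

type podRef struct {
	name        string
	hostNetwork bool
}

func canSync(networkReady bool, p podRef) error {
	if !networkReady && !p.hostNetwork {
		return fmt.Errorf("network is not ready: container runtime network not ready")
	}
	return nil
}

func main() {
	networkReady := false // what the runtime reports throughout this log
	pods := []podRef{
		{"openshift-multus/network-metrics-daemon-2h8hs", false},
		{"openshift-dns/node-resolver-77s7q", true}, // assumed host-network
	}
	for _, p := range pods {
		if err := canSync(networkReady, p); err != nil {
			fmt.Printf("skip %s: %v\n", p.name, err)
		} else {
			fmt.Printf("sync %s\n", p.name)
		}
	}
}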
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 10 12:17:00 crc kubenswrapper[4689]: I1210 12:17:00.508560 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc"] Dec 10 12:17:00 crc kubenswrapper[4689]: I1210 12:17:00.555341 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:17:00 crc kubenswrapper[4689]: I1210 12:17:00.555408 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:17:00 crc kubenswrapper[4689]: I1210 12:17:00.555428 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:17:00 crc kubenswrapper[4689]: I1210 12:17:00.555454 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:17:00 crc kubenswrapper[4689]: I1210 12:17:00.555472 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:17:00Z","lastTransitionTime":"2025-12-10T12:17:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 12:17:00 crc kubenswrapper[4689]: I1210 12:17:00.657792 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:17:00 crc kubenswrapper[4689]: I1210 12:17:00.657833 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:17:00 crc kubenswrapper[4689]: I1210 12:17:00.657847 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:17:00 crc kubenswrapper[4689]: I1210 12:17:00.657866 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:17:00 crc kubenswrapper[4689]: I1210 12:17:00.657882 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:17:00Z","lastTransitionTime":"2025-12-10T12:17:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 12:17:00 crc kubenswrapper[4689]: I1210 12:17:00.760868 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:17:00 crc kubenswrapper[4689]: I1210 12:17:00.760907 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:17:00 crc kubenswrapper[4689]: I1210 12:17:00.760918 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:17:00 crc kubenswrapper[4689]: I1210 12:17:00.760933 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:17:00 crc kubenswrapper[4689]: I1210 12:17:00.760944 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:17:00Z","lastTransitionTime":"2025-12-10T12:17:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 12:17:00 crc kubenswrapper[4689]: I1210 12:17:00.863356 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:17:00 crc kubenswrapper[4689]: I1210 12:17:00.863407 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:17:00 crc kubenswrapper[4689]: I1210 12:17:00.863423 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:17:00 crc kubenswrapper[4689]: I1210 12:17:00.863444 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:17:00 crc kubenswrapper[4689]: I1210 12:17:00.863480 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:17:00Z","lastTransitionTime":"2025-12-10T12:17:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 12:17:00 crc kubenswrapper[4689]: I1210 12:17:00.966237 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:17:00 crc kubenswrapper[4689]: I1210 12:17:00.966269 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:17:00 crc kubenswrapper[4689]: I1210 12:17:00.966279 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:17:00 crc kubenswrapper[4689]: I1210 12:17:00.966295 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:17:00 crc kubenswrapper[4689]: I1210 12:17:00.966307 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:17:00Z","lastTransitionTime":"2025-12-10T12:17:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 12:17:01 crc kubenswrapper[4689]: I1210 12:17:01.069299 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:17:01 crc kubenswrapper[4689]: I1210 12:17:01.069355 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:17:01 crc kubenswrapper[4689]: I1210 12:17:01.069371 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:17:01 crc kubenswrapper[4689]: I1210 12:17:01.069392 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:17:01 crc kubenswrapper[4689]: I1210 12:17:01.069410 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:17:01Z","lastTransitionTime":"2025-12-10T12:17:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 12:17:01 crc kubenswrapper[4689]: I1210 12:17:01.172112 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:17:01 crc kubenswrapper[4689]: I1210 12:17:01.172156 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:17:01 crc kubenswrapper[4689]: I1210 12:17:01.172167 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:17:01 crc kubenswrapper[4689]: I1210 12:17:01.172182 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:17:01 crc kubenswrapper[4689]: I1210 12:17:01.172197 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:17:01Z","lastTransitionTime":"2025-12-10T12:17:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 12:17:01 crc kubenswrapper[4689]: I1210 12:17:01.275055 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:17:01 crc kubenswrapper[4689]: I1210 12:17:01.275135 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:17:01 crc kubenswrapper[4689]: I1210 12:17:01.275160 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:17:01 crc kubenswrapper[4689]: I1210 12:17:01.275193 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:17:01 crc kubenswrapper[4689]: I1210 12:17:01.275217 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:17:01Z","lastTransitionTime":"2025-12-10T12:17:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 12:17:01 crc kubenswrapper[4689]: I1210 12:17:01.327677 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 10 12:17:01 crc kubenswrapper[4689]: E1210 12:17:01.327863 4689 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-10 12:18:05.32783673 +0000 UTC m=+153.115917878 (durationBeforeRetry 1m4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 12:17:01 crc kubenswrapper[4689]: I1210 12:17:01.327942 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 10 12:17:01 crc kubenswrapper[4689]: I1210 12:17:01.328009 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 10 12:17:01 crc kubenswrapper[4689]: E1210 12:17:01.328140 4689 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 10 12:17:01 crc kubenswrapper[4689]: E1210 12:17:01.328190 4689 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-10 12:18:05.328180318 +0000 UTC m=+153.116261466 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 10 12:17:01 crc kubenswrapper[4689]: E1210 12:17:01.328193 4689 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 10 12:17:01 crc kubenswrapper[4689]: E1210 12:17:01.328292 4689 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-10 12:18:05.3282646 +0000 UTC m=+153.116345768 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 10 12:17:01 crc kubenswrapper[4689]: I1210 12:17:01.378166 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:17:01 crc kubenswrapper[4689]: I1210 12:17:01.378201 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:17:01 crc kubenswrapper[4689]: I1210 12:17:01.378213 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:17:01 crc kubenswrapper[4689]: I1210 12:17:01.378229 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:17:01 crc kubenswrapper[4689]: I1210 12:17:01.378241 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:17:01Z","lastTransitionTime":"2025-12-10T12:17:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 12:17:01 crc kubenswrapper[4689]: I1210 12:17:01.428575 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 10 12:17:01 crc kubenswrapper[4689]: I1210 12:17:01.428662 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 10 12:17:01 crc kubenswrapper[4689]: E1210 12:17:01.428790 4689 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 10 12:17:01 crc kubenswrapper[4689]: E1210 12:17:01.428824 4689 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 10 12:17:01 crc kubenswrapper[4689]: E1210 12:17:01.428844 4689 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 10 12:17:01 crc kubenswrapper[4689]: E1210 12:17:01.428793 4689 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 10 12:17:01 crc kubenswrapper[4689]: E1210 12:17:01.428938 4689 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 10 12:17:01 crc kubenswrapper[4689]: E1210 12:17:01.428957 4689 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 10 12:17:01 crc kubenswrapper[4689]: E1210 12:17:01.429025 4689 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-12-10 12:18:05.428999019 +0000 UTC m=+153.217080197 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 10 12:17:01 crc kubenswrapper[4689]: E1210 12:17:01.429056 4689 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-12-10 12:18:05.4290448 +0000 UTC m=+153.217125978 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 10 12:17:01 crc kubenswrapper[4689]: I1210 12:17:01.482711 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:17:01 crc kubenswrapper[4689]: I1210 12:17:01.482784 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:17:01 crc kubenswrapper[4689]: I1210 12:17:01.482802 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:17:01 crc kubenswrapper[4689]: I1210 12:17:01.482836 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:17:01 crc kubenswrapper[4689]: I1210 12:17:01.482860 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:17:01Z","lastTransitionTime":"2025-12-10T12:17:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 12:17:01 crc kubenswrapper[4689]: I1210 12:17:01.497475 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 10 12:17:01 crc kubenswrapper[4689]: I1210 12:17:01.497586 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2h8hs" Dec 10 12:17:01 crc kubenswrapper[4689]: E1210 12:17:01.497604 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 10 12:17:01 crc kubenswrapper[4689]: E1210 12:17:01.497789 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2h8hs" podUID="3a7c472c-c2dd-4a9d-aefb-48ac5cc196f8" Dec 10 12:17:01 crc kubenswrapper[4689]: I1210 12:17:01.497908 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 10 12:17:01 crc kubenswrapper[4689]: E1210 12:17:01.498238 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 10 12:17:01 crc kubenswrapper[4689]: I1210 12:17:01.585532 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:17:01 crc kubenswrapper[4689]: I1210 12:17:01.585580 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:17:01 crc kubenswrapper[4689]: I1210 12:17:01.585591 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:17:01 crc kubenswrapper[4689]: I1210 12:17:01.585608 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:17:01 crc kubenswrapper[4689]: I1210 12:17:01.585620 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:17:01Z","lastTransitionTime":"2025-12-10T12:17:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 12:17:01 crc kubenswrapper[4689]: I1210 12:17:01.688764 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:17:01 crc kubenswrapper[4689]: I1210 12:17:01.688808 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:17:01 crc kubenswrapper[4689]: I1210 12:17:01.688821 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:17:01 crc kubenswrapper[4689]: I1210 12:17:01.688839 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:17:01 crc kubenswrapper[4689]: I1210 12:17:01.688851 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:17:01Z","lastTransitionTime":"2025-12-10T12:17:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 12:17:01 crc kubenswrapper[4689]: I1210 12:17:01.790953 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:17:01 crc kubenswrapper[4689]: I1210 12:17:01.791278 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:17:01 crc kubenswrapper[4689]: I1210 12:17:01.791385 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:17:01 crc kubenswrapper[4689]: I1210 12:17:01.791530 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:17:01 crc kubenswrapper[4689]: I1210 12:17:01.791636 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:17:01Z","lastTransitionTime":"2025-12-10T12:17:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 12:17:01 crc kubenswrapper[4689]: I1210 12:17:01.893926 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:17:01 crc kubenswrapper[4689]: I1210 12:17:01.894009 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:17:01 crc kubenswrapper[4689]: I1210 12:17:01.894023 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:17:01 crc kubenswrapper[4689]: I1210 12:17:01.894044 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:17:01 crc kubenswrapper[4689]: I1210 12:17:01.894059 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:17:01Z","lastTransitionTime":"2025-12-10T12:17:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 12:17:01 crc kubenswrapper[4689]: I1210 12:17:01.997584 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:17:01 crc kubenswrapper[4689]: I1210 12:17:01.997626 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:17:01 crc kubenswrapper[4689]: I1210 12:17:01.997641 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:17:01 crc kubenswrapper[4689]: I1210 12:17:01.997658 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:17:01 crc kubenswrapper[4689]: I1210 12:17:01.997671 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:17:01Z","lastTransitionTime":"2025-12-10T12:17:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 12:17:02 crc kubenswrapper[4689]: I1210 12:17:02.100465 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:17:02 crc kubenswrapper[4689]: I1210 12:17:02.100546 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:17:02 crc kubenswrapper[4689]: I1210 12:17:02.100564 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:17:02 crc kubenswrapper[4689]: I1210 12:17:02.100588 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:17:02 crc kubenswrapper[4689]: I1210 12:17:02.100606 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:17:02Z","lastTransitionTime":"2025-12-10T12:17:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 12:17:02 crc kubenswrapper[4689]: I1210 12:17:02.203616 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:17:02 crc kubenswrapper[4689]: I1210 12:17:02.203692 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:17:02 crc kubenswrapper[4689]: I1210 12:17:02.203711 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:17:02 crc kubenswrapper[4689]: I1210 12:17:02.203735 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:17:02 crc kubenswrapper[4689]: I1210 12:17:02.203752 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:17:02Z","lastTransitionTime":"2025-12-10T12:17:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 12:17:02 crc kubenswrapper[4689]: I1210 12:17:02.306471 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:17:02 crc kubenswrapper[4689]: I1210 12:17:02.306525 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:17:02 crc kubenswrapper[4689]: I1210 12:17:02.306539 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:17:02 crc kubenswrapper[4689]: I1210 12:17:02.306558 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:17:02 crc kubenswrapper[4689]: I1210 12:17:02.306572 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:17:02Z","lastTransitionTime":"2025-12-10T12:17:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 12:17:02 crc kubenswrapper[4689]: I1210 12:17:02.408939 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:17:02 crc kubenswrapper[4689]: I1210 12:17:02.409013 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:17:02 crc kubenswrapper[4689]: I1210 12:17:02.409029 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:17:02 crc kubenswrapper[4689]: I1210 12:17:02.409049 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:17:02 crc kubenswrapper[4689]: I1210 12:17:02.409065 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:17:02Z","lastTransitionTime":"2025-12-10T12:17:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 12:17:02 crc kubenswrapper[4689]: I1210 12:17:02.497481 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 10 12:17:02 crc kubenswrapper[4689]: E1210 12:17:02.497589 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 10 12:17:02 crc kubenswrapper[4689]: I1210 12:17:02.513702 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:17:02 crc kubenswrapper[4689]: I1210 12:17:02.513774 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:17:02 crc kubenswrapper[4689]: I1210 12:17:02.513796 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:17:02 crc kubenswrapper[4689]: I1210 12:17:02.513825 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:17:02 crc kubenswrapper[4689]: I1210 12:17:02.513852 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:17:02Z","lastTransitionTime":"2025-12-10T12:17:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 12:17:02 crc kubenswrapper[4689]: I1210 12:17:02.513721 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06f8726511d5ea564b457eeb6de5b75275d78ee946595d8fbe37e575f3a1b4ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://879d2e122b3f677f755d79b93cd0abed60360a7b58bd685fbc229157c3358bbf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"las
tState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:17:02Z is after 2025-08-24T17:21:41Z" Dec 10 12:17:02 crc kubenswrapper[4689]: I1210 12:17:02.538262 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:57Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:17:02Z is after 2025-08-24T17:21:41Z"
Dec 10 12:17:02 crc kubenswrapper[4689]: I1210 12:17:02.555466 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-6ffqh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7908b49b-e3d2-4d30-95e0-467f5542d445\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34ba576e7b07f1a070450f2a750ed7f9e1ed7b3023f13340ffadbdbbc2f564bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:16:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8t2fq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5081b88bdb2e39135bca6cf70283aeb3f9541d28ab7336c5fa9d17f28496f7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:16:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8t2fq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T12:16:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-6ffqh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:17:02Z is after 2025-08-24T17:21:41Z"
Dec 10 12:17:02 crc kubenswrapper[4689]: I1210 12:17:02.567619 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81ec49dd709fb88fdd4a831be61a82471be37ad10b0dca1cb0e34468ef5e0e3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:16:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:17:02Z is after 2025-08-24T17:21:41Z"
Dec 10 12:17:02 crc kubenswrapper[4689]: I1210 12:17:02.577882 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-77s7q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9f25cd73-d88f-4d52-93b6-483589dc4ac4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8ffa3e3c20704e59130d99f82bf7c73982dadfcd06b8b666265324aeb9115ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:16:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kwnnt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T12:16:01Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-77s7q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:17:02Z is after 2025-08-24T17:21:41Z"
Dec 10 12:17:02 crc kubenswrapper[4689]: I1210 12:17:02.590773 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f310ccc9-5093-44fe-8271-32ffed1e4596\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be1e74c974ed4454098265113e97e5d5d27b2ddb6ff5d89cb891c1686290f8f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://14c34a55854008d99289ec6ff7df4ab7b8c60658569094a88e6740ceb28714d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://66af8611123a29e82941af3a1facb45f06c358ffb277bd037af034ae97364d56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b2e1ebfdbacde06d36ebe61a04c2977a980def50e80715de5f8a9a292e661915\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b2e1ebfdbacde06d36ebe61a04c2977a980def50e80715de5f8a9a292e661915\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T12:15:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T12:15:33Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T12:15:32Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:17:02Z is after 2025-08-24T17:21:41Z"
Dec 10 12:17:02 crc kubenswrapper[4689]: I1210 12:17:02.605000 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef7248d1db79267842514a13623e9e08f8e412e4af33a70b4fceafe7a9e43392\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:17:02Z is after 2025-08-24T17:21:41Z"
Dec 10 12:17:02 crc kubenswrapper[4689]: I1210 12:17:02.619193 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:17:02Z is after 2025-08-24T17:21:41Z"
Dec 10 12:17:02 crc kubenswrapper[4689]: I1210 12:17:02.619407 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 10 12:17:02 crc kubenswrapper[4689]: I1210 12:17:02.619469 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 10 12:17:02 crc kubenswrapper[4689]: I1210 12:17:02.619486 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 10 12:17:02 crc kubenswrapper[4689]: I1210 12:17:02.619515 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 10 12:17:02 crc kubenswrapper[4689]: I1210 12:17:02.619535 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:17:02Z","lastTransitionTime":"2025-12-10T12:17:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 10 12:17:02 crc kubenswrapper[4689]: I1210 12:17:02.632886 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-r6wmt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3713b4f8-2ee3-4078-859a-dca17076f9a6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60b9e7f27ed577b35c0ecc9f2f205e63e25d0fc8c22e4e49e13cd395ff14a83e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41b7cefca05f1180d687337ba2a408d0db12aad394a7e9619e004758e5d79ca0\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-10T12:16:47Z\\\",\\\"message\\\":\\\"2025-12-10T12:16:02+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_3a34b2bc-7e9b-4e1e-875d-36e662eb6643\\\\n2025-12-10T12:16:02+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_3a34b2bc-7e9b-4e1e-875d-36e662eb6643 to /host/opt/cni/bin/\\\\n2025-12-10T12:16:02Z [verbose] multus-daemon started\\\\n2025-12-10T12:16:02Z [verbose] Readiness Indicator file check\\\\n2025-12-10T12:16:47Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-10T12:16:02Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:16:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lmrn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T12:16:01Z\\\"}}\" for pod \"openshift-multus\"/\"multus-r6wmt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:17:02Z is after 2025-08-24T17:21:41Z"
Dec 10 12:17:02 crc kubenswrapper[4689]: I1210 12:17:02.643187 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-dk9hq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4aff4b91-a4af-46bd-93ba-a7c1356fc498\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e400561fd4aa4adc6f0733a71b55aff1a6163a71bf34147aeeb41f95001d8222\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:16:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c2rw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T12:16:04Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-dk9hq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:17:02Z is after 2025-08-24T17:21:41Z"
Dec 10 12:17:02 crc kubenswrapper[4689]: I1210 12:17:02.653736 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-2h8hs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a7c472c-c2dd-4a9d-aefb-48ac5cc196f8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kcnt5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kcnt5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T12:16:15Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-2h8hs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:17:02Z is after 2025-08-24T17:21:41Z"
Dec 10 12:17:02 crc kubenswrapper[4689]: I1210 12:17:02.665026 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8ade5d59-9027-4fd3-b3b1-d69c2554ce3d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ccc2bdf4b011cefbaad9e5fd43836a5009c5c9cc28b9079ba096473056d27a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://145b8baafed22a37ad3fbb0e4ab523fec92c697af35e9b7b35f0c18055f2adee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://145b8baafed22a37ad3fbb0e4ab523fec92c697af35e9b7b35f0c18055f2adee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T12:15:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T12:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T12:15:32Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:17:02Z is after 2025-08-24T17:21:41Z"
Dec 10 12:17:02 crc kubenswrapper[4689]: I1210 12:17:02.678954 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"948e2421-6bdf-45d9-b484-3d7cfcbeff5f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73fb5dc9c58a2a2027866853aecf404bcc2babdb9e5bbd3cf0efcb16f38ba351\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e167a00c3a4e40154ca58094cba41a896104fd81044158589ed4d7a06dba9e7d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://394f62bb6c4fa6d9b63f8404087ff96594e2e38240c6c7886ad71fbb7863681f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d3818f3ff6d23a4b5f19b82c3ce7c6a9a0275f981966b78d0f1d5977acfc5da\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cc1b41d65298f8986fa477bbe4462be91f61b46be4420fac49fe9b1cff91d4d3\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-10T12:15:57Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1210 12:15:46.775770 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1210 12:15:46.777515 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-772356655/tls.crt::/tmp/serving-cert-772356655/tls.key\\\\\\\"\\\\nI1210 12:15:55.300683 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1210 12:15:55.311834 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1210 12:15:55.311870 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1210 12:15:55.311907 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1210 12:15:55.311937 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1210 12:15:55.334823 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1210 12:15:55.334877 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1210 12:15:55.334889 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1210 12:15:55.334913 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1210 12:15:55.334923 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1210 12:15:55.334932 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1210 12:15:55.334948 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1210 12:15:55.335346 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1210 12:15:55.337241 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-10T12:15:36Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://63c6b92fa349dafbaaa4e2700676f8c1c53627fc703dba25853c7457bf7eb439\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:34Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9e2628add435f4ba7d9b92d1f4e9cf619c37e8dc954d6fce13ef72581ae775e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9e2628add435f4ba7d9b92d1f4e9cf619c37e8dc954d6fce13ef72581ae775e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T12:15:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T12:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T12:15:32Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:17:02Z is after 2025-08-24T17:21:41Z" Dec 10 12:17:02 crc kubenswrapper[4689]: I1210 12:17:02.692236 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"18bf8be2-3022-4c8b-a062-e12ddac113a8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://44524b486bd4c6b69eb1219605d0545eaa129605484dfbca87ca741ce71c2991\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://604acbb51bb890eb420955051c89aa5a8f2005ce63f1fe39c09c35dc3c8b4470\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d4addfb661bd4ba43817e6e5db29f30589839e5afe46a8f271e43c98920efdc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a68967b286ccc9c486e05b03201574bfa811ee68b0cbcbb2b0b1a379f5c1bab\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:15:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T12:15:32Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:17:02Z is after 2025-08-24T17:21:41Z" Dec 10 12:17:02 crc kubenswrapper[4689]: I1210 12:17:02.705364 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T12:15:57Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:17:02Z is after 2025-08-24T17:21:41Z" Dec 10 12:17:02 crc kubenswrapper[4689]: I1210 12:17:02.716949 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-db6zk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a41ebdcd-910f-4669-992d-296e1a92162b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://21eff13bf08e5631078b18a6e63eea07f7ae0881f4d340b92273d47e443fc71c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:16:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68btn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ffc5e2e1cb81aa530273fc5c44b4eb25b760dfffa203e312280e7a1c0150505\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\
\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:16:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68btn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T12:16:01Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-db6zk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:17:02Z is after 2025-08-24T17:21:41Z" Dec 10 12:17:02 crc kubenswrapper[4689]: I1210 12:17:02.722022 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:17:02 crc kubenswrapper[4689]: I1210 12:17:02.722051 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:17:02 crc kubenswrapper[4689]: I1210 12:17:02.722060 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:17:02 crc kubenswrapper[4689]: I1210 12:17:02.722075 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:17:02 crc kubenswrapper[4689]: I1210 12:17:02.722089 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:17:02Z","lastTransitionTime":"2025-12-10T12:17:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 12:17:02 crc kubenswrapper[4689]: I1210 12:17:02.730695 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-k7kbl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bf732b59-88ab-4673-9d0c-e479b9138b30\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6885c72d7b303bd0ad949b6b9f582f3c1a503d3ac4d1871ce164351cc5231233\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:16:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6jq2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b32c1925876e067185b141b3e107d582b5ae3b422965cded935fc67935907244\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b32c1925876e067185b141b3e107d582b5ae3b422965cded935fc67935907244\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T12:16:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T12:16:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6jq2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://56edfde6c25c67ed13c859224563601ca60ff86122c0ccfd0eafd009f56c2c6a\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://56edfde6c25c67ed13c859224563601ca60ff86122c0ccfd0eafd009f56c2c6a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T12:16:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T12:16:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6jq2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6972bdb1edc7d0216e984f43dd93319f5fbe91dbbf251fddcab234566c12694\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6972bdb1edc7d0216e984f43dd93319f5fbe91dbbf251fddcab234566c12694\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T12:16:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T12:16:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6jq2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://92ed0a4af61a7d20d96a8628e34b3b07cf93fb88bb8c67a2db58c140977d4b8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92ed0a4af61a7d20d96a8628e34b3b07cf93fb88bb8c67a2db58c140977d4b8f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T12:16:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T12:16:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"
mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6jq2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a00343d8419f3bd26ca5eebc37892fa34c893db822a472b2d16b79b9a886130\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3a00343d8419f3bd26ca5eebc37892fa34c893db822a472b2d16b79b9a886130\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T12:16:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T12:16:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6jq2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae1b0abfe7d118fce3906b0fb8609b89bb5a7d9a72e77c989248a37fbcaca589\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ae1b0abfe7d118fce3906b0fb8609b89bb5a7d9a72e77c989248a37fbcaca589\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T12:16:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T12:16:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6jq2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T12:16:01Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-k7kbl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:17:02Z is after 2025-08-24T17:21:41Z" Dec 10 12:17:02 crc kubenswrapper[4689]: I1210 12:17:02.752257 4689 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-ovn-kubernetes/ovnkube-node-d5s24" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b434fe8e-c4c2-4979-a9b6-8561523c2d9d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T12:16:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4dd21d5fefe31117dd9f13896400256a11840488886f8455dc177c0389e942a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:16:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgcnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://69f1e8960af0a51f61c14bd54dc3d3be7189911242699061b6a5e265bf23da21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:16:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgcnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6523d85b2b13f0025f328e66fc54d988a262d66575656c50d792bebc6a422301\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36c
dd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:16:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgcnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4faf578e1c2dcab798cd856befbd2dfc061f7e003ead2304d437041d5ee1dd07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:16:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgcnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eebe28cf16767ec011f1e571cdb5f6babf931f4444e92fccf2f1bb5aa5deeec9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:16:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgcnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://17d173f3388ef7471022163cbdf3e0a2454e42b1ed621b15512414504b2d9723\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-con
troller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:16:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgcnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb00374f8054c3041d91465c99271185be6fdb258a541ba83aed5bb1059178e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb00374f8054c3041d91465c99271185be6fdb258a541ba83aed5bb1059178e1\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-10T12:16:55Z\\\",\\\"message\\\":\\\"ent handler 7 for removal\\\\nI1210 12:16:55.463132 6753 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI1210 12:16:55.463140 6753 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI1210 12:16:55.463153 6753 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1210 12:16:55.463159 6753 handler.go:208] Removed *v1.Pod event handler 6\\\\nI1210 12:16:55.463168 6753 handler.go:208] Removed *v1.Node event handler 2\\\\nI1210 12:16:55.463179 6753 handler.go:208] Removed *v1.Node event handler 7\\\\nI1210 12:16:55.463187 6753 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1210 12:16:55.463206 6753 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1210 12:16:55.463370 6753 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1210 12:16:55.463655 6753 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1210 12:16:55.463685 6753 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1210 12:16:55.463715 6753 factory.go:656] Stopping watch factory\\\\nI1210 12:16:55.463727 6753 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1210 12:16:55.463735 6753 handler.go:208] Removed *v1.EgressFirewall ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-10T12:16:54Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-d5s24_openshift-ovn-kubernetes(b434fe8e-c4c2-4979-a9b6-8561523c2d9d)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgcnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e28b779b8a8693084ab0997decc53b3a8eba61bef24ac90ff251d915f086568c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T12:16:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgcnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5b9a4f871b33ab8d7dbb8148e8a118858ae0d529510a8aebda0a3d9db97c137\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b5b9a4f871b33ab8d7dbb8148e8a118858ae0d529510a8aebda0a3d9db97c137\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T12:16:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T12:16:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgcnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T12:16:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-d5s24\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:17:02Z is after 2025-08-24T17:21:41Z" Dec 10 12:17:02 crc kubenswrapper[4689]: I1210 12:17:02.824846 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:17:02 crc kubenswrapper[4689]: I1210 12:17:02.824882 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:17:02 crc kubenswrapper[4689]: I1210 12:17:02.824890 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:17:02 crc kubenswrapper[4689]: I1210 12:17:02.824903 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:17:02 crc kubenswrapper[4689]: I1210 12:17:02.824911 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:17:02Z","lastTransitionTime":"2025-12-10T12:17:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 12:17:02 crc kubenswrapper[4689]: I1210 12:17:02.927937 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:17:02 crc kubenswrapper[4689]: I1210 12:17:02.927997 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:17:02 crc kubenswrapper[4689]: I1210 12:17:02.928010 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:17:02 crc kubenswrapper[4689]: I1210 12:17:02.928028 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:17:02 crc kubenswrapper[4689]: I1210 12:17:02.928040 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:17:02Z","lastTransitionTime":"2025-12-10T12:17:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 12:17:03 crc kubenswrapper[4689]: I1210 12:17:03.030875 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:17:03 crc kubenswrapper[4689]: I1210 12:17:03.030934 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:17:03 crc kubenswrapper[4689]: I1210 12:17:03.030948 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:17:03 crc kubenswrapper[4689]: I1210 12:17:03.030984 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:17:03 crc kubenswrapper[4689]: I1210 12:17:03.030997 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:17:03Z","lastTransitionTime":"2025-12-10T12:17:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 12:17:03 crc kubenswrapper[4689]: I1210 12:17:03.133611 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:17:03 crc kubenswrapper[4689]: I1210 12:17:03.133658 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:17:03 crc kubenswrapper[4689]: I1210 12:17:03.133671 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:17:03 crc kubenswrapper[4689]: I1210 12:17:03.133687 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:17:03 crc kubenswrapper[4689]: I1210 12:17:03.133699 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:17:03Z","lastTransitionTime":"2025-12-10T12:17:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 12:17:03 crc kubenswrapper[4689]: I1210 12:17:03.241240 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:17:03 crc kubenswrapper[4689]: I1210 12:17:03.241410 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:17:03 crc kubenswrapper[4689]: I1210 12:17:03.241520 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:17:03 crc kubenswrapper[4689]: I1210 12:17:03.241559 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:17:03 crc kubenswrapper[4689]: I1210 12:17:03.241896 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:17:03Z","lastTransitionTime":"2025-12-10T12:17:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 12:17:03 crc kubenswrapper[4689]: I1210 12:17:03.346041 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:17:03 crc kubenswrapper[4689]: I1210 12:17:03.346089 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:17:03 crc kubenswrapper[4689]: I1210 12:17:03.346106 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:17:03 crc kubenswrapper[4689]: I1210 12:17:03.346129 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:17:03 crc kubenswrapper[4689]: I1210 12:17:03.346147 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:17:03Z","lastTransitionTime":"2025-12-10T12:17:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 12:17:03 crc kubenswrapper[4689]: I1210 12:17:03.450935 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:17:03 crc kubenswrapper[4689]: I1210 12:17:03.451023 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:17:03 crc kubenswrapper[4689]: I1210 12:17:03.451040 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:17:03 crc kubenswrapper[4689]: I1210 12:17:03.451066 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:17:03 crc kubenswrapper[4689]: I1210 12:17:03.451084 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:17:03Z","lastTransitionTime":"2025-12-10T12:17:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 12:17:03 crc kubenswrapper[4689]: I1210 12:17:03.497476 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 10 12:17:03 crc kubenswrapper[4689]: I1210 12:17:03.497542 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2h8hs" Dec 10 12:17:03 crc kubenswrapper[4689]: I1210 12:17:03.497615 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 10 12:17:03 crc kubenswrapper[4689]: E1210 12:17:03.497641 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 10 12:17:03 crc kubenswrapper[4689]: E1210 12:17:03.497824 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 10 12:17:03 crc kubenswrapper[4689]: E1210 12:17:03.497886 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2h8hs" podUID="3a7c472c-c2dd-4a9d-aefb-48ac5cc196f8" Dec 10 12:17:03 crc kubenswrapper[4689]: I1210 12:17:03.554499 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:17:03 crc kubenswrapper[4689]: I1210 12:17:03.554546 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:17:03 crc kubenswrapper[4689]: I1210 12:17:03.554558 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:17:03 crc kubenswrapper[4689]: I1210 12:17:03.554578 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:17:03 crc kubenswrapper[4689]: I1210 12:17:03.554593 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:17:03Z","lastTransitionTime":"2025-12-10T12:17:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 12:17:03 crc kubenswrapper[4689]: I1210 12:17:03.658027 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:17:03 crc kubenswrapper[4689]: I1210 12:17:03.658101 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:17:03 crc kubenswrapper[4689]: I1210 12:17:03.658128 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:17:03 crc kubenswrapper[4689]: I1210 12:17:03.658161 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:17:03 crc kubenswrapper[4689]: I1210 12:17:03.658182 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:17:03Z","lastTransitionTime":"2025-12-10T12:17:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 12:17:03 crc kubenswrapper[4689]: I1210 12:17:03.714503 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:17:03 crc kubenswrapper[4689]: I1210 12:17:03.714588 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:17:03 crc kubenswrapper[4689]: I1210 12:17:03.714608 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:17:03 crc kubenswrapper[4689]: I1210 12:17:03.714640 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:17:03 crc kubenswrapper[4689]: I1210 12:17:03.714672 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:17:03Z","lastTransitionTime":"2025-12-10T12:17:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 12:17:03 crc kubenswrapper[4689]: E1210 12:17:03.736763 4689 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T12:17:03Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T12:17:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T12:17:03Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T12:17:03Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T12:17:03Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T12:17:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T12:17:03Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T12:17:03Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"488f3058-98a0-4e39-b016-529d0c992401\\\",\\\"systemUUID\\\":\\\"41a6821f-a04f-4a5d-a8e5-790b0745d6fd\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:17:03Z is after 2025-08-24T17:21:41Z" Dec 10 12:17:03 crc kubenswrapper[4689]: I1210 12:17:03.743172 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:17:03 crc kubenswrapper[4689]: I1210 12:17:03.743214 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 10 12:17:03 crc kubenswrapper[4689]: I1210 12:17:03.743226 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:17:03 crc kubenswrapper[4689]: I1210 12:17:03.743243 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:17:03 crc kubenswrapper[4689]: I1210 12:17:03.743256 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:17:03Z","lastTransitionTime":"2025-12-10T12:17:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 12:17:03 crc kubenswrapper[4689]: E1210 12:17:03.763961 4689 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T12:17:03Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T12:17:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T12:17:03Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T12:17:03Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T12:17:03Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T12:17:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T12:17:03Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T12:17:03Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"488f3058-98a0-4e39-b016-529d0c992401\\\",\\\"systemUUID\\\":\\\"41a6821f-a04f-4a5d-a8e5-790b0745d6fd\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:17:03Z is after 2025-08-24T17:21:41Z" Dec 10 12:17:03 crc kubenswrapper[4689]: I1210 12:17:03.769965 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:17:03 crc kubenswrapper[4689]: I1210 12:17:03.770030 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 10 12:17:03 crc kubenswrapper[4689]: I1210 12:17:03.770043 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:17:03 crc kubenswrapper[4689]: I1210 12:17:03.770061 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:17:03 crc kubenswrapper[4689]: I1210 12:17:03.770072 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:17:03Z","lastTransitionTime":"2025-12-10T12:17:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 12:17:03 crc kubenswrapper[4689]: E1210 12:17:03.791841 4689 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T12:17:03Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T12:17:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T12:17:03Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T12:17:03Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T12:17:03Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T12:17:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T12:17:03Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T12:17:03Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"488f3058-98a0-4e39-b016-529d0c992401\\\",\\\"systemUUID\\\":\\\"41a6821f-a04f-4a5d-a8e5-790b0745d6fd\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:17:03Z is after 2025-08-24T17:21:41Z" Dec 10 12:17:03 crc kubenswrapper[4689]: I1210 12:17:03.797675 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:17:03 crc kubenswrapper[4689]: I1210 12:17:03.797763 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 10 12:17:03 crc kubenswrapper[4689]: I1210 12:17:03.797777 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:17:03 crc kubenswrapper[4689]: I1210 12:17:03.797810 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:17:03 crc kubenswrapper[4689]: I1210 12:17:03.797825 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:17:03Z","lastTransitionTime":"2025-12-10T12:17:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 12:17:03 crc kubenswrapper[4689]: E1210 12:17:03.813574 4689 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T12:17:03Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T12:17:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T12:17:03Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T12:17:03Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T12:17:03Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T12:17:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T12:17:03Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T12:17:03Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"488f3058-98a0-4e39-b016-529d0c992401\\\",\\\"systemUUID\\\":\\\"41a6821f-a04f-4a5d-a8e5-790b0745d6fd\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:17:03Z is after 2025-08-24T17:21:41Z" Dec 10 12:17:03 crc kubenswrapper[4689]: I1210 12:17:03.820447 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:17:03 crc kubenswrapper[4689]: I1210 12:17:03.820504 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 10 12:17:03 crc kubenswrapper[4689]: I1210 12:17:03.820523 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:17:03 crc kubenswrapper[4689]: I1210 12:17:03.820550 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:17:03 crc kubenswrapper[4689]: I1210 12:17:03.820565 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:17:03Z","lastTransitionTime":"2025-12-10T12:17:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 12:17:03 crc kubenswrapper[4689]: E1210 12:17:03.836050 4689 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T12:17:03Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T12:17:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T12:17:03Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T12:17:03Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T12:17:03Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T12:17:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T12:17:03Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T12:17:03Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"488f3058-98a0-4e39-b016-529d0c992401\\\",\\\"systemUUID\\\":\\\"41a6821f-a04f-4a5d-a8e5-790b0745d6fd\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T12:17:03Z is after 2025-08-24T17:21:41Z"
Dec 10 12:17:03 crc kubenswrapper[4689]: E1210 12:17:03.836218 4689 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count"
Dec 10 12:17:03 crc kubenswrapper[4689]: I1210 12:17:03.838332 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc"
event="NodeHasSufficientMemory" Dec 10 12:17:03 crc kubenswrapper[4689]: I1210 12:17:03.838371 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:17:03 crc kubenswrapper[4689]: I1210 12:17:03.838384 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:17:03 crc kubenswrapper[4689]: I1210 12:17:03.838404 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:17:03 crc kubenswrapper[4689]: I1210 12:17:03.838418 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:17:03Z","lastTransitionTime":"2025-12-10T12:17:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 12:17:03 crc kubenswrapper[4689]: I1210 12:17:03.941108 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:17:03 crc kubenswrapper[4689]: I1210 12:17:03.941169 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:17:03 crc kubenswrapper[4689]: I1210 12:17:03.941184 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:17:03 crc kubenswrapper[4689]: I1210 12:17:03.941203 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:17:03 crc kubenswrapper[4689]: I1210 12:17:03.941220 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:17:03Z","lastTransitionTime":"2025-12-10T12:17:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 12:17:04 crc kubenswrapper[4689]: I1210 12:17:04.043529 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:17:04 crc kubenswrapper[4689]: I1210 12:17:04.043571 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:17:04 crc kubenswrapper[4689]: I1210 12:17:04.043582 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:17:04 crc kubenswrapper[4689]: I1210 12:17:04.043598 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:17:04 crc kubenswrapper[4689]: I1210 12:17:04.043609 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:17:04Z","lastTransitionTime":"2025-12-10T12:17:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 12:17:04 crc kubenswrapper[4689]: I1210 12:17:04.146559 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:17:04 crc kubenswrapper[4689]: I1210 12:17:04.146648 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:17:04 crc kubenswrapper[4689]: I1210 12:17:04.146680 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:17:04 crc kubenswrapper[4689]: I1210 12:17:04.146711 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:17:04 crc kubenswrapper[4689]: I1210 12:17:04.146732 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:17:04Z","lastTransitionTime":"2025-12-10T12:17:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 12:17:04 crc kubenswrapper[4689]: I1210 12:17:04.249470 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:17:04 crc kubenswrapper[4689]: I1210 12:17:04.249530 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:17:04 crc kubenswrapper[4689]: I1210 12:17:04.249547 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:17:04 crc kubenswrapper[4689]: I1210 12:17:04.249570 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:17:04 crc kubenswrapper[4689]: I1210 12:17:04.249589 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:17:04Z","lastTransitionTime":"2025-12-10T12:17:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 12:17:04 crc kubenswrapper[4689]: I1210 12:17:04.352059 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:17:04 crc kubenswrapper[4689]: I1210 12:17:04.352147 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:17:04 crc kubenswrapper[4689]: I1210 12:17:04.352173 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:17:04 crc kubenswrapper[4689]: I1210 12:17:04.352203 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:17:04 crc kubenswrapper[4689]: I1210 12:17:04.352225 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:17:04Z","lastTransitionTime":"2025-12-10T12:17:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 12:17:04 crc kubenswrapper[4689]: I1210 12:17:04.457807 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:17:04 crc kubenswrapper[4689]: I1210 12:17:04.457859 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:17:04 crc kubenswrapper[4689]: I1210 12:17:04.457871 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:17:04 crc kubenswrapper[4689]: I1210 12:17:04.457891 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:17:04 crc kubenswrapper[4689]: I1210 12:17:04.457907 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:17:04Z","lastTransitionTime":"2025-12-10T12:17:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 12:17:04 crc kubenswrapper[4689]: I1210 12:17:04.497316 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 10 12:17:04 crc kubenswrapper[4689]: E1210 12:17:04.497754 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 10 12:17:04 crc kubenswrapper[4689]: I1210 12:17:04.560726 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:17:04 crc kubenswrapper[4689]: I1210 12:17:04.560825 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:17:04 crc kubenswrapper[4689]: I1210 12:17:04.560838 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:17:04 crc kubenswrapper[4689]: I1210 12:17:04.560855 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:17:04 crc kubenswrapper[4689]: I1210 12:17:04.560867 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:17:04Z","lastTransitionTime":"2025-12-10T12:17:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 12:17:04 crc kubenswrapper[4689]: I1210 12:17:04.663369 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:17:04 crc kubenswrapper[4689]: I1210 12:17:04.663424 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:17:04 crc kubenswrapper[4689]: I1210 12:17:04.663438 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:17:04 crc kubenswrapper[4689]: I1210 12:17:04.663458 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:17:04 crc kubenswrapper[4689]: I1210 12:17:04.663473 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:17:04Z","lastTransitionTime":"2025-12-10T12:17:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 12:17:04 crc kubenswrapper[4689]: I1210 12:17:04.765603 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:17:04 crc kubenswrapper[4689]: I1210 12:17:04.765655 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:17:04 crc kubenswrapper[4689]: I1210 12:17:04.765670 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:17:04 crc kubenswrapper[4689]: I1210 12:17:04.765689 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:17:04 crc kubenswrapper[4689]: I1210 12:17:04.765705 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:17:04Z","lastTransitionTime":"2025-12-10T12:17:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 12:17:04 crc kubenswrapper[4689]: I1210 12:17:04.868946 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:17:04 crc kubenswrapper[4689]: I1210 12:17:04.869059 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:17:04 crc kubenswrapper[4689]: I1210 12:17:04.869082 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:17:04 crc kubenswrapper[4689]: I1210 12:17:04.869110 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:17:04 crc kubenswrapper[4689]: I1210 12:17:04.869132 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:17:04Z","lastTransitionTime":"2025-12-10T12:17:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 12:17:04 crc kubenswrapper[4689]: I1210 12:17:04.971867 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:17:04 crc kubenswrapper[4689]: I1210 12:17:04.971925 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:17:04 crc kubenswrapper[4689]: I1210 12:17:04.971942 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:17:04 crc kubenswrapper[4689]: I1210 12:17:04.971964 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:17:04 crc kubenswrapper[4689]: I1210 12:17:04.972021 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:17:04Z","lastTransitionTime":"2025-12-10T12:17:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 12:17:05 crc kubenswrapper[4689]: I1210 12:17:05.075838 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:17:05 crc kubenswrapper[4689]: I1210 12:17:05.075898 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:17:05 crc kubenswrapper[4689]: I1210 12:17:05.075915 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:17:05 crc kubenswrapper[4689]: I1210 12:17:05.075938 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:17:05 crc kubenswrapper[4689]: I1210 12:17:05.075954 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:17:05Z","lastTransitionTime":"2025-12-10T12:17:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 12:17:05 crc kubenswrapper[4689]: I1210 12:17:05.179090 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:17:05 crc kubenswrapper[4689]: I1210 12:17:05.179167 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:17:05 crc kubenswrapper[4689]: I1210 12:17:05.179194 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:17:05 crc kubenswrapper[4689]: I1210 12:17:05.179223 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:17:05 crc kubenswrapper[4689]: I1210 12:17:05.179247 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:17:05Z","lastTransitionTime":"2025-12-10T12:17:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 12:17:05 crc kubenswrapper[4689]: I1210 12:17:05.281459 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:17:05 crc kubenswrapper[4689]: I1210 12:17:05.281499 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:17:05 crc kubenswrapper[4689]: I1210 12:17:05.281518 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:17:05 crc kubenswrapper[4689]: I1210 12:17:05.281537 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:17:05 crc kubenswrapper[4689]: I1210 12:17:05.281551 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:17:05Z","lastTransitionTime":"2025-12-10T12:17:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 12:17:05 crc kubenswrapper[4689]: I1210 12:17:05.382965 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:17:05 crc kubenswrapper[4689]: I1210 12:17:05.383074 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:17:05 crc kubenswrapper[4689]: I1210 12:17:05.383086 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:17:05 crc kubenswrapper[4689]: I1210 12:17:05.383101 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:17:05 crc kubenswrapper[4689]: I1210 12:17:05.383112 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:17:05Z","lastTransitionTime":"2025-12-10T12:17:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 12:17:05 crc kubenswrapper[4689]: I1210 12:17:05.486194 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:17:05 crc kubenswrapper[4689]: I1210 12:17:05.486232 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:17:05 crc kubenswrapper[4689]: I1210 12:17:05.486242 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:17:05 crc kubenswrapper[4689]: I1210 12:17:05.486258 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:17:05 crc kubenswrapper[4689]: I1210 12:17:05.486269 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:17:05Z","lastTransitionTime":"2025-12-10T12:17:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 12:17:05 crc kubenswrapper[4689]: I1210 12:17:05.497770 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 10 12:17:05 crc kubenswrapper[4689]: I1210 12:17:05.497838 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 10 12:17:05 crc kubenswrapper[4689]: I1210 12:17:05.497783 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2h8hs" Dec 10 12:17:05 crc kubenswrapper[4689]: E1210 12:17:05.498028 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 10 12:17:05 crc kubenswrapper[4689]: E1210 12:17:05.498214 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2h8hs" podUID="3a7c472c-c2dd-4a9d-aefb-48ac5cc196f8" Dec 10 12:17:05 crc kubenswrapper[4689]: E1210 12:17:05.498355 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 10 12:17:05 crc kubenswrapper[4689]: I1210 12:17:05.589868 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:17:05 crc kubenswrapper[4689]: I1210 12:17:05.589927 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:17:05 crc kubenswrapper[4689]: I1210 12:17:05.589945 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:17:05 crc kubenswrapper[4689]: I1210 12:17:05.590014 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:17:05 crc kubenswrapper[4689]: I1210 12:17:05.590040 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:17:05Z","lastTransitionTime":"2025-12-10T12:17:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 12:17:05 crc kubenswrapper[4689]: I1210 12:17:05.693238 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:17:05 crc kubenswrapper[4689]: I1210 12:17:05.693307 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:17:05 crc kubenswrapper[4689]: I1210 12:17:05.693331 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:17:05 crc kubenswrapper[4689]: I1210 12:17:05.693360 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:17:05 crc kubenswrapper[4689]: I1210 12:17:05.693382 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:17:05Z","lastTransitionTime":"2025-12-10T12:17:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 12:17:05 crc kubenswrapper[4689]: I1210 12:17:05.795901 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:17:05 crc kubenswrapper[4689]: I1210 12:17:05.795929 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:17:05 crc kubenswrapper[4689]: I1210 12:17:05.795941 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:17:05 crc kubenswrapper[4689]: I1210 12:17:05.795957 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:17:05 crc kubenswrapper[4689]: I1210 12:17:05.795987 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:17:05Z","lastTransitionTime":"2025-12-10T12:17:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 12:17:05 crc kubenswrapper[4689]: I1210 12:17:05.898015 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:17:05 crc kubenswrapper[4689]: I1210 12:17:05.898243 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:17:05 crc kubenswrapper[4689]: I1210 12:17:05.898328 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:17:05 crc kubenswrapper[4689]: I1210 12:17:05.898447 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:17:05 crc kubenswrapper[4689]: I1210 12:17:05.898548 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:17:05Z","lastTransitionTime":"2025-12-10T12:17:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 12:17:06 crc kubenswrapper[4689]: I1210 12:17:06.000945 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:17:06 crc kubenswrapper[4689]: I1210 12:17:06.001005 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:17:06 crc kubenswrapper[4689]: I1210 12:17:06.001016 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:17:06 crc kubenswrapper[4689]: I1210 12:17:06.001033 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:17:06 crc kubenswrapper[4689]: I1210 12:17:06.001044 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:17:06Z","lastTransitionTime":"2025-12-10T12:17:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 12:17:06 crc kubenswrapper[4689]: I1210 12:17:06.104694 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:17:06 crc kubenswrapper[4689]: I1210 12:17:06.104742 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:17:06 crc kubenswrapper[4689]: I1210 12:17:06.104758 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:17:06 crc kubenswrapper[4689]: I1210 12:17:06.104798 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:17:06 crc kubenswrapper[4689]: I1210 12:17:06.104815 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:17:06Z","lastTransitionTime":"2025-12-10T12:17:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 12:17:06 crc kubenswrapper[4689]: I1210 12:17:06.206461 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:17:06 crc kubenswrapper[4689]: I1210 12:17:06.206506 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:17:06 crc kubenswrapper[4689]: I1210 12:17:06.206522 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:17:06 crc kubenswrapper[4689]: I1210 12:17:06.206543 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:17:06 crc kubenswrapper[4689]: I1210 12:17:06.206560 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:17:06Z","lastTransitionTime":"2025-12-10T12:17:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 12:17:06 crc kubenswrapper[4689]: I1210 12:17:06.309025 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:17:06 crc kubenswrapper[4689]: I1210 12:17:06.309099 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:17:06 crc kubenswrapper[4689]: I1210 12:17:06.309120 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:17:06 crc kubenswrapper[4689]: I1210 12:17:06.309150 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:17:06 crc kubenswrapper[4689]: I1210 12:17:06.309166 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:17:06Z","lastTransitionTime":"2025-12-10T12:17:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 12:17:06 crc kubenswrapper[4689]: I1210 12:17:06.412320 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:17:06 crc kubenswrapper[4689]: I1210 12:17:06.412375 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:17:06 crc kubenswrapper[4689]: I1210 12:17:06.412392 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:17:06 crc kubenswrapper[4689]: I1210 12:17:06.412414 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:17:06 crc kubenswrapper[4689]: I1210 12:17:06.412433 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:17:06Z","lastTransitionTime":"2025-12-10T12:17:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 12:17:06 crc kubenswrapper[4689]: I1210 12:17:06.498002 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 10 12:17:06 crc kubenswrapper[4689]: E1210 12:17:06.498181 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 10 12:17:06 crc kubenswrapper[4689]: I1210 12:17:06.515012 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:17:06 crc kubenswrapper[4689]: I1210 12:17:06.515087 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:17:06 crc kubenswrapper[4689]: I1210 12:17:06.515111 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:17:06 crc kubenswrapper[4689]: I1210 12:17:06.515140 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:17:06 crc kubenswrapper[4689]: I1210 12:17:06.515163 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:17:06Z","lastTransitionTime":"2025-12-10T12:17:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 12:17:06 crc kubenswrapper[4689]: I1210 12:17:06.618371 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:17:06 crc kubenswrapper[4689]: I1210 12:17:06.618425 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:17:06 crc kubenswrapper[4689]: I1210 12:17:06.618440 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:17:06 crc kubenswrapper[4689]: I1210 12:17:06.618457 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:17:06 crc kubenswrapper[4689]: I1210 12:17:06.618472 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:17:06Z","lastTransitionTime":"2025-12-10T12:17:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 12:17:06 crc kubenswrapper[4689]: I1210 12:17:06.721393 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:17:06 crc kubenswrapper[4689]: I1210 12:17:06.721452 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:17:06 crc kubenswrapper[4689]: I1210 12:17:06.721472 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:17:06 crc kubenswrapper[4689]: I1210 12:17:06.721501 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:17:06 crc kubenswrapper[4689]: I1210 12:17:06.721522 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:17:06Z","lastTransitionTime":"2025-12-10T12:17:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 12:17:06 crc kubenswrapper[4689]: I1210 12:17:06.824219 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:17:06 crc kubenswrapper[4689]: I1210 12:17:06.824262 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:17:06 crc kubenswrapper[4689]: I1210 12:17:06.824275 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:17:06 crc kubenswrapper[4689]: I1210 12:17:06.824294 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:17:06 crc kubenswrapper[4689]: I1210 12:17:06.824307 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:17:06Z","lastTransitionTime":"2025-12-10T12:17:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 12:17:06 crc kubenswrapper[4689]: I1210 12:17:06.926467 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:17:06 crc kubenswrapper[4689]: I1210 12:17:06.926522 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:17:06 crc kubenswrapper[4689]: I1210 12:17:06.926539 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:17:06 crc kubenswrapper[4689]: I1210 12:17:06.926559 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:17:06 crc kubenswrapper[4689]: I1210 12:17:06.926575 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:17:06Z","lastTransitionTime":"2025-12-10T12:17:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 12:17:07 crc kubenswrapper[4689]: I1210 12:17:07.030855 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:17:07 crc kubenswrapper[4689]: I1210 12:17:07.030884 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:17:07 crc kubenswrapper[4689]: I1210 12:17:07.030895 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:17:07 crc kubenswrapper[4689]: I1210 12:17:07.030912 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:17:07 crc kubenswrapper[4689]: I1210 12:17:07.030922 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:17:07Z","lastTransitionTime":"2025-12-10T12:17:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 12:17:07 crc kubenswrapper[4689]: I1210 12:17:07.133818 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:17:07 crc kubenswrapper[4689]: I1210 12:17:07.133928 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:17:07 crc kubenswrapper[4689]: I1210 12:17:07.133949 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:17:07 crc kubenswrapper[4689]: I1210 12:17:07.133989 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:17:07 crc kubenswrapper[4689]: I1210 12:17:07.134004 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:17:07Z","lastTransitionTime":"2025-12-10T12:17:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 12:17:07 crc kubenswrapper[4689]: I1210 12:17:07.236631 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:17:07 crc kubenswrapper[4689]: I1210 12:17:07.236671 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:17:07 crc kubenswrapper[4689]: I1210 12:17:07.236682 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:17:07 crc kubenswrapper[4689]: I1210 12:17:07.236697 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:17:07 crc kubenswrapper[4689]: I1210 12:17:07.236709 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:17:07Z","lastTransitionTime":"2025-12-10T12:17:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 12:17:07 crc kubenswrapper[4689]: I1210 12:17:07.339618 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:17:07 crc kubenswrapper[4689]: I1210 12:17:07.339659 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:17:07 crc kubenswrapper[4689]: I1210 12:17:07.339670 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:17:07 crc kubenswrapper[4689]: I1210 12:17:07.339686 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:17:07 crc kubenswrapper[4689]: I1210 12:17:07.339696 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:17:07Z","lastTransitionTime":"2025-12-10T12:17:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 12:17:07 crc kubenswrapper[4689]: I1210 12:17:07.443571 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:17:07 crc kubenswrapper[4689]: I1210 12:17:07.444081 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:17:07 crc kubenswrapper[4689]: I1210 12:17:07.444140 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:17:07 crc kubenswrapper[4689]: I1210 12:17:07.444165 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:17:07 crc kubenswrapper[4689]: I1210 12:17:07.444177 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:17:07Z","lastTransitionTime":"2025-12-10T12:17:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 12:17:07 crc kubenswrapper[4689]: I1210 12:17:07.497788 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2h8hs" Dec 10 12:17:07 crc kubenswrapper[4689]: I1210 12:17:07.497870 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 10 12:17:07 crc kubenswrapper[4689]: I1210 12:17:07.497908 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 10 12:17:07 crc kubenswrapper[4689]: E1210 12:17:07.498102 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-2h8hs" podUID="3a7c472c-c2dd-4a9d-aefb-48ac5cc196f8" Dec 10 12:17:07 crc kubenswrapper[4689]: E1210 12:17:07.498201 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 10 12:17:07 crc kubenswrapper[4689]: E1210 12:17:07.498301 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 10 12:17:07 crc kubenswrapper[4689]: I1210 12:17:07.547250 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:17:07 crc kubenswrapper[4689]: I1210 12:17:07.547334 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:17:07 crc kubenswrapper[4689]: I1210 12:17:07.547358 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:17:07 crc kubenswrapper[4689]: I1210 12:17:07.547387 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:17:07 crc kubenswrapper[4689]: I1210 12:17:07.547404 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:17:07Z","lastTransitionTime":"2025-12-10T12:17:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 12:17:07 crc kubenswrapper[4689]: I1210 12:17:07.650676 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:17:07 crc kubenswrapper[4689]: I1210 12:17:07.650758 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:17:07 crc kubenswrapper[4689]: I1210 12:17:07.650781 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:17:07 crc kubenswrapper[4689]: I1210 12:17:07.650811 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:17:07 crc kubenswrapper[4689]: I1210 12:17:07.650833 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:17:07Z","lastTransitionTime":"2025-12-10T12:17:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 12:17:07 crc kubenswrapper[4689]: I1210 12:17:07.753636 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:17:07 crc kubenswrapper[4689]: I1210 12:17:07.753693 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:17:07 crc kubenswrapper[4689]: I1210 12:17:07.753702 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:17:07 crc kubenswrapper[4689]: I1210 12:17:07.753719 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:17:07 crc kubenswrapper[4689]: I1210 12:17:07.753733 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:17:07Z","lastTransitionTime":"2025-12-10T12:17:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 12:17:07 crc kubenswrapper[4689]: I1210 12:17:07.856658 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:17:07 crc kubenswrapper[4689]: I1210 12:17:07.856717 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:17:07 crc kubenswrapper[4689]: I1210 12:17:07.856738 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:17:07 crc kubenswrapper[4689]: I1210 12:17:07.856766 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:17:07 crc kubenswrapper[4689]: I1210 12:17:07.856785 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:17:07Z","lastTransitionTime":"2025-12-10T12:17:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 12:17:07 crc kubenswrapper[4689]: I1210 12:17:07.960064 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:17:07 crc kubenswrapper[4689]: I1210 12:17:07.960141 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:17:07 crc kubenswrapper[4689]: I1210 12:17:07.960160 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:17:07 crc kubenswrapper[4689]: I1210 12:17:07.960184 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:17:07 crc kubenswrapper[4689]: I1210 12:17:07.960202 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:17:07Z","lastTransitionTime":"2025-12-10T12:17:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 12:17:08 crc kubenswrapper[4689]: I1210 12:17:08.062548 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:17:08 crc kubenswrapper[4689]: I1210 12:17:08.062610 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:17:08 crc kubenswrapper[4689]: I1210 12:17:08.062632 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:17:08 crc kubenswrapper[4689]: I1210 12:17:08.062660 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:17:08 crc kubenswrapper[4689]: I1210 12:17:08.062682 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:17:08Z","lastTransitionTime":"2025-12-10T12:17:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 12:17:08 crc kubenswrapper[4689]: I1210 12:17:08.165810 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:17:08 crc kubenswrapper[4689]: I1210 12:17:08.165861 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:17:08 crc kubenswrapper[4689]: I1210 12:17:08.165873 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:17:08 crc kubenswrapper[4689]: I1210 12:17:08.165891 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:17:08 crc kubenswrapper[4689]: I1210 12:17:08.165903 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:17:08Z","lastTransitionTime":"2025-12-10T12:17:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 12:17:08 crc kubenswrapper[4689]: I1210 12:17:08.269394 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:17:08 crc kubenswrapper[4689]: I1210 12:17:08.269436 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:17:08 crc kubenswrapper[4689]: I1210 12:17:08.269444 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:17:08 crc kubenswrapper[4689]: I1210 12:17:08.269464 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:17:08 crc kubenswrapper[4689]: I1210 12:17:08.269473 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:17:08Z","lastTransitionTime":"2025-12-10T12:17:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 12:17:08 crc kubenswrapper[4689]: I1210 12:17:08.372133 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:17:08 crc kubenswrapper[4689]: I1210 12:17:08.372170 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:17:08 crc kubenswrapper[4689]: I1210 12:17:08.372181 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:17:08 crc kubenswrapper[4689]: I1210 12:17:08.372200 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:17:08 crc kubenswrapper[4689]: I1210 12:17:08.372211 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:17:08Z","lastTransitionTime":"2025-12-10T12:17:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 12:17:08 crc kubenswrapper[4689]: I1210 12:17:08.475015 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:17:08 crc kubenswrapper[4689]: I1210 12:17:08.475061 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:17:08 crc kubenswrapper[4689]: I1210 12:17:08.475078 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:17:08 crc kubenswrapper[4689]: I1210 12:17:08.475099 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:17:08 crc kubenswrapper[4689]: I1210 12:17:08.475114 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:17:08Z","lastTransitionTime":"2025-12-10T12:17:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 12:17:08 crc kubenswrapper[4689]: I1210 12:17:08.498139 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 10 12:17:08 crc kubenswrapper[4689]: E1210 12:17:08.498488 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 10 12:17:08 crc kubenswrapper[4689]: I1210 12:17:08.499281 4689 scope.go:117] "RemoveContainer" containerID="bb00374f8054c3041d91465c99271185be6fdb258a541ba83aed5bb1059178e1" Dec 10 12:17:08 crc kubenswrapper[4689]: E1210 12:17:08.499520 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-d5s24_openshift-ovn-kubernetes(b434fe8e-c4c2-4979-a9b6-8561523c2d9d)\"" pod="openshift-ovn-kubernetes/ovnkube-node-d5s24" podUID="b434fe8e-c4c2-4979-a9b6-8561523c2d9d" Dec 10 12:17:08 crc kubenswrapper[4689]: I1210 12:17:08.577740 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:17:08 crc kubenswrapper[4689]: I1210 12:17:08.577794 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:17:08 crc kubenswrapper[4689]: I1210 12:17:08.577806 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:17:08 crc kubenswrapper[4689]: I1210 12:17:08.577827 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:17:08 crc kubenswrapper[4689]: I1210 12:17:08.577840 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:17:08Z","lastTransitionTime":"2025-12-10T12:17:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 12:17:08 crc kubenswrapper[4689]: I1210 12:17:08.681321 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:17:08 crc kubenswrapper[4689]: I1210 12:17:08.681386 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:17:08 crc kubenswrapper[4689]: I1210 12:17:08.681404 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:17:08 crc kubenswrapper[4689]: I1210 12:17:08.681433 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:17:08 crc kubenswrapper[4689]: I1210 12:17:08.681450 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:17:08Z","lastTransitionTime":"2025-12-10T12:17:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 12:17:08 crc kubenswrapper[4689]: I1210 12:17:08.784271 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:17:08 crc kubenswrapper[4689]: I1210 12:17:08.784317 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:17:08 crc kubenswrapper[4689]: I1210 12:17:08.784332 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:17:08 crc kubenswrapper[4689]: I1210 12:17:08.784353 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:17:08 crc kubenswrapper[4689]: I1210 12:17:08.784367 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:17:08Z","lastTransitionTime":"2025-12-10T12:17:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 12:17:08 crc kubenswrapper[4689]: I1210 12:17:08.885901 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:17:08 crc kubenswrapper[4689]: I1210 12:17:08.885943 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:17:08 crc kubenswrapper[4689]: I1210 12:17:08.885953 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:17:08 crc kubenswrapper[4689]: I1210 12:17:08.885967 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:17:08 crc kubenswrapper[4689]: I1210 12:17:08.886004 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:17:08Z","lastTransitionTime":"2025-12-10T12:17:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 12:17:08 crc kubenswrapper[4689]: I1210 12:17:08.988315 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:17:08 crc kubenswrapper[4689]: I1210 12:17:08.988371 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:17:08 crc kubenswrapper[4689]: I1210 12:17:08.988381 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:17:08 crc kubenswrapper[4689]: I1210 12:17:08.988420 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:17:08 crc kubenswrapper[4689]: I1210 12:17:08.988436 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:17:08Z","lastTransitionTime":"2025-12-10T12:17:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 12:17:09 crc kubenswrapper[4689]: I1210 12:17:09.090543 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:17:09 crc kubenswrapper[4689]: I1210 12:17:09.090609 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:17:09 crc kubenswrapper[4689]: I1210 12:17:09.090632 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:17:09 crc kubenswrapper[4689]: I1210 12:17:09.090661 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:17:09 crc kubenswrapper[4689]: I1210 12:17:09.090684 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:17:09Z","lastTransitionTime":"2025-12-10T12:17:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 12:17:09 crc kubenswrapper[4689]: I1210 12:17:09.193307 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:17:09 crc kubenswrapper[4689]: I1210 12:17:09.193372 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:17:09 crc kubenswrapper[4689]: I1210 12:17:09.193399 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:17:09 crc kubenswrapper[4689]: I1210 12:17:09.193429 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:17:09 crc kubenswrapper[4689]: I1210 12:17:09.193451 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:17:09Z","lastTransitionTime":"2025-12-10T12:17:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 12:17:09 crc kubenswrapper[4689]: I1210 12:17:09.296613 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:17:09 crc kubenswrapper[4689]: I1210 12:17:09.296668 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:17:09 crc kubenswrapper[4689]: I1210 12:17:09.296683 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:17:09 crc kubenswrapper[4689]: I1210 12:17:09.296703 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:17:09 crc kubenswrapper[4689]: I1210 12:17:09.296718 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:17:09Z","lastTransitionTime":"2025-12-10T12:17:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 12:17:09 crc kubenswrapper[4689]: I1210 12:17:09.401692 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:17:09 crc kubenswrapper[4689]: I1210 12:17:09.401810 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:17:09 crc kubenswrapper[4689]: I1210 12:17:09.401848 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:17:09 crc kubenswrapper[4689]: I1210 12:17:09.401877 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:17:09 crc kubenswrapper[4689]: I1210 12:17:09.401898 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:17:09Z","lastTransitionTime":"2025-12-10T12:17:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 12:17:09 crc kubenswrapper[4689]: I1210 12:17:09.497156 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2h8hs" Dec 10 12:17:09 crc kubenswrapper[4689]: I1210 12:17:09.497202 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 10 12:17:09 crc kubenswrapper[4689]: I1210 12:17:09.497306 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 10 12:17:09 crc kubenswrapper[4689]: E1210 12:17:09.497649 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2h8hs" podUID="3a7c472c-c2dd-4a9d-aefb-48ac5cc196f8" Dec 10 12:17:09 crc kubenswrapper[4689]: E1210 12:17:09.497730 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 10 12:17:09 crc kubenswrapper[4689]: E1210 12:17:09.497911 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 10 12:17:09 crc kubenswrapper[4689]: I1210 12:17:09.504503 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:17:09 crc kubenswrapper[4689]: I1210 12:17:09.504576 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:17:09 crc kubenswrapper[4689]: I1210 12:17:09.504594 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:17:09 crc kubenswrapper[4689]: I1210 12:17:09.504616 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:17:09 crc kubenswrapper[4689]: I1210 12:17:09.504631 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:17:09Z","lastTransitionTime":"2025-12-10T12:17:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 12:17:09 crc kubenswrapper[4689]: I1210 12:17:09.607791 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:17:09 crc kubenswrapper[4689]: I1210 12:17:09.607861 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:17:09 crc kubenswrapper[4689]: I1210 12:17:09.607883 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:17:09 crc kubenswrapper[4689]: I1210 12:17:09.607913 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:17:09 crc kubenswrapper[4689]: I1210 12:17:09.607935 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:17:09Z","lastTransitionTime":"2025-12-10T12:17:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 12:17:09 crc kubenswrapper[4689]: I1210 12:17:09.710961 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:17:09 crc kubenswrapper[4689]: I1210 12:17:09.711057 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:17:09 crc kubenswrapper[4689]: I1210 12:17:09.711078 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:17:09 crc kubenswrapper[4689]: I1210 12:17:09.711103 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:17:09 crc kubenswrapper[4689]: I1210 12:17:09.711120 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:17:09Z","lastTransitionTime":"2025-12-10T12:17:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 12:17:09 crc kubenswrapper[4689]: I1210 12:17:09.813832 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:17:09 crc kubenswrapper[4689]: I1210 12:17:09.813933 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:17:09 crc kubenswrapper[4689]: I1210 12:17:09.813959 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:17:09 crc kubenswrapper[4689]: I1210 12:17:09.814119 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:17:09 crc kubenswrapper[4689]: I1210 12:17:09.814148 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:17:09Z","lastTransitionTime":"2025-12-10T12:17:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 12:17:09 crc kubenswrapper[4689]: I1210 12:17:09.916477 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:17:09 crc kubenswrapper[4689]: I1210 12:17:09.916520 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:17:09 crc kubenswrapper[4689]: I1210 12:17:09.916542 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:17:09 crc kubenswrapper[4689]: I1210 12:17:09.916566 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:17:09 crc kubenswrapper[4689]: I1210 12:17:09.916581 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:17:09Z","lastTransitionTime":"2025-12-10T12:17:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 12:17:10 crc kubenswrapper[4689]: I1210 12:17:10.018562 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:17:10 crc kubenswrapper[4689]: I1210 12:17:10.018603 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:17:10 crc kubenswrapper[4689]: I1210 12:17:10.018627 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:17:10 crc kubenswrapper[4689]: I1210 12:17:10.018650 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:17:10 crc kubenswrapper[4689]: I1210 12:17:10.018665 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:17:10Z","lastTransitionTime":"2025-12-10T12:17:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 12:17:10 crc kubenswrapper[4689]: I1210 12:17:10.120908 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:17:10 crc kubenswrapper[4689]: I1210 12:17:10.120964 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:17:10 crc kubenswrapper[4689]: I1210 12:17:10.121001 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:17:10 crc kubenswrapper[4689]: I1210 12:17:10.121017 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:17:10 crc kubenswrapper[4689]: I1210 12:17:10.121028 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:17:10Z","lastTransitionTime":"2025-12-10T12:17:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 12:17:10 crc kubenswrapper[4689]: I1210 12:17:10.223648 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:17:10 crc kubenswrapper[4689]: I1210 12:17:10.223732 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:17:10 crc kubenswrapper[4689]: I1210 12:17:10.223747 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:17:10 crc kubenswrapper[4689]: I1210 12:17:10.223765 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:17:10 crc kubenswrapper[4689]: I1210 12:17:10.223778 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:17:10Z","lastTransitionTime":"2025-12-10T12:17:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 12:17:10 crc kubenswrapper[4689]: I1210 12:17:10.325812 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:17:10 crc kubenswrapper[4689]: I1210 12:17:10.325862 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:17:10 crc kubenswrapper[4689]: I1210 12:17:10.325874 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:17:10 crc kubenswrapper[4689]: I1210 12:17:10.325892 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:17:10 crc kubenswrapper[4689]: I1210 12:17:10.325906 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:17:10Z","lastTransitionTime":"2025-12-10T12:17:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 12:17:10 crc kubenswrapper[4689]: I1210 12:17:10.428149 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:17:10 crc kubenswrapper[4689]: I1210 12:17:10.428222 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:17:10 crc kubenswrapper[4689]: I1210 12:17:10.428239 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:17:10 crc kubenswrapper[4689]: I1210 12:17:10.428270 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:17:10 crc kubenswrapper[4689]: I1210 12:17:10.428288 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:17:10Z","lastTransitionTime":"2025-12-10T12:17:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 12:17:10 crc kubenswrapper[4689]: I1210 12:17:10.498215 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 10 12:17:10 crc kubenswrapper[4689]: E1210 12:17:10.498450 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 10 12:17:10 crc kubenswrapper[4689]: I1210 12:17:10.530894 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:17:10 crc kubenswrapper[4689]: I1210 12:17:10.530956 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:17:10 crc kubenswrapper[4689]: I1210 12:17:10.531009 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:17:10 crc kubenswrapper[4689]: I1210 12:17:10.531043 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:17:10 crc kubenswrapper[4689]: I1210 12:17:10.531063 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:17:10Z","lastTransitionTime":"2025-12-10T12:17:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 12:17:10 crc kubenswrapper[4689]: I1210 12:17:10.633253 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:17:10 crc kubenswrapper[4689]: I1210 12:17:10.633297 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:17:10 crc kubenswrapper[4689]: I1210 12:17:10.633309 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:17:10 crc kubenswrapper[4689]: I1210 12:17:10.633325 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:17:10 crc kubenswrapper[4689]: I1210 12:17:10.633336 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:17:10Z","lastTransitionTime":"2025-12-10T12:17:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 12:17:10 crc kubenswrapper[4689]: I1210 12:17:10.735655 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:17:10 crc kubenswrapper[4689]: I1210 12:17:10.735699 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:17:10 crc kubenswrapper[4689]: I1210 12:17:10.735708 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:17:10 crc kubenswrapper[4689]: I1210 12:17:10.735720 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:17:10 crc kubenswrapper[4689]: I1210 12:17:10.735729 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:17:10Z","lastTransitionTime":"2025-12-10T12:17:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 12:17:10 crc kubenswrapper[4689]: I1210 12:17:10.837517 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:17:10 crc kubenswrapper[4689]: I1210 12:17:10.837572 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:17:10 crc kubenswrapper[4689]: I1210 12:17:10.837588 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:17:10 crc kubenswrapper[4689]: I1210 12:17:10.837609 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:17:10 crc kubenswrapper[4689]: I1210 12:17:10.837628 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:17:10Z","lastTransitionTime":"2025-12-10T12:17:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 12:17:10 crc kubenswrapper[4689]: I1210 12:17:10.940220 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:17:10 crc kubenswrapper[4689]: I1210 12:17:10.940258 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:17:10 crc kubenswrapper[4689]: I1210 12:17:10.940266 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:17:10 crc kubenswrapper[4689]: I1210 12:17:10.940279 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:17:10 crc kubenswrapper[4689]: I1210 12:17:10.940290 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:17:10Z","lastTransitionTime":"2025-12-10T12:17:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 12:17:11 crc kubenswrapper[4689]: I1210 12:17:11.043129 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:17:11 crc kubenswrapper[4689]: I1210 12:17:11.043167 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:17:11 crc kubenswrapper[4689]: I1210 12:17:11.043177 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:17:11 crc kubenswrapper[4689]: I1210 12:17:11.043191 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:17:11 crc kubenswrapper[4689]: I1210 12:17:11.043201 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:17:11Z","lastTransitionTime":"2025-12-10T12:17:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 12:17:11 crc kubenswrapper[4689]: I1210 12:17:11.145929 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:17:11 crc kubenswrapper[4689]: I1210 12:17:11.145987 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:17:11 crc kubenswrapper[4689]: I1210 12:17:11.145999 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:17:11 crc kubenswrapper[4689]: I1210 12:17:11.146015 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:17:11 crc kubenswrapper[4689]: I1210 12:17:11.146028 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:17:11Z","lastTransitionTime":"2025-12-10T12:17:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 12:17:11 crc kubenswrapper[4689]: I1210 12:17:11.248358 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:17:11 crc kubenswrapper[4689]: I1210 12:17:11.248408 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:17:11 crc kubenswrapper[4689]: I1210 12:17:11.248423 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:17:11 crc kubenswrapper[4689]: I1210 12:17:11.248442 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:17:11 crc kubenswrapper[4689]: I1210 12:17:11.248456 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:17:11Z","lastTransitionTime":"2025-12-10T12:17:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 12:17:11 crc kubenswrapper[4689]: I1210 12:17:11.351779 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:17:11 crc kubenswrapper[4689]: I1210 12:17:11.351943 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:17:11 crc kubenswrapper[4689]: I1210 12:17:11.352010 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:17:11 crc kubenswrapper[4689]: I1210 12:17:11.352042 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:17:11 crc kubenswrapper[4689]: I1210 12:17:11.352066 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:17:11Z","lastTransitionTime":"2025-12-10T12:17:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 12:17:11 crc kubenswrapper[4689]: I1210 12:17:11.455051 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:17:11 crc kubenswrapper[4689]: I1210 12:17:11.455112 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:17:11 crc kubenswrapper[4689]: I1210 12:17:11.455133 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:17:11 crc kubenswrapper[4689]: I1210 12:17:11.455161 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:17:11 crc kubenswrapper[4689]: I1210 12:17:11.455184 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:17:11Z","lastTransitionTime":"2025-12-10T12:17:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 12:17:11 crc kubenswrapper[4689]: I1210 12:17:11.497385 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2h8hs" Dec 10 12:17:11 crc kubenswrapper[4689]: I1210 12:17:11.497448 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 10 12:17:11 crc kubenswrapper[4689]: I1210 12:17:11.497401 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 10 12:17:11 crc kubenswrapper[4689]: E1210 12:17:11.497545 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-2h8hs" podUID="3a7c472c-c2dd-4a9d-aefb-48ac5cc196f8" Dec 10 12:17:11 crc kubenswrapper[4689]: E1210 12:17:11.497668 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 10 12:17:11 crc kubenswrapper[4689]: E1210 12:17:11.497737 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 10 12:17:11 crc kubenswrapper[4689]: I1210 12:17:11.557524 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:17:11 crc kubenswrapper[4689]: I1210 12:17:11.557590 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:17:11 crc kubenswrapper[4689]: I1210 12:17:11.557601 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:17:11 crc kubenswrapper[4689]: I1210 12:17:11.557616 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:17:11 crc kubenswrapper[4689]: I1210 12:17:11.557626 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:17:11Z","lastTransitionTime":"2025-12-10T12:17:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 12:17:11 crc kubenswrapper[4689]: I1210 12:17:11.659528 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:17:11 crc kubenswrapper[4689]: I1210 12:17:11.659600 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:17:11 crc kubenswrapper[4689]: I1210 12:17:11.659610 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:17:11 crc kubenswrapper[4689]: I1210 12:17:11.659626 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:17:11 crc kubenswrapper[4689]: I1210 12:17:11.659636 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:17:11Z","lastTransitionTime":"2025-12-10T12:17:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 12:17:11 crc kubenswrapper[4689]: I1210 12:17:11.762407 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:17:11 crc kubenswrapper[4689]: I1210 12:17:11.762476 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:17:11 crc kubenswrapper[4689]: I1210 12:17:11.762486 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:17:11 crc kubenswrapper[4689]: I1210 12:17:11.762503 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:17:11 crc kubenswrapper[4689]: I1210 12:17:11.762513 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:17:11Z","lastTransitionTime":"2025-12-10T12:17:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 12:17:11 crc kubenswrapper[4689]: I1210 12:17:11.864499 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:17:11 crc kubenswrapper[4689]: I1210 12:17:11.864537 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:17:11 crc kubenswrapper[4689]: I1210 12:17:11.864546 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:17:11 crc kubenswrapper[4689]: I1210 12:17:11.864559 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:17:11 crc kubenswrapper[4689]: I1210 12:17:11.864567 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:17:11Z","lastTransitionTime":"2025-12-10T12:17:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 12:17:11 crc kubenswrapper[4689]: I1210 12:17:11.967262 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:17:11 crc kubenswrapper[4689]: I1210 12:17:11.967319 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:17:11 crc kubenswrapper[4689]: I1210 12:17:11.967330 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:17:11 crc kubenswrapper[4689]: I1210 12:17:11.967344 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:17:11 crc kubenswrapper[4689]: I1210 12:17:11.967353 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:17:11Z","lastTransitionTime":"2025-12-10T12:17:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 12:17:12 crc kubenswrapper[4689]: I1210 12:17:12.070591 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:17:12 crc kubenswrapper[4689]: I1210 12:17:12.070656 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:17:12 crc kubenswrapper[4689]: I1210 12:17:12.070674 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:17:12 crc kubenswrapper[4689]: I1210 12:17:12.070695 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:17:12 crc kubenswrapper[4689]: I1210 12:17:12.070710 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:17:12Z","lastTransitionTime":"2025-12-10T12:17:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 12:17:12 crc kubenswrapper[4689]: I1210 12:17:12.173002 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:17:12 crc kubenswrapper[4689]: I1210 12:17:12.173055 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:17:12 crc kubenswrapper[4689]: I1210 12:17:12.173069 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:17:12 crc kubenswrapper[4689]: I1210 12:17:12.173086 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:17:12 crc kubenswrapper[4689]: I1210 12:17:12.173099 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:17:12Z","lastTransitionTime":"2025-12-10T12:17:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 12:17:12 crc kubenswrapper[4689]: I1210 12:17:12.275591 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:17:12 crc kubenswrapper[4689]: I1210 12:17:12.275655 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:17:12 crc kubenswrapper[4689]: I1210 12:17:12.275663 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:17:12 crc kubenswrapper[4689]: I1210 12:17:12.275675 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:17:12 crc kubenswrapper[4689]: I1210 12:17:12.275683 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:17:12Z","lastTransitionTime":"2025-12-10T12:17:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 12:17:12 crc kubenswrapper[4689]: I1210 12:17:12.378933 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:17:12 crc kubenswrapper[4689]: I1210 12:17:12.379020 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:17:12 crc kubenswrapper[4689]: I1210 12:17:12.379042 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:17:12 crc kubenswrapper[4689]: I1210 12:17:12.379066 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:17:12 crc kubenswrapper[4689]: I1210 12:17:12.379083 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:17:12Z","lastTransitionTime":"2025-12-10T12:17:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 12:17:12 crc kubenswrapper[4689]: I1210 12:17:12.481558 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:17:12 crc kubenswrapper[4689]: I1210 12:17:12.481591 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:17:12 crc kubenswrapper[4689]: I1210 12:17:12.481599 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:17:12 crc kubenswrapper[4689]: I1210 12:17:12.481613 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:17:12 crc kubenswrapper[4689]: I1210 12:17:12.481623 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:17:12Z","lastTransitionTime":"2025-12-10T12:17:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 12:17:12 crc kubenswrapper[4689]: I1210 12:17:12.497754 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 10 12:17:12 crc kubenswrapper[4689]: E1210 12:17:12.497918 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 10 12:17:12 crc kubenswrapper[4689]: I1210 12:17:12.532504 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-r6wmt" podStartSLOduration=71.532487073 podStartE2EDuration="1m11.532487073s" podCreationTimestamp="2025-12-10 12:16:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 12:17:12.519048324 +0000 UTC m=+100.307129482" watchObservedRunningTime="2025-12-10 12:17:12.532487073 +0000 UTC m=+100.320568211" Dec 10 12:17:12 crc kubenswrapper[4689]: I1210 12:17:12.532669 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-dk9hq" podStartSLOduration=71.532665818 podStartE2EDuration="1m11.532665818s" podCreationTimestamp="2025-12-10 12:16:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 12:17:12.532465973 +0000 UTC m=+100.320547131" watchObservedRunningTime="2025-12-10 12:17:12.532665818 +0000 UTC m=+100.320746956" Dec 10 12:17:12 crc kubenswrapper[4689]: I1210 12:17:12.561551 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=50.561519454 podStartE2EDuration="50.561519454s" podCreationTimestamp="2025-12-10 12:16:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 12:17:12.561056893 +0000 UTC m=+100.349138051" watchObservedRunningTime="2025-12-10 12:17:12.561519454 +0000 UTC m=+100.349600632" Dec 10 12:17:12 crc kubenswrapper[4689]: I1210 12:17:12.584825 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:17:12 crc kubenswrapper[4689]: I1210 12:17:12.584873 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:17:12 crc kubenswrapper[4689]: I1210 12:17:12.584885 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:17:12 crc kubenswrapper[4689]: I1210 12:17:12.584903 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:17:12 crc kubenswrapper[4689]: I1210 12:17:12.584915 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:17:12Z","lastTransitionTime":"2025-12-10T12:17:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 12:17:12 crc kubenswrapper[4689]: I1210 12:17:12.614695 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-db6zk" podStartSLOduration=71.614673707 podStartE2EDuration="1m11.614673707s" podCreationTimestamp="2025-12-10 12:16:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 12:17:12.614293658 +0000 UTC m=+100.402374806" watchObservedRunningTime="2025-12-10 12:17:12.614673707 +0000 UTC m=+100.402754855" Dec 10 12:17:12 crc kubenswrapper[4689]: I1210 12:17:12.631738 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-k7kbl" podStartSLOduration=71.631715745 podStartE2EDuration="1m11.631715745s" podCreationTimestamp="2025-12-10 12:16:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 12:17:12.631285364 +0000 UTC m=+100.419366522" watchObservedRunningTime="2025-12-10 12:17:12.631715745 +0000 UTC m=+100.419796893" Dec 10 12:17:12 crc kubenswrapper[4689]: I1210 12:17:12.687762 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:17:12 crc kubenswrapper[4689]: I1210 12:17:12.687797 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:17:12 crc kubenswrapper[4689]: I1210 12:17:12.687808 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:17:12 crc kubenswrapper[4689]: I1210 12:17:12.687823 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:17:12 crc kubenswrapper[4689]: I1210 12:17:12.687836 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:17:12Z","lastTransitionTime":"2025-12-10T12:17:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 12:17:12 crc kubenswrapper[4689]: I1210 12:17:12.691952 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=12.691940141 podStartE2EDuration="12.691940141s" podCreationTimestamp="2025-12-10 12:17:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 12:17:12.668527987 +0000 UTC m=+100.456609135" watchObservedRunningTime="2025-12-10 12:17:12.691940141 +0000 UTC m=+100.480021289" Dec 10 12:17:12 crc kubenswrapper[4689]: I1210 12:17:12.709000 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=75.708941638 podStartE2EDuration="1m15.708941638s" podCreationTimestamp="2025-12-10 12:15:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 12:17:12.708713833 +0000 UTC m=+100.496794971" watchObservedRunningTime="2025-12-10 12:17:12.708941638 +0000 UTC m=+100.497022776" Dec 10 12:17:12 crc kubenswrapper[4689]: I1210 12:17:12.709558 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=75.709534122 podStartE2EDuration="1m15.709534122s" podCreationTimestamp="2025-12-10 12:15:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 12:17:12.692485034 +0000 UTC m=+100.480566212" watchObservedRunningTime="2025-12-10 12:17:12.709534122 +0000 UTC m=+100.497615260" Dec 10 12:17:12 crc kubenswrapper[4689]: I1210 12:17:12.771215 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-6ffqh" podStartSLOduration=71.771198573 podStartE2EDuration="1m11.771198573s" podCreationTimestamp="2025-12-10 12:16:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 12:17:12.769919602 +0000 UTC m=+100.558000740" watchObservedRunningTime="2025-12-10 12:17:12.771198573 +0000 UTC m=+100.559279711" Dec 10 12:17:12 crc kubenswrapper[4689]: I1210 12:17:12.789943 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:17:12 crc kubenswrapper[4689]: I1210 12:17:12.790005 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:17:12 crc kubenswrapper[4689]: I1210 12:17:12.790019 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:17:12 crc kubenswrapper[4689]: I1210 12:17:12.790034 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:17:12 crc kubenswrapper[4689]: I1210 12:17:12.790046 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:17:12Z","lastTransitionTime":"2025-12-10T12:17:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 12:17:12 crc kubenswrapper[4689]: I1210 12:17:12.795493 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-77s7q" podStartSLOduration=71.795478419 podStartE2EDuration="1m11.795478419s" podCreationTimestamp="2025-12-10 12:16:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 12:17:12.79470539 +0000 UTC m=+100.582786528" watchObservedRunningTime="2025-12-10 12:17:12.795478419 +0000 UTC m=+100.583559557" Dec 10 12:17:12 crc kubenswrapper[4689]: I1210 12:17:12.892886 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:17:12 crc kubenswrapper[4689]: I1210 12:17:12.893144 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:17:12 crc kubenswrapper[4689]: I1210 12:17:12.893166 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:17:12 crc kubenswrapper[4689]: I1210 12:17:12.893558 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:17:12 crc kubenswrapper[4689]: I1210 12:17:12.893580 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:17:12Z","lastTransitionTime":"2025-12-10T12:17:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 12:17:12 crc kubenswrapper[4689]: I1210 12:17:12.996870 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:17:12 crc kubenswrapper[4689]: I1210 12:17:12.996910 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:17:12 crc kubenswrapper[4689]: I1210 12:17:12.996921 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:17:12 crc kubenswrapper[4689]: I1210 12:17:12.996937 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:17:12 crc kubenswrapper[4689]: I1210 12:17:12.996947 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:17:12Z","lastTransitionTime":"2025-12-10T12:17:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 12:17:13 crc kubenswrapper[4689]: I1210 12:17:13.099929 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:17:13 crc kubenswrapper[4689]: I1210 12:17:13.100070 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:17:13 crc kubenswrapper[4689]: I1210 12:17:13.100097 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:17:13 crc kubenswrapper[4689]: I1210 12:17:13.100127 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:17:13 crc kubenswrapper[4689]: I1210 12:17:13.100152 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:17:13Z","lastTransitionTime":"2025-12-10T12:17:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 12:17:13 crc kubenswrapper[4689]: I1210 12:17:13.202559 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:17:13 crc kubenswrapper[4689]: I1210 12:17:13.202604 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:17:13 crc kubenswrapper[4689]: I1210 12:17:13.202619 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:17:13 crc kubenswrapper[4689]: I1210 12:17:13.202638 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:17:13 crc kubenswrapper[4689]: I1210 12:17:13.202652 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:17:13Z","lastTransitionTime":"2025-12-10T12:17:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 12:17:13 crc kubenswrapper[4689]: I1210 12:17:13.305491 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:17:13 crc kubenswrapper[4689]: I1210 12:17:13.305537 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:17:13 crc kubenswrapper[4689]: I1210 12:17:13.305551 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:17:13 crc kubenswrapper[4689]: I1210 12:17:13.305570 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:17:13 crc kubenswrapper[4689]: I1210 12:17:13.305586 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:17:13Z","lastTransitionTime":"2025-12-10T12:17:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 12:17:13 crc kubenswrapper[4689]: I1210 12:17:13.408243 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:17:13 crc kubenswrapper[4689]: I1210 12:17:13.408309 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:17:13 crc kubenswrapper[4689]: I1210 12:17:13.408324 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:17:13 crc kubenswrapper[4689]: I1210 12:17:13.408341 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:17:13 crc kubenswrapper[4689]: I1210 12:17:13.408352 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:17:13Z","lastTransitionTime":"2025-12-10T12:17:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 12:17:13 crc kubenswrapper[4689]: I1210 12:17:13.497472 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 10 12:17:13 crc kubenswrapper[4689]: I1210 12:17:13.497728 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2h8hs" Dec 10 12:17:13 crc kubenswrapper[4689]: I1210 12:17:13.498341 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 10 12:17:13 crc kubenswrapper[4689]: E1210 12:17:13.498342 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 10 12:17:13 crc kubenswrapper[4689]: E1210 12:17:13.498473 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2h8hs" podUID="3a7c472c-c2dd-4a9d-aefb-48ac5cc196f8" Dec 10 12:17:13 crc kubenswrapper[4689]: E1210 12:17:13.498664 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 10 12:17:13 crc kubenswrapper[4689]: I1210 12:17:13.511908 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:17:13 crc kubenswrapper[4689]: I1210 12:17:13.511997 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:17:13 crc kubenswrapper[4689]: I1210 12:17:13.512015 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:17:13 crc kubenswrapper[4689]: I1210 12:17:13.512042 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:17:13 crc kubenswrapper[4689]: I1210 12:17:13.512060 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:17:13Z","lastTransitionTime":"2025-12-10T12:17:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 12:17:13 crc kubenswrapper[4689]: I1210 12:17:13.522307 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-crc"] Dec 10 12:17:13 crc kubenswrapper[4689]: I1210 12:17:13.615368 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:17:13 crc kubenswrapper[4689]: I1210 12:17:13.615418 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:17:13 crc kubenswrapper[4689]: I1210 12:17:13.615435 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:17:13 crc kubenswrapper[4689]: I1210 12:17:13.615460 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:17:13 crc kubenswrapper[4689]: I1210 12:17:13.615481 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:17:13Z","lastTransitionTime":"2025-12-10T12:17:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 12:17:13 crc kubenswrapper[4689]: I1210 12:17:13.726163 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:17:13 crc kubenswrapper[4689]: I1210 12:17:13.726210 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:17:13 crc kubenswrapper[4689]: I1210 12:17:13.726224 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:17:13 crc kubenswrapper[4689]: I1210 12:17:13.726263 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:17:13 crc kubenswrapper[4689]: I1210 12:17:13.726277 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:17:13Z","lastTransitionTime":"2025-12-10T12:17:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 12:17:13 crc kubenswrapper[4689]: I1210 12:17:13.829526 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:17:13 crc kubenswrapper[4689]: I1210 12:17:13.829577 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:17:13 crc kubenswrapper[4689]: I1210 12:17:13.829594 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:17:13 crc kubenswrapper[4689]: I1210 12:17:13.829615 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:17:13 crc kubenswrapper[4689]: I1210 12:17:13.829629 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:17:13Z","lastTransitionTime":"2025-12-10T12:17:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 12:17:13 crc kubenswrapper[4689]: I1210 12:17:13.858183 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 12:17:13 crc kubenswrapper[4689]: I1210 12:17:13.858230 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 12:17:13 crc kubenswrapper[4689]: I1210 12:17:13.858241 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 12:17:13 crc kubenswrapper[4689]: I1210 12:17:13.858262 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 12:17:13 crc kubenswrapper[4689]: I1210 12:17:13.858275 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T12:17:13Z","lastTransitionTime":"2025-12-10T12:17:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 12:17:13 crc kubenswrapper[4689]: I1210 12:17:13.924267 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-5c965bbfc6-v2vpg"] Dec 10 12:17:13 crc kubenswrapper[4689]: I1210 12:17:13.924887 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-v2vpg" Dec 10 12:17:13 crc kubenswrapper[4689]: I1210 12:17:13.928190 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Dec 10 12:17:13 crc kubenswrapper[4689]: I1210 12:17:13.928321 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Dec 10 12:17:13 crc kubenswrapper[4689]: I1210 12:17:13.929518 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Dec 10 12:17:13 crc kubenswrapper[4689]: I1210 12:17:13.934140 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Dec 10 12:17:13 crc kubenswrapper[4689]: I1210 12:17:13.952063 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5f6a2fba-8a72-4b64-b060-5fb422bc483d-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-v2vpg\" (UID: \"5f6a2fba-8a72-4b64-b060-5fb422bc483d\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-v2vpg" Dec 10 12:17:13 crc kubenswrapper[4689]: I1210 12:17:13.952302 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/5f6a2fba-8a72-4b64-b060-5fb422bc483d-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-v2vpg\" (UID: \"5f6a2fba-8a72-4b64-b060-5fb422bc483d\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-v2vpg" Dec 10 12:17:13 crc kubenswrapper[4689]: I1210 12:17:13.952395 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/5f6a2fba-8a72-4b64-b060-5fb422bc483d-service-ca\") pod \"cluster-version-operator-5c965bbfc6-v2vpg\" (UID: \"5f6a2fba-8a72-4b64-b060-5fb422bc483d\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-v2vpg" Dec 10 12:17:13 crc kubenswrapper[4689]: I1210 12:17:13.952509 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5f6a2fba-8a72-4b64-b060-5fb422bc483d-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-v2vpg\" (UID: \"5f6a2fba-8a72-4b64-b060-5fb422bc483d\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-v2vpg" Dec 10 12:17:13 crc kubenswrapper[4689]: I1210 12:17:13.952689 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/5f6a2fba-8a72-4b64-b060-5fb422bc483d-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-v2vpg\" (UID: \"5f6a2fba-8a72-4b64-b060-5fb422bc483d\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-v2vpg" Dec 10 12:17:13 crc kubenswrapper[4689]: I1210 12:17:13.980300 4689 pod_startup_latency_tracker.go:104] "Observed 
pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=0.980264915 podStartE2EDuration="980.264915ms" podCreationTimestamp="2025-12-10 12:17:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 12:17:13.975289093 +0000 UTC m=+101.763370271" watchObservedRunningTime="2025-12-10 12:17:13.980264915 +0000 UTC m=+101.768346093" Dec 10 12:17:14 crc kubenswrapper[4689]: I1210 12:17:14.053294 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/5f6a2fba-8a72-4b64-b060-5fb422bc483d-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-v2vpg\" (UID: \"5f6a2fba-8a72-4b64-b060-5fb422bc483d\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-v2vpg" Dec 10 12:17:14 crc kubenswrapper[4689]: I1210 12:17:14.053381 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/5f6a2fba-8a72-4b64-b060-5fb422bc483d-service-ca\") pod \"cluster-version-operator-5c965bbfc6-v2vpg\" (UID: \"5f6a2fba-8a72-4b64-b060-5fb422bc483d\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-v2vpg" Dec 10 12:17:14 crc kubenswrapper[4689]: I1210 12:17:14.053429 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5f6a2fba-8a72-4b64-b060-5fb422bc483d-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-v2vpg\" (UID: \"5f6a2fba-8a72-4b64-b060-5fb422bc483d\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-v2vpg" Dec 10 12:17:14 crc kubenswrapper[4689]: I1210 12:17:14.053438 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/5f6a2fba-8a72-4b64-b060-5fb422bc483d-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-v2vpg\" (UID: \"5f6a2fba-8a72-4b64-b060-5fb422bc483d\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-v2vpg" Dec 10 12:17:14 crc kubenswrapper[4689]: I1210 12:17:14.053476 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/5f6a2fba-8a72-4b64-b060-5fb422bc483d-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-v2vpg\" (UID: \"5f6a2fba-8a72-4b64-b060-5fb422bc483d\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-v2vpg" Dec 10 12:17:14 crc kubenswrapper[4689]: I1210 12:17:14.053535 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/5f6a2fba-8a72-4b64-b060-5fb422bc483d-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-v2vpg\" (UID: \"5f6a2fba-8a72-4b64-b060-5fb422bc483d\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-v2vpg" Dec 10 12:17:14 crc kubenswrapper[4689]: I1210 12:17:14.053604 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5f6a2fba-8a72-4b64-b060-5fb422bc483d-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-v2vpg\" (UID: \"5f6a2fba-8a72-4b64-b060-5fb422bc483d\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-v2vpg" Dec 10 12:17:14 crc kubenswrapper[4689]: I1210 12:17:14.054530 4689 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/5f6a2fba-8a72-4b64-b060-5fb422bc483d-service-ca\") pod \"cluster-version-operator-5c965bbfc6-v2vpg\" (UID: \"5f6a2fba-8a72-4b64-b060-5fb422bc483d\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-v2vpg" Dec 10 12:17:14 crc kubenswrapper[4689]: I1210 12:17:14.065283 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5f6a2fba-8a72-4b64-b060-5fb422bc483d-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-v2vpg\" (UID: \"5f6a2fba-8a72-4b64-b060-5fb422bc483d\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-v2vpg" Dec 10 12:17:14 crc kubenswrapper[4689]: I1210 12:17:14.087284 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5f6a2fba-8a72-4b64-b060-5fb422bc483d-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-v2vpg\" (UID: \"5f6a2fba-8a72-4b64-b060-5fb422bc483d\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-v2vpg" Dec 10 12:17:14 crc kubenswrapper[4689]: I1210 12:17:14.251230 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-v2vpg" Dec 10 12:17:14 crc kubenswrapper[4689]: I1210 12:17:14.416558 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-v2vpg" event={"ID":"5f6a2fba-8a72-4b64-b060-5fb422bc483d","Type":"ContainerStarted","Data":"e1d9d3b1d9f1303457f04543f6020d810fd2d4f1e938a00634b86f3ff6c4cfbc"} Dec 10 12:17:14 crc kubenswrapper[4689]: I1210 12:17:14.498387 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 10 12:17:14 crc kubenswrapper[4689]: E1210 12:17:14.499251 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 10 12:17:15 crc kubenswrapper[4689]: I1210 12:17:15.421615 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-v2vpg" event={"ID":"5f6a2fba-8a72-4b64-b060-5fb422bc483d","Type":"ContainerStarted","Data":"bfee09e3fe75056a4794f5994684461b65cb7e57f4aa677390843bf4b1c26dab"} Dec 10 12:17:15 crc kubenswrapper[4689]: I1210 12:17:15.497194 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 10 12:17:15 crc kubenswrapper[4689]: E1210 12:17:15.497388 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 10 12:17:15 crc kubenswrapper[4689]: I1210 12:17:15.497479 4689 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 10 12:17:15 crc kubenswrapper[4689]: E1210 12:17:15.497561 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 10 12:17:15 crc kubenswrapper[4689]: I1210 12:17:15.497633 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2h8hs" Dec 10 12:17:15 crc kubenswrapper[4689]: E1210 12:17:15.497730 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2h8hs" podUID="3a7c472c-c2dd-4a9d-aefb-48ac5cc196f8" Dec 10 12:17:16 crc kubenswrapper[4689]: I1210 12:17:16.497469 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 10 12:17:16 crc kubenswrapper[4689]: E1210 12:17:16.497662 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 10 12:17:17 crc kubenswrapper[4689]: I1210 12:17:17.497493 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 10 12:17:17 crc kubenswrapper[4689]: I1210 12:17:17.497523 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 10 12:17:17 crc kubenswrapper[4689]: E1210 12:17:17.497611 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 10 12:17:17 crc kubenswrapper[4689]: I1210 12:17:17.497696 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2h8hs" Dec 10 12:17:17 crc kubenswrapper[4689]: E1210 12:17:17.498132 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-2h8hs" podUID="3a7c472c-c2dd-4a9d-aefb-48ac5cc196f8" Dec 10 12:17:17 crc kubenswrapper[4689]: E1210 12:17:17.498273 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 10 12:17:18 crc kubenswrapper[4689]: I1210 12:17:18.497237 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 10 12:17:18 crc kubenswrapper[4689]: E1210 12:17:18.497492 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 10 12:17:19 crc kubenswrapper[4689]: I1210 12:17:19.497205 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 10 12:17:19 crc kubenswrapper[4689]: I1210 12:17:19.497202 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 10 12:17:19 crc kubenswrapper[4689]: E1210 12:17:19.497446 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 10 12:17:19 crc kubenswrapper[4689]: E1210 12:17:19.497567 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 10 12:17:19 crc kubenswrapper[4689]: I1210 12:17:19.497202 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2h8hs" Dec 10 12:17:19 crc kubenswrapper[4689]: E1210 12:17:19.497753 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-2h8hs" podUID="3a7c472c-c2dd-4a9d-aefb-48ac5cc196f8" Dec 10 12:17:19 crc kubenswrapper[4689]: I1210 12:17:19.712156 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/3a7c472c-c2dd-4a9d-aefb-48ac5cc196f8-metrics-certs\") pod \"network-metrics-daemon-2h8hs\" (UID: \"3a7c472c-c2dd-4a9d-aefb-48ac5cc196f8\") " pod="openshift-multus/network-metrics-daemon-2h8hs" Dec 10 12:17:19 crc kubenswrapper[4689]: E1210 12:17:19.712400 4689 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 10 12:17:19 crc kubenswrapper[4689]: E1210 12:17:19.712538 4689 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3a7c472c-c2dd-4a9d-aefb-48ac5cc196f8-metrics-certs podName:3a7c472c-c2dd-4a9d-aefb-48ac5cc196f8 nodeName:}" failed. No retries permitted until 2025-12-10 12:18:23.71250034 +0000 UTC m=+171.500581598 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/3a7c472c-c2dd-4a9d-aefb-48ac5cc196f8-metrics-certs") pod "network-metrics-daemon-2h8hs" (UID: "3a7c472c-c2dd-4a9d-aefb-48ac5cc196f8") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 10 12:17:20 crc kubenswrapper[4689]: I1210 12:17:20.497994 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 10 12:17:20 crc kubenswrapper[4689]: E1210 12:17:20.498218 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 10 12:17:20 crc kubenswrapper[4689]: I1210 12:17:20.499418 4689 scope.go:117] "RemoveContainer" containerID="bb00374f8054c3041d91465c99271185be6fdb258a541ba83aed5bb1059178e1" Dec 10 12:17:20 crc kubenswrapper[4689]: E1210 12:17:20.499684 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-d5s24_openshift-ovn-kubernetes(b434fe8e-c4c2-4979-a9b6-8561523c2d9d)\"" pod="openshift-ovn-kubernetes/ovnkube-node-d5s24" podUID="b434fe8e-c4c2-4979-a9b6-8561523c2d9d" Dec 10 12:17:21 crc kubenswrapper[4689]: I1210 12:17:21.497859 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2h8hs" Dec 10 12:17:21 crc kubenswrapper[4689]: I1210 12:17:21.497891 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 10 12:17:21 crc kubenswrapper[4689]: I1210 12:17:21.498018 4689 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 10 12:17:21 crc kubenswrapper[4689]: E1210 12:17:21.498033 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2h8hs" podUID="3a7c472c-c2dd-4a9d-aefb-48ac5cc196f8" Dec 10 12:17:21 crc kubenswrapper[4689]: E1210 12:17:21.498305 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 10 12:17:21 crc kubenswrapper[4689]: E1210 12:17:21.498373 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 10 12:17:22 crc kubenswrapper[4689]: I1210 12:17:22.497623 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 10 12:17:22 crc kubenswrapper[4689]: E1210 12:17:22.499572 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 10 12:17:23 crc kubenswrapper[4689]: I1210 12:17:23.497920 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 10 12:17:23 crc kubenswrapper[4689]: I1210 12:17:23.498022 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 10 12:17:23 crc kubenswrapper[4689]: I1210 12:17:23.498019 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2h8hs" Dec 10 12:17:23 crc kubenswrapper[4689]: E1210 12:17:23.498154 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 10 12:17:23 crc kubenswrapper[4689]: E1210 12:17:23.498404 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2h8hs" podUID="3a7c472c-c2dd-4a9d-aefb-48ac5cc196f8" Dec 10 12:17:23 crc kubenswrapper[4689]: E1210 12:17:23.498522 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 10 12:17:24 crc kubenswrapper[4689]: I1210 12:17:24.498137 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 10 12:17:24 crc kubenswrapper[4689]: E1210 12:17:24.498395 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 10 12:17:25 crc kubenswrapper[4689]: I1210 12:17:25.498176 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2h8hs" Dec 10 12:17:25 crc kubenswrapper[4689]: I1210 12:17:25.498176 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 10 12:17:25 crc kubenswrapper[4689]: E1210 12:17:25.498394 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2h8hs" podUID="3a7c472c-c2dd-4a9d-aefb-48ac5cc196f8" Dec 10 12:17:25 crc kubenswrapper[4689]: I1210 12:17:25.498231 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 10 12:17:25 crc kubenswrapper[4689]: E1210 12:17:25.498623 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 10 12:17:25 crc kubenswrapper[4689]: E1210 12:17:25.498675 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 10 12:17:26 crc kubenswrapper[4689]: I1210 12:17:26.497327 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 10 12:17:26 crc kubenswrapper[4689]: E1210 12:17:26.497513 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 10 12:17:27 crc kubenswrapper[4689]: I1210 12:17:27.497567 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 10 12:17:27 crc kubenswrapper[4689]: I1210 12:17:27.497627 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2h8hs" Dec 10 12:17:27 crc kubenswrapper[4689]: I1210 12:17:27.497669 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 10 12:17:27 crc kubenswrapper[4689]: E1210 12:17:27.498022 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 10 12:17:27 crc kubenswrapper[4689]: E1210 12:17:27.498170 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2h8hs" podUID="3a7c472c-c2dd-4a9d-aefb-48ac5cc196f8" Dec 10 12:17:27 crc kubenswrapper[4689]: E1210 12:17:27.498324 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 10 12:17:28 crc kubenswrapper[4689]: I1210 12:17:28.498117 4689 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 10 12:17:28 crc kubenswrapper[4689]: E1210 12:17:28.498286 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 10 12:17:29 crc kubenswrapper[4689]: I1210 12:17:29.498282 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 10 12:17:29 crc kubenswrapper[4689]: I1210 12:17:29.498366 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2h8hs" Dec 10 12:17:29 crc kubenswrapper[4689]: I1210 12:17:29.498318 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 10 12:17:29 crc kubenswrapper[4689]: E1210 12:17:29.498508 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 10 12:17:29 crc kubenswrapper[4689]: E1210 12:17:29.498620 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 10 12:17:29 crc kubenswrapper[4689]: E1210 12:17:29.498756 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2h8hs" podUID="3a7c472c-c2dd-4a9d-aefb-48ac5cc196f8" Dec 10 12:17:30 crc kubenswrapper[4689]: I1210 12:17:30.497645 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 10 12:17:30 crc kubenswrapper[4689]: E1210 12:17:30.497892 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 10 12:17:31 crc kubenswrapper[4689]: I1210 12:17:31.497679 4689 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-2h8hs" Dec 10 12:17:31 crc kubenswrapper[4689]: I1210 12:17:31.497720 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 10 12:17:31 crc kubenswrapper[4689]: I1210 12:17:31.497679 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 10 12:17:31 crc kubenswrapper[4689]: E1210 12:17:31.497850 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 10 12:17:31 crc kubenswrapper[4689]: E1210 12:17:31.498083 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 10 12:17:31 crc kubenswrapper[4689]: E1210 12:17:31.498306 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2h8hs" podUID="3a7c472c-c2dd-4a9d-aefb-48ac5cc196f8" Dec 10 12:17:32 crc kubenswrapper[4689]: I1210 12:17:32.498345 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 10 12:17:32 crc kubenswrapper[4689]: E1210 12:17:32.499316 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 10 12:17:32 crc kubenswrapper[4689]: E1210 12:17:32.530153 4689 kubelet_node_status.go:497] "Node not becoming ready in time after startup" Dec 10 12:17:33 crc kubenswrapper[4689]: I1210 12:17:33.497421 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 10 12:17:33 crc kubenswrapper[4689]: I1210 12:17:33.497531 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2h8hs" Dec 10 12:17:33 crc kubenswrapper[4689]: I1210 12:17:33.497594 4689 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 10 12:17:33 crc kubenswrapper[4689]: E1210 12:17:33.497729 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2h8hs" podUID="3a7c472c-c2dd-4a9d-aefb-48ac5cc196f8" Dec 10 12:17:33 crc kubenswrapper[4689]: E1210 12:17:33.498514 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 10 12:17:33 crc kubenswrapper[4689]: E1210 12:17:33.498645 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 10 12:17:33 crc kubenswrapper[4689]: I1210 12:17:33.498892 4689 scope.go:117] "RemoveContainer" containerID="bb00374f8054c3041d91465c99271185be6fdb258a541ba83aed5bb1059178e1" Dec 10 12:17:33 crc kubenswrapper[4689]: E1210 12:17:33.499297 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-d5s24_openshift-ovn-kubernetes(b434fe8e-c4c2-4979-a9b6-8561523c2d9d)\"" pod="openshift-ovn-kubernetes/ovnkube-node-d5s24" podUID="b434fe8e-c4c2-4979-a9b6-8561523c2d9d" Dec 10 12:17:33 crc kubenswrapper[4689]: E1210 12:17:33.993450 4689 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Dec 10 12:17:34 crc kubenswrapper[4689]: I1210 12:17:34.496223 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-r6wmt_3713b4f8-2ee3-4078-859a-dca17076f9a6/kube-multus/1.log" Dec 10 12:17:34 crc kubenswrapper[4689]: I1210 12:17:34.497348 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 10 12:17:34 crc kubenswrapper[4689]: E1210 12:17:34.497547 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 10 12:17:34 crc kubenswrapper[4689]: I1210 12:17:34.497664 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-r6wmt_3713b4f8-2ee3-4078-859a-dca17076f9a6/kube-multus/0.log" Dec 10 12:17:34 crc kubenswrapper[4689]: I1210 12:17:34.497715 4689 generic.go:334] "Generic (PLEG): container finished" podID="3713b4f8-2ee3-4078-859a-dca17076f9a6" containerID="60b9e7f27ed577b35c0ecc9f2f205e63e25d0fc8c22e4e49e13cd395ff14a83e" exitCode=1 Dec 10 12:17:34 crc kubenswrapper[4689]: I1210 12:17:34.505199 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-r6wmt" event={"ID":"3713b4f8-2ee3-4078-859a-dca17076f9a6","Type":"ContainerDied","Data":"60b9e7f27ed577b35c0ecc9f2f205e63e25d0fc8c22e4e49e13cd395ff14a83e"} Dec 10 12:17:34 crc kubenswrapper[4689]: I1210 12:17:34.505271 4689 scope.go:117] "RemoveContainer" containerID="41b7cefca05f1180d687337ba2a408d0db12aad394a7e9619e004758e5d79ca0" Dec 10 12:17:34 crc kubenswrapper[4689]: I1210 12:17:34.506130 4689 scope.go:117] "RemoveContainer" containerID="60b9e7f27ed577b35c0ecc9f2f205e63e25d0fc8c22e4e49e13cd395ff14a83e" Dec 10 12:17:34 crc kubenswrapper[4689]: E1210 12:17:34.506844 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-multus pod=multus-r6wmt_openshift-multus(3713b4f8-2ee3-4078-859a-dca17076f9a6)\"" pod="openshift-multus/multus-r6wmt" podUID="3713b4f8-2ee3-4078-859a-dca17076f9a6" Dec 10 12:17:34 crc kubenswrapper[4689]: I1210 12:17:34.538468 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-v2vpg" podStartSLOduration=93.538438862 podStartE2EDuration="1m33.538438862s" podCreationTimestamp="2025-12-10 12:16:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 12:17:15.439207811 +0000 UTC m=+103.227288959" watchObservedRunningTime="2025-12-10 12:17:34.538438862 +0000 UTC m=+122.326520040" Dec 10 12:17:35 crc kubenswrapper[4689]: I1210 12:17:35.497787 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 10 12:17:35 crc kubenswrapper[4689]: I1210 12:17:35.497882 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2h8hs" Dec 10 12:17:35 crc kubenswrapper[4689]: E1210 12:17:35.498049 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 10 12:17:35 crc kubenswrapper[4689]: E1210 12:17:35.498251 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-2h8hs" podUID="3a7c472c-c2dd-4a9d-aefb-48ac5cc196f8" Dec 10 12:17:35 crc kubenswrapper[4689]: I1210 12:17:35.498383 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 10 12:17:35 crc kubenswrapper[4689]: E1210 12:17:35.498671 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 10 12:17:35 crc kubenswrapper[4689]: I1210 12:17:35.505159 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-r6wmt_3713b4f8-2ee3-4078-859a-dca17076f9a6/kube-multus/1.log" Dec 10 12:17:36 crc kubenswrapper[4689]: I1210 12:17:36.497700 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 10 12:17:36 crc kubenswrapper[4689]: E1210 12:17:36.497882 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 10 12:17:37 crc kubenswrapper[4689]: I1210 12:17:37.497334 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 10 12:17:37 crc kubenswrapper[4689]: I1210 12:17:37.497370 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 10 12:17:37 crc kubenswrapper[4689]: I1210 12:17:37.497456 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2h8hs" Dec 10 12:17:37 crc kubenswrapper[4689]: E1210 12:17:37.497601 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 10 12:17:37 crc kubenswrapper[4689]: E1210 12:17:37.497749 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 10 12:17:37 crc kubenswrapper[4689]: E1210 12:17:37.497911 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2h8hs" podUID="3a7c472c-c2dd-4a9d-aefb-48ac5cc196f8" Dec 10 12:17:38 crc kubenswrapper[4689]: I1210 12:17:38.498057 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 10 12:17:38 crc kubenswrapper[4689]: E1210 12:17:38.498248 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 10 12:17:38 crc kubenswrapper[4689]: E1210 12:17:38.995918 4689 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Dec 10 12:17:39 crc kubenswrapper[4689]: I1210 12:17:39.498213 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 10 12:17:39 crc kubenswrapper[4689]: I1210 12:17:39.498289 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2h8hs" Dec 10 12:17:39 crc kubenswrapper[4689]: I1210 12:17:39.498233 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 10 12:17:39 crc kubenswrapper[4689]: E1210 12:17:39.498511 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 10 12:17:39 crc kubenswrapper[4689]: E1210 12:17:39.498619 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2h8hs" podUID="3a7c472c-c2dd-4a9d-aefb-48ac5cc196f8" Dec 10 12:17:39 crc kubenswrapper[4689]: E1210 12:17:39.498800 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 10 12:17:40 crc kubenswrapper[4689]: I1210 12:17:40.498548 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 10 12:17:40 crc kubenswrapper[4689]: E1210 12:17:40.498731 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 10 12:17:41 crc kubenswrapper[4689]: I1210 12:17:41.497368 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 10 12:17:41 crc kubenswrapper[4689]: I1210 12:17:41.497460 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 10 12:17:41 crc kubenswrapper[4689]: I1210 12:17:41.497510 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2h8hs" Dec 10 12:17:41 crc kubenswrapper[4689]: E1210 12:17:41.498281 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 10 12:17:41 crc kubenswrapper[4689]: E1210 12:17:41.498578 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 10 12:17:41 crc kubenswrapper[4689]: E1210 12:17:41.498655 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2h8hs" podUID="3a7c472c-c2dd-4a9d-aefb-48ac5cc196f8" Dec 10 12:17:42 crc kubenswrapper[4689]: I1210 12:17:42.497375 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 10 12:17:42 crc kubenswrapper[4689]: E1210 12:17:42.502567 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 10 12:17:43 crc kubenswrapper[4689]: I1210 12:17:43.497688 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 10 12:17:43 crc kubenswrapper[4689]: I1210 12:17:43.497729 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 10 12:17:43 crc kubenswrapper[4689]: E1210 12:17:43.498030 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 10 12:17:43 crc kubenswrapper[4689]: E1210 12:17:43.498140 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 10 12:17:43 crc kubenswrapper[4689]: I1210 12:17:43.498292 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2h8hs" Dec 10 12:17:43 crc kubenswrapper[4689]: E1210 12:17:43.500172 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2h8hs" podUID="3a7c472c-c2dd-4a9d-aefb-48ac5cc196f8" Dec 10 12:17:43 crc kubenswrapper[4689]: E1210 12:17:43.996852 4689 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Dec 10 12:17:44 crc kubenswrapper[4689]: I1210 12:17:44.497410 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 10 12:17:44 crc kubenswrapper[4689]: E1210 12:17:44.497595 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 10 12:17:44 crc kubenswrapper[4689]: I1210 12:17:44.499216 4689 scope.go:117] "RemoveContainer" containerID="bb00374f8054c3041d91465c99271185be6fdb258a541ba83aed5bb1059178e1" Dec 10 12:17:45 crc kubenswrapper[4689]: I1210 12:17:45.497685 4689 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 10 12:17:45 crc kubenswrapper[4689]: I1210 12:17:45.497770 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 10 12:17:45 crc kubenswrapper[4689]: E1210 12:17:45.498211 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 10 12:17:45 crc kubenswrapper[4689]: I1210 12:17:45.497788 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2h8hs" Dec 10 12:17:45 crc kubenswrapper[4689]: E1210 12:17:45.498343 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 10 12:17:45 crc kubenswrapper[4689]: E1210 12:17:45.498937 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2h8hs" podUID="3a7c472c-c2dd-4a9d-aefb-48ac5cc196f8" Dec 10 12:17:45 crc kubenswrapper[4689]: I1210 12:17:45.539485 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-d5s24_b434fe8e-c4c2-4979-a9b6-8561523c2d9d/ovnkube-controller/3.log" Dec 10 12:17:45 crc kubenswrapper[4689]: I1210 12:17:45.544799 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-d5s24" event={"ID":"b434fe8e-c4c2-4979-a9b6-8561523c2d9d","Type":"ContainerStarted","Data":"6c431d5019498721f833fd36391e4a59baa27e8b5605feec6ae956436a5391a2"} Dec 10 12:17:45 crc kubenswrapper[4689]: I1210 12:17:45.545758 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-d5s24" Dec 10 12:17:45 crc kubenswrapper[4689]: I1210 12:17:45.554817 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-2h8hs"] Dec 10 12:17:45 crc kubenswrapper[4689]: I1210 12:17:45.555315 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2h8hs" Dec 10 12:17:45 crc kubenswrapper[4689]: E1210 12:17:45.555711 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-2h8hs" podUID="3a7c472c-c2dd-4a9d-aefb-48ac5cc196f8" Dec 10 12:17:45 crc kubenswrapper[4689]: I1210 12:17:45.601541 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-d5s24" podStartSLOduration=104.601513892 podStartE2EDuration="1m44.601513892s" podCreationTimestamp="2025-12-10 12:16:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 12:17:45.599735198 +0000 UTC m=+133.387816346" watchObservedRunningTime="2025-12-10 12:17:45.601513892 +0000 UTC m=+133.389595060" Dec 10 12:17:46 crc kubenswrapper[4689]: I1210 12:17:46.497359 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 10 12:17:46 crc kubenswrapper[4689]: E1210 12:17:46.497621 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 10 12:17:47 crc kubenswrapper[4689]: I1210 12:17:47.497170 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 10 12:17:47 crc kubenswrapper[4689]: I1210 12:17:47.497215 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 10 12:17:47 crc kubenswrapper[4689]: I1210 12:17:47.497183 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2h8hs" Dec 10 12:17:47 crc kubenswrapper[4689]: E1210 12:17:47.497309 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 10 12:17:47 crc kubenswrapper[4689]: E1210 12:17:47.497408 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2h8hs" podUID="3a7c472c-c2dd-4a9d-aefb-48ac5cc196f8" Dec 10 12:17:47 crc kubenswrapper[4689]: E1210 12:17:47.497638 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 10 12:17:48 crc kubenswrapper[4689]: I1210 12:17:48.498079 4689 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 10 12:17:48 crc kubenswrapper[4689]: E1210 12:17:48.498197 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 10 12:17:48 crc kubenswrapper[4689]: E1210 12:17:48.998016 4689 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Dec 10 12:17:49 crc kubenswrapper[4689]: I1210 12:17:49.497333 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 10 12:17:49 crc kubenswrapper[4689]: I1210 12:17:49.497465 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2h8hs" Dec 10 12:17:49 crc kubenswrapper[4689]: E1210 12:17:49.497833 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2h8hs" podUID="3a7c472c-c2dd-4a9d-aefb-48ac5cc196f8" Dec 10 12:17:49 crc kubenswrapper[4689]: I1210 12:17:49.497880 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 10 12:17:49 crc kubenswrapper[4689]: E1210 12:17:49.498075 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 10 12:17:49 crc kubenswrapper[4689]: E1210 12:17:49.498273 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 10 12:17:49 crc kubenswrapper[4689]: I1210 12:17:49.498514 4689 scope.go:117] "RemoveContainer" containerID="60b9e7f27ed577b35c0ecc9f2f205e63e25d0fc8c22e4e49e13cd395ff14a83e" Dec 10 12:17:50 crc kubenswrapper[4689]: I1210 12:17:50.497765 4689 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 10 12:17:50 crc kubenswrapper[4689]: E1210 12:17:50.498315 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 10 12:17:50 crc kubenswrapper[4689]: I1210 12:17:50.566323 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-r6wmt_3713b4f8-2ee3-4078-859a-dca17076f9a6/kube-multus/1.log" Dec 10 12:17:50 crc kubenswrapper[4689]: I1210 12:17:50.566429 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-r6wmt" event={"ID":"3713b4f8-2ee3-4078-859a-dca17076f9a6","Type":"ContainerStarted","Data":"155a09b4097a5aad76a37c3319f3a1e4925daeeba3803b5adba74775d48e8d02"} Dec 10 12:17:51 crc kubenswrapper[4689]: I1210 12:17:51.497783 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2h8hs" Dec 10 12:17:51 crc kubenswrapper[4689]: I1210 12:17:51.498121 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 10 12:17:51 crc kubenswrapper[4689]: E1210 12:17:51.498271 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2h8hs" podUID="3a7c472c-c2dd-4a9d-aefb-48ac5cc196f8" Dec 10 12:17:51 crc kubenswrapper[4689]: I1210 12:17:51.498546 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 10 12:17:51 crc kubenswrapper[4689]: E1210 12:17:51.498561 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 10 12:17:51 crc kubenswrapper[4689]: E1210 12:17:51.499749 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 10 12:17:52 crc kubenswrapper[4689]: I1210 12:17:52.498230 4689 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 10 12:17:52 crc kubenswrapper[4689]: E1210 12:17:52.499803 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 10 12:17:53 crc kubenswrapper[4689]: I1210 12:17:53.498115 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2h8hs" Dec 10 12:17:53 crc kubenswrapper[4689]: I1210 12:17:53.498139 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 10 12:17:53 crc kubenswrapper[4689]: E1210 12:17:53.498837 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 10 12:17:53 crc kubenswrapper[4689]: I1210 12:17:53.498139 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 10 12:17:53 crc kubenswrapper[4689]: E1210 12:17:53.499011 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 10 12:17:53 crc kubenswrapper[4689]: E1210 12:17:53.499196 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2h8hs" podUID="3a7c472c-c2dd-4a9d-aefb-48ac5cc196f8" Dec 10 12:17:54 crc kubenswrapper[4689]: I1210 12:17:54.498399 4689 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 10 12:17:54 crc kubenswrapper[4689]: I1210 12:17:54.500926 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Dec 10 12:17:54 crc kubenswrapper[4689]: I1210 12:17:54.501055 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Dec 10 12:17:54 crc kubenswrapper[4689]: I1210 12:17:54.632337 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeReady" Dec 10 12:17:54 crc kubenswrapper[4689]: I1210 12:17:54.684280 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-g8snw"] Dec 10 12:17:54 crc kubenswrapper[4689]: I1210 12:17:54.685396 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-g8snw" Dec 10 12:17:54 crc kubenswrapper[4689]: I1210 12:17:54.697425 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Dec 10 12:17:54 crc kubenswrapper[4689]: I1210 12:17:54.697511 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Dec 10 12:17:54 crc kubenswrapper[4689]: I1210 12:17:54.697514 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Dec 10 12:17:54 crc kubenswrapper[4689]: I1210 12:17:54.697444 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Dec 10 12:17:54 crc kubenswrapper[4689]: I1210 12:17:54.697103 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Dec 10 12:17:54 crc kubenswrapper[4689]: I1210 12:17:54.698242 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Dec 10 12:17:54 crc kubenswrapper[4689]: I1210 12:17:54.698352 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Dec 10 12:17:54 crc kubenswrapper[4689]: I1210 12:17:54.698701 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Dec 10 12:17:54 crc kubenswrapper[4689]: I1210 12:17:54.699106 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Dec 10 12:17:54 crc kubenswrapper[4689]: I1210 12:17:54.699264 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Dec 10 12:17:54 crc kubenswrapper[4689]: I1210 12:17:54.699923 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-56656f9798-cpfdd"] Dec 10 12:17:54 crc kubenswrapper[4689]: I1210 12:17:54.700888 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-cpfdd" Dec 10 12:17:54 crc kubenswrapper[4689]: I1210 12:17:54.705834 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-fz4nm"] Dec 10 12:17:54 crc kubenswrapper[4689]: I1210 12:17:54.706830 4689 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-fz4nm" Dec 10 12:17:54 crc kubenswrapper[4689]: I1210 12:17:54.708503 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Dec 10 12:17:54 crc kubenswrapper[4689]: I1210 12:17:54.709806 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-htx2z"] Dec 10 12:17:54 crc kubenswrapper[4689]: I1210 12:17:54.714574 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Dec 10 12:17:54 crc kubenswrapper[4689]: I1210 12:17:54.714820 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Dec 10 12:17:54 crc kubenswrapper[4689]: I1210 12:17:54.715332 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-mdqrt"] Dec 10 12:17:54 crc kubenswrapper[4689]: I1210 12:17:54.716001 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-rn6ml"] Dec 10 12:17:54 crc kubenswrapper[4689]: I1210 12:17:54.716093 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-htx2z" Dec 10 12:17:54 crc kubenswrapper[4689]: I1210 12:17:54.716381 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Dec 10 12:17:54 crc kubenswrapper[4689]: I1210 12:17:54.716887 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-rn6ml" Dec 10 12:17:54 crc kubenswrapper[4689]: I1210 12:17:54.717240 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Dec 10 12:17:54 crc kubenswrapper[4689]: I1210 12:17:54.717433 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Dec 10 12:17:54 crc kubenswrapper[4689]: I1210 12:17:54.717590 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Dec 10 12:17:54 crc kubenswrapper[4689]: I1210 12:17:54.717243 4689 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-mdqrt" Dec 10 12:17:54 crc kubenswrapper[4689]: I1210 12:17:54.717758 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Dec 10 12:17:54 crc kubenswrapper[4689]: I1210 12:17:54.717914 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Dec 10 12:17:54 crc kubenswrapper[4689]: I1210 12:17:54.718875 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Dec 10 12:17:54 crc kubenswrapper[4689]: I1210 12:17:54.724823 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-2gdrs"] Dec 10 12:17:54 crc kubenswrapper[4689]: I1210 12:17:54.725474 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-s8lmw"] Dec 10 12:17:54 crc kubenswrapper[4689]: I1210 12:17:54.726207 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-s8lmw" Dec 10 12:17:54 crc kubenswrapper[4689]: I1210 12:17:54.726878 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-2gdrs" Dec 10 12:17:54 crc kubenswrapper[4689]: I1210 12:17:54.727587 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-7jsv6"] Dec 10 12:17:54 crc kubenswrapper[4689]: I1210 12:17:54.727789 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7dbd6f55-bc16-49a6-b385-1ba0b50003cd-config\") pod \"apiserver-76f77b778f-g8snw\" (UID: \"7dbd6f55-bc16-49a6-b385-1ba0b50003cd\") " pod="openshift-apiserver/apiserver-76f77b778f-g8snw" Dec 10 12:17:54 crc kubenswrapper[4689]: I1210 12:17:54.727864 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/7dbd6f55-bc16-49a6-b385-1ba0b50003cd-audit-dir\") pod \"apiserver-76f77b778f-g8snw\" (UID: \"7dbd6f55-bc16-49a6-b385-1ba0b50003cd\") " pod="openshift-apiserver/apiserver-76f77b778f-g8snw" Dec 10 12:17:54 crc kubenswrapper[4689]: I1210 12:17:54.727906 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/7dbd6f55-bc16-49a6-b385-1ba0b50003cd-etcd-serving-ca\") pod \"apiserver-76f77b778f-g8snw\" (UID: \"7dbd6f55-bc16-49a6-b385-1ba0b50003cd\") " pod="openshift-apiserver/apiserver-76f77b778f-g8snw" Dec 10 12:17:54 crc kubenswrapper[4689]: I1210 12:17:54.727941 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7dbd6f55-bc16-49a6-b385-1ba0b50003cd-serving-cert\") pod \"apiserver-76f77b778f-g8snw\" (UID: \"7dbd6f55-bc16-49a6-b385-1ba0b50003cd\") " pod="openshift-apiserver/apiserver-76f77b778f-g8snw" Dec 10 12:17:54 crc kubenswrapper[4689]: I1210 12:17:54.728021 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7dbd6f55-bc16-49a6-b385-1ba0b50003cd-trusted-ca-bundle\") pod 
\"apiserver-76f77b778f-g8snw\" (UID: \"7dbd6f55-bc16-49a6-b385-1ba0b50003cd\") " pod="openshift-apiserver/apiserver-76f77b778f-g8snw" Dec 10 12:17:54 crc kubenswrapper[4689]: I1210 12:17:54.728079 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/7dbd6f55-bc16-49a6-b385-1ba0b50003cd-image-import-ca\") pod \"apiserver-76f77b778f-g8snw\" (UID: \"7dbd6f55-bc16-49a6-b385-1ba0b50003cd\") " pod="openshift-apiserver/apiserver-76f77b778f-g8snw" Dec 10 12:17:54 crc kubenswrapper[4689]: I1210 12:17:54.728128 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p7qvx\" (UniqueName: \"kubernetes.io/projected/7dbd6f55-bc16-49a6-b385-1ba0b50003cd-kube-api-access-p7qvx\") pod \"apiserver-76f77b778f-g8snw\" (UID: \"7dbd6f55-bc16-49a6-b385-1ba0b50003cd\") " pod="openshift-apiserver/apiserver-76f77b778f-g8snw" Dec 10 12:17:54 crc kubenswrapper[4689]: I1210 12:17:54.728085 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-7jsv6" Dec 10 12:17:54 crc kubenswrapper[4689]: I1210 12:17:54.728198 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/7dbd6f55-bc16-49a6-b385-1ba0b50003cd-encryption-config\") pod \"apiserver-76f77b778f-g8snw\" (UID: \"7dbd6f55-bc16-49a6-b385-1ba0b50003cd\") " pod="openshift-apiserver/apiserver-76f77b778f-g8snw" Dec 10 12:17:54 crc kubenswrapper[4689]: I1210 12:17:54.728254 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/7dbd6f55-bc16-49a6-b385-1ba0b50003cd-node-pullsecrets\") pod \"apiserver-76f77b778f-g8snw\" (UID: \"7dbd6f55-bc16-49a6-b385-1ba0b50003cd\") " pod="openshift-apiserver/apiserver-76f77b778f-g8snw" Dec 10 12:17:54 crc kubenswrapper[4689]: I1210 12:17:54.728284 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/7dbd6f55-bc16-49a6-b385-1ba0b50003cd-audit\") pod \"apiserver-76f77b778f-g8snw\" (UID: \"7dbd6f55-bc16-49a6-b385-1ba0b50003cd\") " pod="openshift-apiserver/apiserver-76f77b778f-g8snw" Dec 10 12:17:54 crc kubenswrapper[4689]: I1210 12:17:54.728329 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/7dbd6f55-bc16-49a6-b385-1ba0b50003cd-etcd-client\") pod \"apiserver-76f77b778f-g8snw\" (UID: \"7dbd6f55-bc16-49a6-b385-1ba0b50003cd\") " pod="openshift-apiserver/apiserver-76f77b778f-g8snw" Dec 10 12:17:54 crc kubenswrapper[4689]: I1210 12:17:54.729234 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-sfzm5"] Dec 10 12:17:54 crc kubenswrapper[4689]: I1210 12:17:54.729733 4689 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-sfzm5" Dec 10 12:17:54 crc kubenswrapper[4689]: I1210 12:17:54.735331 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Dec 10 12:17:54 crc kubenswrapper[4689]: I1210 12:17:54.735445 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Dec 10 12:17:54 crc kubenswrapper[4689]: I1210 12:17:54.745641 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-58897d9998-ggzn5"] Dec 10 12:17:54 crc kubenswrapper[4689]: I1210 12:17:54.746548 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-7pv69"] Dec 10 12:17:54 crc kubenswrapper[4689]: I1210 12:17:54.746814 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Dec 10 12:17:54 crc kubenswrapper[4689]: I1210 12:17:54.747626 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Dec 10 12:17:54 crc kubenswrapper[4689]: I1210 12:17:54.747690 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-7pv69" Dec 10 12:17:54 crc kubenswrapper[4689]: I1210 12:17:54.748459 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-ggzn5" Dec 10 12:17:54 crc kubenswrapper[4689]: I1210 12:17:54.752413 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-ms4wp"] Dec 10 12:17:54 crc kubenswrapper[4689]: I1210 12:17:54.758332 4689 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-ms4wp" Dec 10 12:17:54 crc kubenswrapper[4689]: I1210 12:17:54.777553 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Dec 10 12:17:54 crc kubenswrapper[4689]: I1210 12:17:54.777757 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Dec 10 12:17:54 crc kubenswrapper[4689]: I1210 12:17:54.777897 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Dec 10 12:17:54 crc kubenswrapper[4689]: I1210 12:17:54.777937 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Dec 10 12:17:54 crc kubenswrapper[4689]: I1210 12:17:54.777989 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Dec 10 12:17:54 crc kubenswrapper[4689]: I1210 12:17:54.778024 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Dec 10 12:17:54 crc kubenswrapper[4689]: I1210 12:17:54.778179 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Dec 10 12:17:54 crc kubenswrapper[4689]: I1210 12:17:54.778187 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Dec 10 12:17:54 crc kubenswrapper[4689]: I1210 12:17:54.778198 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Dec 10 12:17:54 crc kubenswrapper[4689]: I1210 12:17:54.778378 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-mx2mm"] Dec 10 12:17:54 crc kubenswrapper[4689]: I1210 12:17:54.778396 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Dec 10 12:17:54 crc kubenswrapper[4689]: I1210 12:17:54.778444 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Dec 10 12:17:54 crc kubenswrapper[4689]: I1210 12:17:54.778463 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Dec 10 12:17:54 crc kubenswrapper[4689]: I1210 12:17:54.778527 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Dec 10 12:17:54 crc kubenswrapper[4689]: I1210 12:17:54.778573 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Dec 10 12:17:54 crc kubenswrapper[4689]: I1210 12:17:54.778663 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Dec 10 12:17:54 crc kubenswrapper[4689]: I1210 12:17:54.778673 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Dec 10 12:17:54 crc kubenswrapper[4689]: I1210 12:17:54.778743 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Dec 10 
12:17:54 crc kubenswrapper[4689]: I1210 12:17:54.778785 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Dec 10 12:17:54 crc kubenswrapper[4689]: I1210 12:17:54.778789 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Dec 10 12:17:54 crc kubenswrapper[4689]: I1210 12:17:54.778675 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Dec 10 12:17:54 crc kubenswrapper[4689]: I1210 12:17:54.778900 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Dec 10 12:17:54 crc kubenswrapper[4689]: I1210 12:17:54.778920 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-mx2mm" Dec 10 12:17:54 crc kubenswrapper[4689]: I1210 12:17:54.778993 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Dec 10 12:17:54 crc kubenswrapper[4689]: I1210 12:17:54.779019 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Dec 10 12:17:54 crc kubenswrapper[4689]: I1210 12:17:54.778849 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-f9d7485db-9s8ps"] Dec 10 12:17:54 crc kubenswrapper[4689]: I1210 12:17:54.779097 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Dec 10 12:17:54 crc kubenswrapper[4689]: I1210 12:17:54.779229 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Dec 10 12:17:54 crc kubenswrapper[4689]: I1210 12:17:54.779706 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-7954f5f757-kpp45"] Dec 10 12:17:54 crc kubenswrapper[4689]: I1210 12:17:54.780145 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-kpp45" Dec 10 12:17:54 crc kubenswrapper[4689]: I1210 12:17:54.780240 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Dec 10 12:17:54 crc kubenswrapper[4689]: I1210 12:17:54.780386 4689 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-9s8ps" Dec 10 12:17:54 crc kubenswrapper[4689]: I1210 12:17:54.780448 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Dec 10 12:17:54 crc kubenswrapper[4689]: I1210 12:17:54.780622 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Dec 10 12:17:54 crc kubenswrapper[4689]: I1210 12:17:54.780696 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Dec 10 12:17:54 crc kubenswrapper[4689]: I1210 12:17:54.781667 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Dec 10 12:17:54 crc kubenswrapper[4689]: I1210 12:17:54.782382 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Dec 10 12:17:54 crc kubenswrapper[4689]: I1210 12:17:54.782490 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Dec 10 12:17:54 crc kubenswrapper[4689]: I1210 12:17:54.782590 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Dec 10 12:17:54 crc kubenswrapper[4689]: I1210 12:17:54.782687 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Dec 10 12:17:54 crc kubenswrapper[4689]: I1210 12:17:54.782791 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Dec 10 12:17:54 crc kubenswrapper[4689]: I1210 12:17:54.783224 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Dec 10 12:17:54 crc kubenswrapper[4689]: I1210 12:17:54.783293 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Dec 10 12:17:54 crc kubenswrapper[4689]: I1210 12:17:54.783322 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Dec 10 12:17:54 crc kubenswrapper[4689]: I1210 12:17:54.783298 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Dec 10 12:17:54 crc kubenswrapper[4689]: I1210 12:17:54.783501 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Dec 10 12:17:54 crc kubenswrapper[4689]: I1210 12:17:54.783530 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Dec 10 12:17:54 crc kubenswrapper[4689]: I1210 12:17:54.783569 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Dec 10 12:17:54 crc kubenswrapper[4689]: I1210 12:17:54.783580 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Dec 10 12:17:54 crc kubenswrapper[4689]: I1210 12:17:54.783634 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Dec 10 12:17:54 crc kubenswrapper[4689]: I1210 12:17:54.783536 4689 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-machine-api"/"kube-rbac-proxy" Dec 10 12:17:54 crc kubenswrapper[4689]: I1210 12:17:54.783687 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Dec 10 12:17:54 crc kubenswrapper[4689]: I1210 12:17:54.783743 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Dec 10 12:17:54 crc kubenswrapper[4689]: I1210 12:17:54.783813 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Dec 10 12:17:54 crc kubenswrapper[4689]: I1210 12:17:54.783819 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Dec 10 12:17:54 crc kubenswrapper[4689]: I1210 12:17:54.788123 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Dec 10 12:17:54 crc kubenswrapper[4689]: I1210 12:17:54.788452 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Dec 10 12:17:54 crc kubenswrapper[4689]: I1210 12:17:54.788619 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-jw2qz"] Dec 10 12:17:54 crc kubenswrapper[4689]: I1210 12:17:54.788663 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Dec 10 12:17:54 crc kubenswrapper[4689]: I1210 12:17:54.789895 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Dec 10 12:17:54 crc kubenswrapper[4689]: I1210 12:17:54.790183 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Dec 10 12:17:54 crc kubenswrapper[4689]: I1210 12:17:54.790341 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Dec 10 12:17:54 crc kubenswrapper[4689]: I1210 12:17:54.790495 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Dec 10 12:17:54 crc kubenswrapper[4689]: I1210 12:17:54.801201 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-shthb"] Dec 10 12:17:54 crc kubenswrapper[4689]: I1210 12:17:54.801639 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-shthb" Dec 10 12:17:54 crc kubenswrapper[4689]: I1210 12:17:54.801901 4689 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-jw2qz" Dec 10 12:17:54 crc kubenswrapper[4689]: I1210 12:17:54.807859 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Dec 10 12:17:54 crc kubenswrapper[4689]: I1210 12:17:54.808771 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Dec 10 12:17:54 crc kubenswrapper[4689]: I1210 12:17:54.809077 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Dec 10 12:17:54 crc kubenswrapper[4689]: I1210 12:17:54.809301 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-qc8tj"] Dec 10 12:17:54 crc kubenswrapper[4689]: I1210 12:17:54.809326 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Dec 10 12:17:54 crc kubenswrapper[4689]: I1210 12:17:54.809951 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Dec 10 12:17:54 crc kubenswrapper[4689]: I1210 12:17:54.810383 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-qc8tj" Dec 10 12:17:54 crc kubenswrapper[4689]: I1210 12:17:54.811495 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Dec 10 12:17:54 crc kubenswrapper[4689]: I1210 12:17:54.812316 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Dec 10 12:17:54 crc kubenswrapper[4689]: I1210 12:17:54.813146 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Dec 10 12:17:54 crc kubenswrapper[4689]: I1210 12:17:54.813206 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Dec 10 12:17:54 crc kubenswrapper[4689]: I1210 12:17:54.813241 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Dec 10 12:17:54 crc kubenswrapper[4689]: I1210 12:17:54.813345 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Dec 10 12:17:54 crc kubenswrapper[4689]: I1210 12:17:54.813474 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Dec 10 12:17:54 crc kubenswrapper[4689]: I1210 12:17:54.814082 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Dec 10 12:17:54 crc kubenswrapper[4689]: I1210 12:17:54.833059 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-7v6dg"] Dec 10 12:17:54 crc kubenswrapper[4689]: I1210 12:17:54.834186 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-dpmvq"] Dec 10 12:17:54 crc kubenswrapper[4689]: I1210 12:17:54.834933 4689 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-dpmvq" Dec 10 12:17:54 crc kubenswrapper[4689]: I1210 12:17:54.835546 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-7v6dg" Dec 10 12:17:54 crc kubenswrapper[4689]: I1210 12:17:54.837190 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Dec 10 12:17:54 crc kubenswrapper[4689]: I1210 12:17:54.837384 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Dec 10 12:17:54 crc kubenswrapper[4689]: I1210 12:17:54.837563 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Dec 10 12:17:54 crc kubenswrapper[4689]: I1210 12:17:54.837758 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Dec 10 12:17:54 crc kubenswrapper[4689]: I1210 12:17:54.837825 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f02b797-c51d-4614-b1d6-b2b3294342d0-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-ms4wp\" (UID: \"8f02b797-c51d-4614-b1d6-b2b3294342d0\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-ms4wp" Dec 10 12:17:54 crc kubenswrapper[4689]: I1210 12:17:54.837872 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f02b797-c51d-4614-b1d6-b2b3294342d0-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-ms4wp\" (UID: \"8f02b797-c51d-4614-b1d6-b2b3294342d0\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-ms4wp" Dec 10 12:17:54 crc kubenswrapper[4689]: I1210 12:17:54.837938 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7dbd6f55-bc16-49a6-b385-1ba0b50003cd-serving-cert\") pod \"apiserver-76f77b778f-g8snw\" (UID: \"7dbd6f55-bc16-49a6-b385-1ba0b50003cd\") " pod="openshift-apiserver/apiserver-76f77b778f-g8snw" Dec 10 12:17:54 crc kubenswrapper[4689]: I1210 12:17:54.838005 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/d9195e8d-4ad5-406b-b2fe-1c107df51433-audit-dir\") pod \"apiserver-7bbb656c7d-mdqrt\" (UID: \"d9195e8d-4ad5-406b-b2fe-1c107df51433\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-mdqrt" Dec 10 12:17:54 crc kubenswrapper[4689]: I1210 12:17:54.838041 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2e2364cc-104f-4237-9ad5-c121a1c3fba6-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-7jsv6\" (UID: \"2e2364cc-104f-4237-9ad5-c121a1c3fba6\") " pod="openshift-authentication/oauth-openshift-558db77b4-7jsv6" Dec 10 12:17:54 crc kubenswrapper[4689]: I1210 12:17:54.838067 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1eb8e1ca-e0fa-448e-a502-a75ac460a5ca-config\") pod \"machine-approver-56656f9798-cpfdd\" 
(UID: \"1eb8e1ca-e0fa-448e-a502-a75ac460a5ca\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-cpfdd" Dec 10 12:17:54 crc kubenswrapper[4689]: I1210 12:17:54.838184 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bvwvd\" (UniqueName: \"kubernetes.io/projected/d9195e8d-4ad5-406b-b2fe-1c107df51433-kube-api-access-bvwvd\") pod \"apiserver-7bbb656c7d-mdqrt\" (UID: \"d9195e8d-4ad5-406b-b2fe-1c107df51433\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-mdqrt" Dec 10 12:17:54 crc kubenswrapper[4689]: I1210 12:17:54.838201 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Dec 10 12:17:54 crc kubenswrapper[4689]: I1210 12:17:54.838220 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9a3f8229-cf04-4e18-8dc3-c9a63c4b46b4-config\") pod \"openshift-apiserver-operator-796bbdcf4f-rn6ml\" (UID: \"9a3f8229-cf04-4e18-8dc3-c9a63c4b46b4\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-rn6ml" Dec 10 12:17:54 crc kubenswrapper[4689]: I1210 12:17:54.838273 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/7dbd6f55-bc16-49a6-b385-1ba0b50003cd-image-import-ca\") pod \"apiserver-76f77b778f-g8snw\" (UID: \"7dbd6f55-bc16-49a6-b385-1ba0b50003cd\") " pod="openshift-apiserver/apiserver-76f77b778f-g8snw" Dec 10 12:17:54 crc kubenswrapper[4689]: I1210 12:17:54.838301 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/d9195e8d-4ad5-406b-b2fe-1c107df51433-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-mdqrt\" (UID: \"d9195e8d-4ad5-406b-b2fe-1c107df51433\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-mdqrt" Dec 10 12:17:54 crc kubenswrapper[4689]: I1210 12:17:54.838326 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4hvcm\" (UniqueName: \"kubernetes.io/projected/ee767cde-d698-4c01-b221-33c158999e60-kube-api-access-4hvcm\") pod \"console-f9d7485db-9s8ps\" (UID: \"ee767cde-d698-4c01-b221-33c158999e60\") " pod="openshift-console/console-f9d7485db-9s8ps" Dec 10 12:17:54 crc kubenswrapper[4689]: I1210 12:17:54.838373 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b81b90f7-72e4-4a8d-881f-d117310a4a25-serving-cert\") pod \"openshift-config-operator-7777fb866f-fz4nm\" (UID: \"b81b90f7-72e4-4a8d-881f-d117310a4a25\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-fz4nm" Dec 10 12:17:54 crc kubenswrapper[4689]: I1210 12:17:54.838399 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/fd353a2b-c325-44b6-9e25-6a4c39213f9e-client-ca\") pod \"controller-manager-879f6c89f-htx2z\" (UID: \"fd353a2b-c325-44b6-9e25-6a4c39213f9e\") " pod="openshift-controller-manager/controller-manager-879f6c89f-htx2z" Dec 10 12:17:54 crc kubenswrapper[4689]: I1210 12:17:54.838425 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p7qvx\" (UniqueName: 
\"kubernetes.io/projected/7dbd6f55-bc16-49a6-b385-1ba0b50003cd-kube-api-access-p7qvx\") pod \"apiserver-76f77b778f-g8snw\" (UID: \"7dbd6f55-bc16-49a6-b385-1ba0b50003cd\") " pod="openshift-apiserver/apiserver-76f77b778f-g8snw" Dec 10 12:17:54 crc kubenswrapper[4689]: I1210 12:17:54.838450 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/2e2364cc-104f-4237-9ad5-c121a1c3fba6-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-7jsv6\" (UID: \"2e2364cc-104f-4237-9ad5-c121a1c3fba6\") " pod="openshift-authentication/oauth-openshift-558db77b4-7jsv6" Dec 10 12:17:54 crc kubenswrapper[4689]: I1210 12:17:54.838534 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9a3f8229-cf04-4e18-8dc3-c9a63c4b46b4-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-rn6ml\" (UID: \"9a3f8229-cf04-4e18-8dc3-c9a63c4b46b4\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-rn6ml" Dec 10 12:17:54 crc kubenswrapper[4689]: I1210 12:17:54.838566 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/ee767cde-d698-4c01-b221-33c158999e60-oauth-serving-cert\") pod \"console-f9d7485db-9s8ps\" (UID: \"ee767cde-d698-4c01-b221-33c158999e60\") " pod="openshift-console/console-f9d7485db-9s8ps" Dec 10 12:17:54 crc kubenswrapper[4689]: I1210 12:17:54.838592 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/86359667-d51a-449a-925d-375a1e98a8ca-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-2gdrs\" (UID: \"86359667-d51a-449a-925d-375a1e98a8ca\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-2gdrs" Dec 10 12:17:54 crc kubenswrapper[4689]: I1210 12:17:54.838671 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tmpvl\" (UniqueName: \"kubernetes.io/projected/cd45363e-de7e-4e91-afe1-f82948764f4f-kube-api-access-tmpvl\") pod \"downloads-7954f5f757-kpp45\" (UID: \"cd45363e-de7e-4e91-afe1-f82948764f4f\") " pod="openshift-console/downloads-7954f5f757-kpp45" Dec 10 12:17:54 crc kubenswrapper[4689]: I1210 12:17:54.838700 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/481fe942-34c8-42d9-9925-a674da9ca453-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-mx2mm\" (UID: \"481fe942-34c8-42d9-9925-a674da9ca453\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-mx2mm" Dec 10 12:17:54 crc kubenswrapper[4689]: I1210 12:17:54.838733 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/7dbd6f55-bc16-49a6-b385-1ba0b50003cd-encryption-config\") pod \"apiserver-76f77b778f-g8snw\" (UID: \"7dbd6f55-bc16-49a6-b385-1ba0b50003cd\") " pod="openshift-apiserver/apiserver-76f77b778f-g8snw" Dec 10 12:17:54 crc kubenswrapper[4689]: I1210 12:17:54.838762 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6t6xj\" (UniqueName: 
\"kubernetes.io/projected/3eece420-e32c-4ea1-91f8-cc96bf144467-kube-api-access-6t6xj\") pod \"machine-api-operator-5694c8668f-s8lmw\" (UID: \"3eece420-e32c-4ea1-91f8-cc96bf144467\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-s8lmw" Dec 10 12:17:54 crc kubenswrapper[4689]: I1210 12:17:54.838856 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5w5pl\" (UniqueName: \"kubernetes.io/projected/b81b90f7-72e4-4a8d-881f-d117310a4a25-kube-api-access-5w5pl\") pod \"openshift-config-operator-7777fb866f-fz4nm\" (UID: \"b81b90f7-72e4-4a8d-881f-d117310a4a25\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-fz4nm" Dec 10 12:17:54 crc kubenswrapper[4689]: I1210 12:17:54.838885 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/2e2364cc-104f-4237-9ad5-c121a1c3fba6-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-7jsv6\" (UID: \"2e2364cc-104f-4237-9ad5-c121a1c3fba6\") " pod="openshift-authentication/oauth-openshift-558db77b4-7jsv6" Dec 10 12:17:54 crc kubenswrapper[4689]: I1210 12:17:54.838915 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t5pth\" (UniqueName: \"kubernetes.io/projected/2aed137f-a24c-42c9-a0af-9ada3eeffa88-kube-api-access-t5pth\") pod \"console-operator-58897d9998-ggzn5\" (UID: \"2aed137f-a24c-42c9-a0af-9ada3eeffa88\") " pod="openshift-console-operator/console-operator-58897d9998-ggzn5" Dec 10 12:17:54 crc kubenswrapper[4689]: I1210 12:17:54.839031 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/7dbd6f55-bc16-49a6-b385-1ba0b50003cd-etcd-client\") pod \"apiserver-76f77b778f-g8snw\" (UID: \"7dbd6f55-bc16-49a6-b385-1ba0b50003cd\") " pod="openshift-apiserver/apiserver-76f77b778f-g8snw" Dec 10 12:17:54 crc kubenswrapper[4689]: I1210 12:17:54.839058 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/3eece420-e32c-4ea1-91f8-cc96bf144467-images\") pod \"machine-api-operator-5694c8668f-s8lmw\" (UID: \"3eece420-e32c-4ea1-91f8-cc96bf144467\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-s8lmw" Dec 10 12:17:54 crc kubenswrapper[4689]: I1210 12:17:54.839099 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2aed137f-a24c-42c9-a0af-9ada3eeffa88-trusted-ca\") pod \"console-operator-58897d9998-ggzn5\" (UID: \"2aed137f-a24c-42c9-a0af-9ada3eeffa88\") " pod="openshift-console-operator/console-operator-58897d9998-ggzn5" Dec 10 12:17:54 crc kubenswrapper[4689]: I1210 12:17:54.839267 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/2e2364cc-104f-4237-9ad5-c121a1c3fba6-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-7jsv6\" (UID: \"2e2364cc-104f-4237-9ad5-c121a1c3fba6\") " pod="openshift-authentication/oauth-openshift-558db77b4-7jsv6" Dec 10 12:17:54 crc kubenswrapper[4689]: I1210 12:17:54.839292 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: 
\"kubernetes.io/secret/8f02b797-c51d-4614-b1d6-b2b3294342d0-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-ms4wp\" (UID: \"8f02b797-c51d-4614-b1d6-b2b3294342d0\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-ms4wp" Dec 10 12:17:54 crc kubenswrapper[4689]: I1210 12:17:54.839339 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/7dbd6f55-bc16-49a6-b385-1ba0b50003cd-audit-dir\") pod \"apiserver-76f77b778f-g8snw\" (UID: \"7dbd6f55-bc16-49a6-b385-1ba0b50003cd\") " pod="openshift-apiserver/apiserver-76f77b778f-g8snw" Dec 10 12:17:54 crc kubenswrapper[4689]: I1210 12:17:54.839433 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/7dbd6f55-bc16-49a6-b385-1ba0b50003cd-image-import-ca\") pod \"apiserver-76f77b778f-g8snw\" (UID: \"7dbd6f55-bc16-49a6-b385-1ba0b50003cd\") " pod="openshift-apiserver/apiserver-76f77b778f-g8snw" Dec 10 12:17:54 crc kubenswrapper[4689]: I1210 12:17:54.839469 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/2e2364cc-104f-4237-9ad5-c121a1c3fba6-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-7jsv6\" (UID: \"2e2364cc-104f-4237-9ad5-c121a1c3fba6\") " pod="openshift-authentication/oauth-openshift-558db77b4-7jsv6" Dec 10 12:17:54 crc kubenswrapper[4689]: I1210 12:17:54.839501 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qf4w7\" (UniqueName: \"kubernetes.io/projected/2e2364cc-104f-4237-9ad5-c121a1c3fba6-kube-api-access-qf4w7\") pod \"oauth-openshift-558db77b4-7jsv6\" (UID: \"2e2364cc-104f-4237-9ad5-c121a1c3fba6\") " pod="openshift-authentication/oauth-openshift-558db77b4-7jsv6" Dec 10 12:17:54 crc kubenswrapper[4689]: I1210 12:17:54.839526 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kxwpq\" (UniqueName: \"kubernetes.io/projected/d8990fd9-cdfb-4c98-80b3-794f1b371ee5-kube-api-access-kxwpq\") pod \"route-controller-manager-6576b87f9c-sfzm5\" (UID: \"d8990fd9-cdfb-4c98-80b3-794f1b371ee5\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-sfzm5" Dec 10 12:17:54 crc kubenswrapper[4689]: I1210 12:17:54.839859 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/762dce0f-3636-41ab-8d78-1b69e0fa714a-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-7pv69\" (UID: \"762dce0f-3636-41ab-8d78-1b69e0fa714a\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-7pv69" Dec 10 12:17:54 crc kubenswrapper[4689]: I1210 12:17:54.839897 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/7dbd6f55-bc16-49a6-b385-1ba0b50003cd-etcd-serving-ca\") pod \"apiserver-76f77b778f-g8snw\" (UID: \"7dbd6f55-bc16-49a6-b385-1ba0b50003cd\") " pod="openshift-apiserver/apiserver-76f77b778f-g8snw" Dec 10 12:17:54 crc kubenswrapper[4689]: I1210 12:17:54.840027 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: 
\"kubernetes.io/secret/2e2364cc-104f-4237-9ad5-c121a1c3fba6-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-7jsv6\" (UID: \"2e2364cc-104f-4237-9ad5-c121a1c3fba6\") " pod="openshift-authentication/oauth-openshift-558db77b4-7jsv6" Dec 10 12:17:54 crc kubenswrapper[4689]: I1210 12:17:54.840067 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/2e2364cc-104f-4237-9ad5-c121a1c3fba6-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-7jsv6\" (UID: \"2e2364cc-104f-4237-9ad5-c121a1c3fba6\") " pod="openshift-authentication/oauth-openshift-558db77b4-7jsv6" Dec 10 12:17:54 crc kubenswrapper[4689]: I1210 12:17:54.840133 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b7dbh\" (UniqueName: \"kubernetes.io/projected/9a3f8229-cf04-4e18-8dc3-c9a63c4b46b4-kube-api-access-b7dbh\") pod \"openshift-apiserver-operator-796bbdcf4f-rn6ml\" (UID: \"9a3f8229-cf04-4e18-8dc3-c9a63c4b46b4\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-rn6ml" Dec 10 12:17:54 crc kubenswrapper[4689]: I1210 12:17:54.840710 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/d9195e8d-4ad5-406b-b2fe-1c107df51433-etcd-client\") pod \"apiserver-7bbb656c7d-mdqrt\" (UID: \"d9195e8d-4ad5-406b-b2fe-1c107df51433\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-mdqrt" Dec 10 12:17:54 crc kubenswrapper[4689]: I1210 12:17:54.840829 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d9195e8d-4ad5-406b-b2fe-1c107df51433-serving-cert\") pod \"apiserver-7bbb656c7d-mdqrt\" (UID: \"d9195e8d-4ad5-406b-b2fe-1c107df51433\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-mdqrt" Dec 10 12:17:54 crc kubenswrapper[4689]: I1210 12:17:54.863083 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2aed137f-a24c-42c9-a0af-9ada3eeffa88-config\") pod \"console-operator-58897d9998-ggzn5\" (UID: \"2aed137f-a24c-42c9-a0af-9ada3eeffa88\") " pod="openshift-console-operator/console-operator-58897d9998-ggzn5" Dec 10 12:17:54 crc kubenswrapper[4689]: I1210 12:17:54.863600 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Dec 10 12:17:54 crc kubenswrapper[4689]: I1210 12:17:54.863947 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-9rk82"] Dec 10 12:17:54 crc kubenswrapper[4689]: I1210 12:17:54.864275 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7dbd6f55-bc16-49a6-b385-1ba0b50003cd-serving-cert\") pod \"apiserver-76f77b778f-g8snw\" (UID: \"7dbd6f55-bc16-49a6-b385-1ba0b50003cd\") " pod="openshift-apiserver/apiserver-76f77b778f-g8snw" Dec 10 12:17:54 crc kubenswrapper[4689]: I1210 12:17:54.864347 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/7dbd6f55-bc16-49a6-b385-1ba0b50003cd-audit-dir\") pod \"apiserver-76f77b778f-g8snw\" (UID: \"7dbd6f55-bc16-49a6-b385-1ba0b50003cd\") " pod="openshift-apiserver/apiserver-76f77b778f-g8snw" 
Dec 10 12:17:54 crc kubenswrapper[4689]: I1210 12:17:54.864497 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-9rk82"
Dec 10 12:17:54 crc kubenswrapper[4689]: I1210 12:17:54.863961 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt"
Dec 10 12:17:54 crc kubenswrapper[4689]: I1210 12:17:54.864936 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/7dbd6f55-bc16-49a6-b385-1ba0b50003cd-etcd-serving-ca\") pod \"apiserver-76f77b778f-g8snw\" (UID: \"7dbd6f55-bc16-49a6-b385-1ba0b50003cd\") " pod="openshift-apiserver/apiserver-76f77b778f-g8snw"
Dec 10 12:17:54 crc kubenswrapper[4689]: I1210 12:17:54.864500 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/7dbd6f55-bc16-49a6-b385-1ba0b50003cd-etcd-client\") pod \"apiserver-76f77b778f-g8snw\" (UID: \"7dbd6f55-bc16-49a6-b385-1ba0b50003cd\") " pod="openshift-apiserver/apiserver-76f77b778f-g8snw"
Dec 10 12:17:54 crc kubenswrapper[4689]: I1210 12:17:54.864328 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt"
Dec 10 12:17:54 crc kubenswrapper[4689]: I1210 12:17:54.865734 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wvxwm\" (UniqueName: \"kubernetes.io/projected/1eb8e1ca-e0fa-448e-a502-a75ac460a5ca-kube-api-access-wvxwm\") pod \"machine-approver-56656f9798-cpfdd\" (UID: \"1eb8e1ca-e0fa-448e-a502-a75ac460a5ca\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-cpfdd"
Dec 10 12:17:54 crc kubenswrapper[4689]: I1210 12:17:54.865784 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/2e2364cc-104f-4237-9ad5-c121a1c3fba6-audit-dir\") pod \"oauth-openshift-558db77b4-7jsv6\" (UID: \"2e2364cc-104f-4237-9ad5-c121a1c3fba6\") " pod="openshift-authentication/oauth-openshift-558db77b4-7jsv6"
Dec 10 12:17:54 crc kubenswrapper[4689]: I1210 12:17:54.865792 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca"
Dec 10 12:17:54 crc kubenswrapper[4689]: I1210 12:17:54.865931 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/b81b90f7-72e4-4a8d-881f-d117310a4a25-available-featuregates\") pod \"openshift-config-operator-7777fb866f-fz4nm\" (UID: \"b81b90f7-72e4-4a8d-881f-d117310a4a25\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-fz4nm"
Dec 10 12:17:54 crc kubenswrapper[4689]: I1210 12:17:54.865987 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config"
Dec 10 12:17:54 crc kubenswrapper[4689]: I1210 12:17:54.866111 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/7dbd6f55-bc16-49a6-b385-1ba0b50003cd-encryption-config\") pod \"apiserver-76f77b778f-g8snw\" (UID: \"7dbd6f55-bc16-49a6-b385-1ba0b50003cd\") " pod="openshift-apiserver/apiserver-76f77b778f-g8snw"
Dec 10 12:17:54 crc kubenswrapper[4689]: I1210 12:17:54.866148 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-5444994796-xmwb5"]
Dec 10 12:17:54 crc kubenswrapper[4689]: I1210 12:17:54.866187 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/fd353a2b-c325-44b6-9e25-6a4c39213f9e-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-htx2z\" (UID: \"fd353a2b-c325-44b6-9e25-6a4c39213f9e\") " pod="openshift-controller-manager/controller-manager-879f6c89f-htx2z"
Dec 10 12:17:54 crc kubenswrapper[4689]: I1210 12:17:54.866273 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets"
Dec 10 12:17:54 crc kubenswrapper[4689]: I1210 12:17:54.866408 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7dbd6f55-bc16-49a6-b385-1ba0b50003cd-trusted-ca-bundle\") pod \"apiserver-76f77b778f-g8snw\" (UID: \"7dbd6f55-bc16-49a6-b385-1ba0b50003cd\") " pod="openshift-apiserver/apiserver-76f77b778f-g8snw"
Dec 10 12:17:54 crc kubenswrapper[4689]: I1210 12:17:54.866455 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3eece420-e32c-4ea1-91f8-cc96bf144467-config\") pod \"machine-api-operator-5694c8668f-s8lmw\" (UID: \"3eece420-e32c-4ea1-91f8-cc96bf144467\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-s8lmw"
Dec 10 12:17:54 crc kubenswrapper[4689]: I1210 12:17:54.866534 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-xmwb5"
Dec 10 12:17:54 crc kubenswrapper[4689]: I1210 12:17:54.866564 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/481fe942-34c8-42d9-9925-a674da9ca453-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-mx2mm\" (UID: \"481fe942-34c8-42d9-9925-a674da9ca453\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-mx2mm"
Dec 10 12:17:54 crc kubenswrapper[4689]: I1210 12:17:54.866585 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x488g\" (UniqueName: \"kubernetes.io/projected/86359667-d51a-449a-925d-375a1e98a8ca-kube-api-access-x488g\") pod \"authentication-operator-69f744f599-2gdrs\" (UID: \"86359667-d51a-449a-925d-375a1e98a8ca\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-2gdrs"
Dec 10 12:17:54 crc kubenswrapper[4689]: I1210 12:17:54.866613 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fd353a2b-c325-44b6-9e25-6a4c39213f9e-config\") pod \"controller-manager-879f6c89f-htx2z\" (UID: \"fd353a2b-c325-44b6-9e25-6a4c39213f9e\") " pod="openshift-controller-manager/controller-manager-879f6c89f-htx2z"
Dec 10 12:17:54 crc kubenswrapper[4689]: I1210 12:17:54.866630 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6cm2v\" (UniqueName: \"kubernetes.io/projected/481fe942-34c8-42d9-9925-a674da9ca453-kube-api-access-6cm2v\") pod \"openshift-controller-manager-operator-756b6f6bc6-mx2mm\" (UID: \"481fe942-34c8-42d9-9925-a674da9ca453\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-mx2mm"
Dec 10 12:17:54 crc kubenswrapper[4689]: I1210 12:17:54.866695 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/2e2364cc-104f-4237-9ad5-c121a1c3fba6-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-7jsv6\" (UID: \"2e2364cc-104f-4237-9ad5-c121a1c3fba6\") " pod="openshift-authentication/oauth-openshift-558db77b4-7jsv6"
Dec 10 12:17:54 crc kubenswrapper[4689]: I1210 12:17:54.866716 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d8990fd9-cdfb-4c98-80b3-794f1b371ee5-serving-cert\") pod \"route-controller-manager-6576b87f9c-sfzm5\" (UID: \"d8990fd9-cdfb-4c98-80b3-794f1b371ee5\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-sfzm5"
Dec 10 12:17:54 crc kubenswrapper[4689]: I1210 12:17:54.866734 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/ee767cde-d698-4c01-b221-33c158999e60-console-serving-cert\") pod \"console-f9d7485db-9s8ps\" (UID: \"ee767cde-d698-4c01-b221-33c158999e60\") " pod="openshift-console/console-f9d7485db-9s8ps"
Dec 10 12:17:54 crc kubenswrapper[4689]: I1210 12:17:54.866750 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d8990fd9-cdfb-4c98-80b3-794f1b371ee5-client-ca\") pod \"route-controller-manager-6576b87f9c-sfzm5\" (UID: \"d8990fd9-cdfb-4c98-80b3-794f1b371ee5\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-sfzm5"
Dec 10 12:17:54 crc kubenswrapper[4689]: I1210 12:17:54.866776 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/d9195e8d-4ad5-406b-b2fe-1c107df51433-audit-policies\") pod \"apiserver-7bbb656c7d-mdqrt\" (UID: \"d9195e8d-4ad5-406b-b2fe-1c107df51433\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-mdqrt"
Dec 10 12:17:54 crc kubenswrapper[4689]: I1210 12:17:54.866792 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/d9195e8d-4ad5-406b-b2fe-1c107df51433-encryption-config\") pod \"apiserver-7bbb656c7d-mdqrt\" (UID: \"d9195e8d-4ad5-406b-b2fe-1c107df51433\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-mdqrt"
Dec 10 12:17:54 crc kubenswrapper[4689]: I1210 12:17:54.866816 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/86359667-d51a-449a-925d-375a1e98a8ca-serving-cert\") pod \"authentication-operator-69f744f599-2gdrs\" (UID: \"86359667-d51a-449a-925d-375a1e98a8ca\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-2gdrs"
Dec 10 12:17:54 crc kubenswrapper[4689]: I1210 12:17:54.866853 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/ee767cde-d698-4c01-b221-33c158999e60-console-oauth-config\") pod \"console-f9d7485db-9s8ps\" (UID: \"ee767cde-d698-4c01-b221-33c158999e60\") " pod="openshift-console/console-f9d7485db-9s8ps"
Dec 10 12:17:54 crc kubenswrapper[4689]: I1210 12:17:54.866869 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/1eb8e1ca-e0fa-448e-a502-a75ac460a5ca-machine-approver-tls\") pod \"machine-approver-56656f9798-cpfdd\" (UID: \"1eb8e1ca-e0fa-448e-a502-a75ac460a5ca\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-cpfdd"
Dec 10 12:17:54 crc kubenswrapper[4689]: I1210 12:17:54.866902 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/2e2364cc-104f-4237-9ad5-c121a1c3fba6-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-7jsv6\" (UID: \"2e2364cc-104f-4237-9ad5-c121a1c3fba6\") " pod="openshift-authentication/oauth-openshift-558db77b4-7jsv6"
Dec 10 12:17:54 crc kubenswrapper[4689]: I1210 12:17:54.866920 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/1eb8e1ca-e0fa-448e-a502-a75ac460a5ca-auth-proxy-config\") pod \"machine-approver-56656f9798-cpfdd\" (UID: \"1eb8e1ca-e0fa-448e-a502-a75ac460a5ca\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-cpfdd"
Dec 10 12:17:54 crc kubenswrapper[4689]: I1210 12:17:54.866893 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-lkfnf"]
Dec 10 12:17:54 crc kubenswrapper[4689]: I1210 12:17:54.866934 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d9195e8d-4ad5-406b-b2fe-1c107df51433-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-mdqrt\" (UID: \"d9195e8d-4ad5-406b-b2fe-1c107df51433\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-mdqrt"
Dec 10 12:17:54 crc kubenswrapper[4689]: I1210 12:17:54.867447 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2aed137f-a24c-42c9-a0af-9ada3eeffa88-serving-cert\") pod \"console-operator-58897d9998-ggzn5\" (UID: \"2aed137f-a24c-42c9-a0af-9ada3eeffa88\") " pod="openshift-console-operator/console-operator-58897d9998-ggzn5"
Dec 10 12:17:54 crc kubenswrapper[4689]: I1210 12:17:54.867447 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7dbd6f55-bc16-49a6-b385-1ba0b50003cd-trusted-ca-bundle\") pod \"apiserver-76f77b778f-g8snw\" (UID: \"7dbd6f55-bc16-49a6-b385-1ba0b50003cd\") " pod="openshift-apiserver/apiserver-76f77b778f-g8snw"
Dec 10 12:17:54 crc kubenswrapper[4689]: I1210 12:17:54.867481 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/7dbd6f55-bc16-49a6-b385-1ba0b50003cd-node-pullsecrets\") pod \"apiserver-76f77b778f-g8snw\" (UID: \"7dbd6f55-bc16-49a6-b385-1ba0b50003cd\") " pod="openshift-apiserver/apiserver-76f77b778f-g8snw"
Dec 10 12:17:54 crc kubenswrapper[4689]: I1210 12:17:54.867523 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-lkfnf"
Dec 10 12:17:54 crc kubenswrapper[4689]: I1210 12:17:54.867534 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/7dbd6f55-bc16-49a6-b385-1ba0b50003cd-audit\") pod \"apiserver-76f77b778f-g8snw\" (UID: \"7dbd6f55-bc16-49a6-b385-1ba0b50003cd\") " pod="openshift-apiserver/apiserver-76f77b778f-g8snw"
Dec 10 12:17:54 crc kubenswrapper[4689]: I1210 12:17:54.867566 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lxbh6\" (UniqueName: \"kubernetes.io/projected/fd353a2b-c325-44b6-9e25-6a4c39213f9e-kube-api-access-lxbh6\") pod \"controller-manager-879f6c89f-htx2z\" (UID: \"fd353a2b-c325-44b6-9e25-6a4c39213f9e\") " pod="openshift-controller-manager/controller-manager-879f6c89f-htx2z"
Dec 10 12:17:54 crc kubenswrapper[4689]: I1210 12:17:54.867525 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/7dbd6f55-bc16-49a6-b385-1ba0b50003cd-node-pullsecrets\") pod \"apiserver-76f77b778f-g8snw\" (UID: \"7dbd6f55-bc16-49a6-b385-1ba0b50003cd\") " pod="openshift-apiserver/apiserver-76f77b778f-g8snw"
Dec 10 12:17:54 crc kubenswrapper[4689]: I1210 12:17:54.867607 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/2e2364cc-104f-4237-9ad5-c121a1c3fba6-audit-policies\") pod \"oauth-openshift-558db77b4-7jsv6\" (UID: \"2e2364cc-104f-4237-9ad5-c121a1c3fba6\") " pod="openshift-authentication/oauth-openshift-558db77b4-7jsv6"
Dec 10 12:17:54 crc kubenswrapper[4689]: I1210 12:17:54.867659 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/86359667-d51a-449a-925d-375a1e98a8ca-config\") pod \"authentication-operator-69f744f599-2gdrs\" (UID: \"86359667-d51a-449a-925d-375a1e98a8ca\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-2gdrs"
Dec 10 12:17:54 crc kubenswrapper[4689]: I1210 12:17:54.867691 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ee767cde-d698-4c01-b221-33c158999e60-trusted-ca-bundle\") pod \"console-f9d7485db-9s8ps\" (UID: \"ee767cde-d698-4c01-b221-33c158999e60\") " pod="openshift-console/console-f9d7485db-9s8ps"
Dec 10 12:17:54 crc kubenswrapper[4689]: I1210 12:17:54.867722 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/3eece420-e32c-4ea1-91f8-cc96bf144467-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-s8lmw\" (UID: \"3eece420-e32c-4ea1-91f8-cc96bf144467\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-s8lmw"
Dec 10 12:17:54 crc kubenswrapper[4689]: I1210 12:17:54.867747 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n6jpw\" (UniqueName: \"kubernetes.io/projected/8f02b797-c51d-4614-b1d6-b2b3294342d0-kube-api-access-n6jpw\") pod \"cluster-image-registry-operator-dc59b4c8b-ms4wp\" (UID: \"8f02b797-c51d-4614-b1d6-b2b3294342d0\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-ms4wp"
Dec 10 12:17:54 crc kubenswrapper[4689]: I1210 12:17:54.867773 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fd353a2b-c325-44b6-9e25-6a4c39213f9e-serving-cert\") pod \"controller-manager-879f6c89f-htx2z\" (UID: \"fd353a2b-c325-44b6-9e25-6a4c39213f9e\") " pod="openshift-controller-manager/controller-manager-879f6c89f-htx2z"
Dec 10 12:17:54 crc kubenswrapper[4689]: I1210 12:17:54.867800 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7dbd6f55-bc16-49a6-b385-1ba0b50003cd-config\") pod \"apiserver-76f77b778f-g8snw\" (UID: \"7dbd6f55-bc16-49a6-b385-1ba0b50003cd\") " pod="openshift-apiserver/apiserver-76f77b778f-g8snw"
Dec 10 12:17:54 crc kubenswrapper[4689]: I1210 12:17:54.867822 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/ee767cde-d698-4c01-b221-33c158999e60-service-ca\") pod \"console-f9d7485db-9s8ps\" (UID: \"ee767cde-d698-4c01-b221-33c158999e60\") " pod="openshift-console/console-f9d7485db-9s8ps"
Dec 10 12:17:54 crc kubenswrapper[4689]: I1210 12:17:54.867846 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/2e2364cc-104f-4237-9ad5-c121a1c3fba6-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-7jsv6\" (UID: \"2e2364cc-104f-4237-9ad5-c121a1c3fba6\") " pod="openshift-authentication/oauth-openshift-558db77b4-7jsv6"
Dec 10 12:17:54 crc kubenswrapper[4689]: I1210 12:17:54.867870 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d8990fd9-cdfb-4c98-80b3-794f1b371ee5-config\") pod \"route-controller-manager-6576b87f9c-sfzm5\" (UID: \"d8990fd9-cdfb-4c98-80b3-794f1b371ee5\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-sfzm5"
Dec 10 12:17:54 crc kubenswrapper[4689]: I1210 12:17:54.867885 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt"
Dec 10 12:17:54 crc kubenswrapper[4689]: I1210 12:17:54.867894 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tg4xk\" (UniqueName: \"kubernetes.io/projected/762dce0f-3636-41ab-8d78-1b69e0fa714a-kube-api-access-tg4xk\") pod \"cluster-samples-operator-665b6dd947-7pv69\" (UID: \"762dce0f-3636-41ab-8d78-1b69e0fa714a\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-7pv69"
Dec 10 12:17:54 crc kubenswrapper[4689]: I1210 12:17:54.867919 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/ee767cde-d698-4c01-b221-33c158999e60-console-config\") pod \"console-f9d7485db-9s8ps\" (UID: \"ee767cde-d698-4c01-b221-33c158999e60\") " pod="openshift-console/console-f9d7485db-9s8ps"
Dec 10 12:17:54 crc kubenswrapper[4689]: I1210 12:17:54.867941 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/86359667-d51a-449a-925d-375a1e98a8ca-service-ca-bundle\") pod \"authentication-operator-69f744f599-2gdrs\" (UID: \"86359667-d51a-449a-925d-375a1e98a8ca\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-2gdrs"
Dec 10 12:17:54 crc kubenswrapper[4689]: I1210 12:17:54.867989 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/2e2364cc-104f-4237-9ad5-c121a1c3fba6-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-7jsv6\" (UID: \"2e2364cc-104f-4237-9ad5-c121a1c3fba6\") " pod="openshift-authentication/oauth-openshift-558db77b4-7jsv6"
Dec 10 12:17:54 crc kubenswrapper[4689]: I1210 12:17:54.868034 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-gz5pm"]
Dec 10 12:17:54 crc kubenswrapper[4689]: I1210 12:17:54.868160 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/7dbd6f55-bc16-49a6-b385-1ba0b50003cd-audit\") pod \"apiserver-76f77b778f-g8snw\" (UID: \"7dbd6f55-bc16-49a6-b385-1ba0b50003cd\") " pod="openshift-apiserver/apiserver-76f77b778f-g8snw"
Dec 10 12:17:54 crc kubenswrapper[4689]: I1210 12:17:54.868465 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-gz5pm"
Dec 10 12:17:54 crc kubenswrapper[4689]: I1210 12:17:54.868983 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle"
Dec 10 12:17:54 crc kubenswrapper[4689]: I1210 12:17:54.869511 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7dbd6f55-bc16-49a6-b385-1ba0b50003cd-config\") pod \"apiserver-76f77b778f-g8snw\" (UID: \"7dbd6f55-bc16-49a6-b385-1ba0b50003cd\") " pod="openshift-apiserver/apiserver-76f77b778f-g8snw"
Dec 10 12:17:54 crc kubenswrapper[4689]: I1210 12:17:54.869553 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-g8snw"]
Dec 10 12:17:54 crc kubenswrapper[4689]: I1210 12:17:54.870834 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-fz4nm"]
Dec 10 12:17:54 crc kubenswrapper[4689]: I1210 12:17:54.871392 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-mx2mm"]
Dec 10 12:17:54 crc kubenswrapper[4689]: I1210 12:17:54.872511 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template"
Dec 10 12:17:54 crc kubenswrapper[4689]: I1210 12:17:54.872674 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-sfzm5"]
Dec 10 12:17:54 crc kubenswrapper[4689]: I1210 12:17:54.879627 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-htx2z"]
Dec 10 12:17:54 crc kubenswrapper[4689]: I1210 12:17:54.880636 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-mdqrt"]
Dec 10 12:17:54 crc kubenswrapper[4689]: I1210 12:17:54.881519 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-9s8ps"]
Dec 10 12:17:54 crc kubenswrapper[4689]: I1210 12:17:54.882380 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api"
pods=["openshift-machine-api/machine-api-operator-5694c8668f-s8lmw"] Dec 10 12:17:54 crc kubenswrapper[4689]: I1210 12:17:54.884642 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-qbqzn"] Dec 10 12:17:54 crc kubenswrapper[4689]: I1210 12:17:54.885225 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-qbqzn" Dec 10 12:17:54 crc kubenswrapper[4689]: I1210 12:17:54.885475 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-zjvxn"] Dec 10 12:17:54 crc kubenswrapper[4689]: I1210 12:17:54.885798 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-zjvxn" Dec 10 12:17:54 crc kubenswrapper[4689]: I1210 12:17:54.886203 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-rn6ml"] Dec 10 12:17:54 crc kubenswrapper[4689]: I1210 12:17:54.887358 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-ggzn5"] Dec 10 12:17:54 crc kubenswrapper[4689]: I1210 12:17:54.887586 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Dec 10 12:17:54 crc kubenswrapper[4689]: I1210 12:17:54.888483 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29422815-gckmk"] Dec 10 12:17:54 crc kubenswrapper[4689]: I1210 12:17:54.889102 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29422815-gckmk" Dec 10 12:17:54 crc kubenswrapper[4689]: I1210 12:17:54.889451 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-69gng"] Dec 10 12:17:54 crc kubenswrapper[4689]: I1210 12:17:54.889784 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-69gng" Dec 10 12:17:54 crc kubenswrapper[4689]: I1210 12:17:54.890463 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-dpmvq"] Dec 10 12:17:54 crc kubenswrapper[4689]: I1210 12:17:54.891381 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-7v6dg"] Dec 10 12:17:54 crc kubenswrapper[4689]: I1210 12:17:54.892543 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-8s58v"] Dec 10 12:17:54 crc kubenswrapper[4689]: I1210 12:17:54.893179 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-8s58v" Dec 10 12:17:54 crc kubenswrapper[4689]: I1210 12:17:54.893854 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-25n7p"] Dec 10 12:17:54 crc kubenswrapper[4689]: I1210 12:17:54.894421 4689 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-25n7p" Dec 10 12:17:54 crc kubenswrapper[4689]: I1210 12:17:54.896685 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-z922x"] Dec 10 12:17:54 crc kubenswrapper[4689]: I1210 12:17:54.897781 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-z922x" Dec 10 12:17:54 crc kubenswrapper[4689]: I1210 12:17:54.902253 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-lkfnf"] Dec 10 12:17:54 crc kubenswrapper[4689]: I1210 12:17:54.904861 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-9wlmb"] Dec 10 12:17:54 crc kubenswrapper[4689]: I1210 12:17:54.909816 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-rn6lb"] Dec 10 12:17:54 crc kubenswrapper[4689]: I1210 12:17:54.911411 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Dec 10 12:17:54 crc kubenswrapper[4689]: I1210 12:17:54.911881 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-rn6lb" Dec 10 12:17:54 crc kubenswrapper[4689]: I1210 12:17:54.911573 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-9wlmb" Dec 10 12:17:54 crc kubenswrapper[4689]: I1210 12:17:54.912773 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-nqgsm"] Dec 10 12:17:54 crc kubenswrapper[4689]: I1210 12:17:54.914043 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-nqgsm" Dec 10 12:17:54 crc kubenswrapper[4689]: I1210 12:17:54.916933 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-tcsl7"] Dec 10 12:17:54 crc kubenswrapper[4689]: I1210 12:17:54.917897 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-tcsl7" Dec 10 12:17:54 crc kubenswrapper[4689]: I1210 12:17:54.919076 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-pks9r"] Dec 10 12:17:54 crc kubenswrapper[4689]: I1210 12:17:54.919900 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-pks9r" Dec 10 12:17:54 crc kubenswrapper[4689]: I1210 12:17:54.920086 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-kbbjg"] Dec 10 12:17:54 crc kubenswrapper[4689]: I1210 12:17:54.920710 4689 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-kbbjg" Dec 10 12:17:54 crc kubenswrapper[4689]: I1210 12:17:54.921363 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-shthb"] Dec 10 12:17:54 crc kubenswrapper[4689]: I1210 12:17:54.922845 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-km6qw"] Dec 10 12:17:54 crc kubenswrapper[4689]: I1210 12:17:54.923845 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-km6qw" Dec 10 12:17:54 crc kubenswrapper[4689]: I1210 12:17:54.924782 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-jw2qz"] Dec 10 12:17:54 crc kubenswrapper[4689]: I1210 12:17:54.926273 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-ms4wp"] Dec 10 12:17:54 crc kubenswrapper[4689]: I1210 12:17:54.927581 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-69gng"] Dec 10 12:17:54 crc kubenswrapper[4689]: I1210 12:17:54.928055 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Dec 10 12:17:54 crc kubenswrapper[4689]: I1210 12:17:54.929073 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-7jsv6"] Dec 10 12:17:54 crc kubenswrapper[4689]: I1210 12:17:54.930444 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-qbqzn"] Dec 10 12:17:54 crc kubenswrapper[4689]: I1210 12:17:54.933774 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-kpp45"] Dec 10 12:17:54 crc kubenswrapper[4689]: I1210 12:17:54.935655 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-8s58v"] Dec 10 12:17:54 crc kubenswrapper[4689]: I1210 12:17:54.937510 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-7pv69"] Dec 10 12:17:54 crc kubenswrapper[4689]: I1210 12:17:54.939043 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-d7pkc"] Dec 10 12:17:54 crc kubenswrapper[4689]: I1210 12:17:54.940063 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-d7pkc" Dec 10 12:17:54 crc kubenswrapper[4689]: I1210 12:17:54.940628 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-tzqjc"] Dec 10 12:17:54 crc kubenswrapper[4689]: I1210 12:17:54.941410 4689 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-tzqjc" Dec 10 12:17:54 crc kubenswrapper[4689]: I1210 12:17:54.942294 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-qc8tj"] Dec 10 12:17:54 crc kubenswrapper[4689]: I1210 12:17:54.943869 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-9wlmb"] Dec 10 12:17:54 crc kubenswrapper[4689]: I1210 12:17:54.945110 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-gz5pm"] Dec 10 12:17:54 crc kubenswrapper[4689]: I1210 12:17:54.947156 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29422815-gckmk"] Dec 10 12:17:54 crc kubenswrapper[4689]: I1210 12:17:54.947572 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Dec 10 12:17:54 crc kubenswrapper[4689]: I1210 12:17:54.948751 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-9rk82"] Dec 10 12:17:54 crc kubenswrapper[4689]: I1210 12:17:54.949998 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-nqgsm"] Dec 10 12:17:54 crc kubenswrapper[4689]: I1210 12:17:54.951788 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-2gdrs"] Dec 10 12:17:54 crc kubenswrapper[4689]: I1210 12:17:54.953246 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-25n7p"] Dec 10 12:17:54 crc kubenswrapper[4689]: I1210 12:17:54.954704 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-rn6lb"] Dec 10 12:17:54 crc kubenswrapper[4689]: I1210 12:17:54.956312 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-z922x"] Dec 10 12:17:54 crc kubenswrapper[4689]: I1210 12:17:54.958560 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-tzqjc"] Dec 10 12:17:54 crc kubenswrapper[4689]: I1210 12:17:54.961330 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-tcsl7"] Dec 10 12:17:54 crc kubenswrapper[4689]: I1210 12:17:54.963743 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-kbbjg"] Dec 10 12:17:54 crc kubenswrapper[4689]: I1210 12:17:54.964631 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-km6qw"] Dec 10 12:17:54 crc kubenswrapper[4689]: I1210 12:17:54.966515 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-d7pkc"] Dec 10 12:17:54 crc kubenswrapper[4689]: I1210 12:17:54.967612 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-pks9r"] Dec 10 12:17:54 crc kubenswrapper[4689]: I1210 12:17:54.968330 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Dec 10 12:17:54 crc kubenswrapper[4689]: I1210 
12:17:54.968470 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wvxwm\" (UniqueName: \"kubernetes.io/projected/1eb8e1ca-e0fa-448e-a502-a75ac460a5ca-kube-api-access-wvxwm\") pod \"machine-approver-56656f9798-cpfdd\" (UID: \"1eb8e1ca-e0fa-448e-a502-a75ac460a5ca\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-cpfdd" Dec 10 12:17:54 crc kubenswrapper[4689]: I1210 12:17:54.968496 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/2e2364cc-104f-4237-9ad5-c121a1c3fba6-audit-dir\") pod \"oauth-openshift-558db77b4-7jsv6\" (UID: \"2e2364cc-104f-4237-9ad5-c121a1c3fba6\") " pod="openshift-authentication/oauth-openshift-558db77b4-7jsv6" Dec 10 12:17:54 crc kubenswrapper[4689]: I1210 12:17:54.968517 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/b81b90f7-72e4-4a8d-881f-d117310a4a25-available-featuregates\") pod \"openshift-config-operator-7777fb866f-fz4nm\" (UID: \"b81b90f7-72e4-4a8d-881f-d117310a4a25\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-fz4nm" Dec 10 12:17:54 crc kubenswrapper[4689]: I1210 12:17:54.968533 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/fd353a2b-c325-44b6-9e25-6a4c39213f9e-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-htx2z\" (UID: \"fd353a2b-c325-44b6-9e25-6a4c39213f9e\") " pod="openshift-controller-manager/controller-manager-879f6c89f-htx2z" Dec 10 12:17:54 crc kubenswrapper[4689]: I1210 12:17:54.968548 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3eece420-e32c-4ea1-91f8-cc96bf144467-config\") pod \"machine-api-operator-5694c8668f-s8lmw\" (UID: \"3eece420-e32c-4ea1-91f8-cc96bf144467\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-s8lmw" Dec 10 12:17:54 crc kubenswrapper[4689]: I1210 12:17:54.968564 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/481fe942-34c8-42d9-9925-a674da9ca453-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-mx2mm\" (UID: \"481fe942-34c8-42d9-9925-a674da9ca453\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-mx2mm" Dec 10 12:17:54 crc kubenswrapper[4689]: I1210 12:17:54.968583 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x488g\" (UniqueName: \"kubernetes.io/projected/86359667-d51a-449a-925d-375a1e98a8ca-kube-api-access-x488g\") pod \"authentication-operator-69f744f599-2gdrs\" (UID: \"86359667-d51a-449a-925d-375a1e98a8ca\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-2gdrs" Dec 10 12:17:54 crc kubenswrapper[4689]: I1210 12:17:54.968607 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/2e2364cc-104f-4237-9ad5-c121a1c3fba6-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-7jsv6\" (UID: \"2e2364cc-104f-4237-9ad5-c121a1c3fba6\") " pod="openshift-authentication/oauth-openshift-558db77b4-7jsv6" Dec 10 12:17:54 crc kubenswrapper[4689]: I1210 12:17:54.968621 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"config\" (UniqueName: \"kubernetes.io/configmap/fd353a2b-c325-44b6-9e25-6a4c39213f9e-config\") pod \"controller-manager-879f6c89f-htx2z\" (UID: \"fd353a2b-c325-44b6-9e25-6a4c39213f9e\") " pod="openshift-controller-manager/controller-manager-879f6c89f-htx2z" Dec 10 12:17:54 crc kubenswrapper[4689]: I1210 12:17:54.968635 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6cm2v\" (UniqueName: \"kubernetes.io/projected/481fe942-34c8-42d9-9925-a674da9ca453-kube-api-access-6cm2v\") pod \"openshift-controller-manager-operator-756b6f6bc6-mx2mm\" (UID: \"481fe942-34c8-42d9-9925-a674da9ca453\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-mx2mm" Dec 10 12:17:54 crc kubenswrapper[4689]: I1210 12:17:54.968652 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/ee767cde-d698-4c01-b221-33c158999e60-console-serving-cert\") pod \"console-f9d7485db-9s8ps\" (UID: \"ee767cde-d698-4c01-b221-33c158999e60\") " pod="openshift-console/console-f9d7485db-9s8ps" Dec 10 12:17:54 crc kubenswrapper[4689]: I1210 12:17:54.968669 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d8990fd9-cdfb-4c98-80b3-794f1b371ee5-serving-cert\") pod \"route-controller-manager-6576b87f9c-sfzm5\" (UID: \"d8990fd9-cdfb-4c98-80b3-794f1b371ee5\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-sfzm5" Dec 10 12:17:54 crc kubenswrapper[4689]: I1210 12:17:54.968685 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/d9195e8d-4ad5-406b-b2fe-1c107df51433-audit-policies\") pod \"apiserver-7bbb656c7d-mdqrt\" (UID: \"d9195e8d-4ad5-406b-b2fe-1c107df51433\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-mdqrt" Dec 10 12:17:54 crc kubenswrapper[4689]: I1210 12:17:54.968700 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/d9195e8d-4ad5-406b-b2fe-1c107df51433-encryption-config\") pod \"apiserver-7bbb656c7d-mdqrt\" (UID: \"d9195e8d-4ad5-406b-b2fe-1c107df51433\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-mdqrt" Dec 10 12:17:54 crc kubenswrapper[4689]: I1210 12:17:54.968716 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/86359667-d51a-449a-925d-375a1e98a8ca-serving-cert\") pod \"authentication-operator-69f744f599-2gdrs\" (UID: \"86359667-d51a-449a-925d-375a1e98a8ca\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-2gdrs" Dec 10 12:17:54 crc kubenswrapper[4689]: I1210 12:17:54.968730 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d8990fd9-cdfb-4c98-80b3-794f1b371ee5-client-ca\") pod \"route-controller-manager-6576b87f9c-sfzm5\" (UID: \"d8990fd9-cdfb-4c98-80b3-794f1b371ee5\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-sfzm5" Dec 10 12:17:54 crc kubenswrapper[4689]: I1210 12:17:54.968745 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/ee767cde-d698-4c01-b221-33c158999e60-console-oauth-config\") pod \"console-f9d7485db-9s8ps\" (UID: 
\"ee767cde-d698-4c01-b221-33c158999e60\") " pod="openshift-console/console-f9d7485db-9s8ps" Dec 10 12:17:54 crc kubenswrapper[4689]: I1210 12:17:54.968762 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/2e2364cc-104f-4237-9ad5-c121a1c3fba6-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-7jsv6\" (UID: \"2e2364cc-104f-4237-9ad5-c121a1c3fba6\") " pod="openshift-authentication/oauth-openshift-558db77b4-7jsv6" Dec 10 12:17:54 crc kubenswrapper[4689]: I1210 12:17:54.968778 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/1eb8e1ca-e0fa-448e-a502-a75ac460a5ca-auth-proxy-config\") pod \"machine-approver-56656f9798-cpfdd\" (UID: \"1eb8e1ca-e0fa-448e-a502-a75ac460a5ca\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-cpfdd" Dec 10 12:17:54 crc kubenswrapper[4689]: I1210 12:17:54.968792 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/1eb8e1ca-e0fa-448e-a502-a75ac460a5ca-machine-approver-tls\") pod \"machine-approver-56656f9798-cpfdd\" (UID: \"1eb8e1ca-e0fa-448e-a502-a75ac460a5ca\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-cpfdd" Dec 10 12:17:54 crc kubenswrapper[4689]: I1210 12:17:54.968809 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d9195e8d-4ad5-406b-b2fe-1c107df51433-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-mdqrt\" (UID: \"d9195e8d-4ad5-406b-b2fe-1c107df51433\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-mdqrt" Dec 10 12:17:54 crc kubenswrapper[4689]: I1210 12:17:54.968825 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2aed137f-a24c-42c9-a0af-9ada3eeffa88-serving-cert\") pod \"console-operator-58897d9998-ggzn5\" (UID: \"2aed137f-a24c-42c9-a0af-9ada3eeffa88\") " pod="openshift-console-operator/console-operator-58897d9998-ggzn5" Dec 10 12:17:54 crc kubenswrapper[4689]: I1210 12:17:54.968841 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/2e2364cc-104f-4237-9ad5-c121a1c3fba6-audit-policies\") pod \"oauth-openshift-558db77b4-7jsv6\" (UID: \"2e2364cc-104f-4237-9ad5-c121a1c3fba6\") " pod="openshift-authentication/oauth-openshift-558db77b4-7jsv6" Dec 10 12:17:54 crc kubenswrapper[4689]: I1210 12:17:54.968857 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lxbh6\" (UniqueName: \"kubernetes.io/projected/fd353a2b-c325-44b6-9e25-6a4c39213f9e-kube-api-access-lxbh6\") pod \"controller-manager-879f6c89f-htx2z\" (UID: \"fd353a2b-c325-44b6-9e25-6a4c39213f9e\") " pod="openshift-controller-manager/controller-manager-879f6c89f-htx2z" Dec 10 12:17:54 crc kubenswrapper[4689]: I1210 12:17:54.968879 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ee767cde-d698-4c01-b221-33c158999e60-trusted-ca-bundle\") pod \"console-f9d7485db-9s8ps\" (UID: \"ee767cde-d698-4c01-b221-33c158999e60\") " pod="openshift-console/console-f9d7485db-9s8ps" Dec 10 12:17:54 crc kubenswrapper[4689]: I1210 12:17:54.968895 4689 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/3eece420-e32c-4ea1-91f8-cc96bf144467-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-s8lmw\" (UID: \"3eece420-e32c-4ea1-91f8-cc96bf144467\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-s8lmw" Dec 10 12:17:54 crc kubenswrapper[4689]: I1210 12:17:54.968911 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n6jpw\" (UniqueName: \"kubernetes.io/projected/8f02b797-c51d-4614-b1d6-b2b3294342d0-kube-api-access-n6jpw\") pod \"cluster-image-registry-operator-dc59b4c8b-ms4wp\" (UID: \"8f02b797-c51d-4614-b1d6-b2b3294342d0\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-ms4wp" Dec 10 12:17:54 crc kubenswrapper[4689]: I1210 12:17:54.968927 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/86359667-d51a-449a-925d-375a1e98a8ca-config\") pod \"authentication-operator-69f744f599-2gdrs\" (UID: \"86359667-d51a-449a-925d-375a1e98a8ca\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-2gdrs" Dec 10 12:17:54 crc kubenswrapper[4689]: I1210 12:17:54.968942 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/ee767cde-d698-4c01-b221-33c158999e60-service-ca\") pod \"console-f9d7485db-9s8ps\" (UID: \"ee767cde-d698-4c01-b221-33c158999e60\") " pod="openshift-console/console-f9d7485db-9s8ps" Dec 10 12:17:54 crc kubenswrapper[4689]: I1210 12:17:54.968957 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fd353a2b-c325-44b6-9e25-6a4c39213f9e-serving-cert\") pod \"controller-manager-879f6c89f-htx2z\" (UID: \"fd353a2b-c325-44b6-9e25-6a4c39213f9e\") " pod="openshift-controller-manager/controller-manager-879f6c89f-htx2z" Dec 10 12:17:54 crc kubenswrapper[4689]: I1210 12:17:54.968995 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d8990fd9-cdfb-4c98-80b3-794f1b371ee5-config\") pod \"route-controller-manager-6576b87f9c-sfzm5\" (UID: \"d8990fd9-cdfb-4c98-80b3-794f1b371ee5\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-sfzm5" Dec 10 12:17:54 crc kubenswrapper[4689]: I1210 12:17:54.969012 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tg4xk\" (UniqueName: \"kubernetes.io/projected/762dce0f-3636-41ab-8d78-1b69e0fa714a-kube-api-access-tg4xk\") pod \"cluster-samples-operator-665b6dd947-7pv69\" (UID: \"762dce0f-3636-41ab-8d78-1b69e0fa714a\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-7pv69" Dec 10 12:17:54 crc kubenswrapper[4689]: I1210 12:17:54.969030 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/ee767cde-d698-4c01-b221-33c158999e60-console-config\") pod \"console-f9d7485db-9s8ps\" (UID: \"ee767cde-d698-4c01-b221-33c158999e60\") " pod="openshift-console/console-f9d7485db-9s8ps" Dec 10 12:17:54 crc kubenswrapper[4689]: I1210 12:17:54.969046 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: 
\"kubernetes.io/secret/2e2364cc-104f-4237-9ad5-c121a1c3fba6-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-7jsv6\" (UID: \"2e2364cc-104f-4237-9ad5-c121a1c3fba6\") " pod="openshift-authentication/oauth-openshift-558db77b4-7jsv6" Dec 10 12:17:54 crc kubenswrapper[4689]: I1210 12:17:54.969063 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/86359667-d51a-449a-925d-375a1e98a8ca-service-ca-bundle\") pod \"authentication-operator-69f744f599-2gdrs\" (UID: \"86359667-d51a-449a-925d-375a1e98a8ca\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-2gdrs" Dec 10 12:17:54 crc kubenswrapper[4689]: I1210 12:17:54.969079 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/2e2364cc-104f-4237-9ad5-c121a1c3fba6-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-7jsv6\" (UID: \"2e2364cc-104f-4237-9ad5-c121a1c3fba6\") " pod="openshift-authentication/oauth-openshift-558db77b4-7jsv6" Dec 10 12:17:54 crc kubenswrapper[4689]: I1210 12:17:54.969094 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/d9195e8d-4ad5-406b-b2fe-1c107df51433-audit-dir\") pod \"apiserver-7bbb656c7d-mdqrt\" (UID: \"d9195e8d-4ad5-406b-b2fe-1c107df51433\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-mdqrt" Dec 10 12:17:54 crc kubenswrapper[4689]: I1210 12:17:54.969108 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f02b797-c51d-4614-b1d6-b2b3294342d0-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-ms4wp\" (UID: \"8f02b797-c51d-4614-b1d6-b2b3294342d0\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-ms4wp" Dec 10 12:17:54 crc kubenswrapper[4689]: I1210 12:17:54.969124 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f02b797-c51d-4614-b1d6-b2b3294342d0-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-ms4wp\" (UID: \"8f02b797-c51d-4614-b1d6-b2b3294342d0\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-ms4wp" Dec 10 12:17:54 crc kubenswrapper[4689]: I1210 12:17:54.969147 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2e2364cc-104f-4237-9ad5-c121a1c3fba6-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-7jsv6\" (UID: \"2e2364cc-104f-4237-9ad5-c121a1c3fba6\") " pod="openshift-authentication/oauth-openshift-558db77b4-7jsv6" Dec 10 12:17:54 crc kubenswrapper[4689]: I1210 12:17:54.969160 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1eb8e1ca-e0fa-448e-a502-a75ac460a5ca-config\") pod \"machine-approver-56656f9798-cpfdd\" (UID: \"1eb8e1ca-e0fa-448e-a502-a75ac460a5ca\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-cpfdd" Dec 10 12:17:54 crc kubenswrapper[4689]: I1210 12:17:54.969177 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bvwvd\" (UniqueName: \"kubernetes.io/projected/d9195e8d-4ad5-406b-b2fe-1c107df51433-kube-api-access-bvwvd\") pod 
\"apiserver-7bbb656c7d-mdqrt\" (UID: \"d9195e8d-4ad5-406b-b2fe-1c107df51433\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-mdqrt" Dec 10 12:17:54 crc kubenswrapper[4689]: I1210 12:17:54.969192 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9a3f8229-cf04-4e18-8dc3-c9a63c4b46b4-config\") pod \"openshift-apiserver-operator-796bbdcf4f-rn6ml\" (UID: \"9a3f8229-cf04-4e18-8dc3-c9a63c4b46b4\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-rn6ml" Dec 10 12:17:54 crc kubenswrapper[4689]: I1210 12:17:54.969208 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/d9195e8d-4ad5-406b-b2fe-1c107df51433-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-mdqrt\" (UID: \"d9195e8d-4ad5-406b-b2fe-1c107df51433\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-mdqrt" Dec 10 12:17:54 crc kubenswrapper[4689]: I1210 12:17:54.969223 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4hvcm\" (UniqueName: \"kubernetes.io/projected/ee767cde-d698-4c01-b221-33c158999e60-kube-api-access-4hvcm\") pod \"console-f9d7485db-9s8ps\" (UID: \"ee767cde-d698-4c01-b221-33c158999e60\") " pod="openshift-console/console-f9d7485db-9s8ps" Dec 10 12:17:54 crc kubenswrapper[4689]: I1210 12:17:54.969238 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b81b90f7-72e4-4a8d-881f-d117310a4a25-serving-cert\") pod \"openshift-config-operator-7777fb866f-fz4nm\" (UID: \"b81b90f7-72e4-4a8d-881f-d117310a4a25\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-fz4nm" Dec 10 12:17:54 crc kubenswrapper[4689]: I1210 12:17:54.969253 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/fd353a2b-c325-44b6-9e25-6a4c39213f9e-client-ca\") pod \"controller-manager-879f6c89f-htx2z\" (UID: \"fd353a2b-c325-44b6-9e25-6a4c39213f9e\") " pod="openshift-controller-manager/controller-manager-879f6c89f-htx2z" Dec 10 12:17:54 crc kubenswrapper[4689]: I1210 12:17:54.969274 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9a3f8229-cf04-4e18-8dc3-c9a63c4b46b4-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-rn6ml\" (UID: \"9a3f8229-cf04-4e18-8dc3-c9a63c4b46b4\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-rn6ml" Dec 10 12:17:54 crc kubenswrapper[4689]: I1210 12:17:54.969288 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/ee767cde-d698-4c01-b221-33c158999e60-oauth-serving-cert\") pod \"console-f9d7485db-9s8ps\" (UID: \"ee767cde-d698-4c01-b221-33c158999e60\") " pod="openshift-console/console-f9d7485db-9s8ps" Dec 10 12:17:54 crc kubenswrapper[4689]: I1210 12:17:54.969305 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/86359667-d51a-449a-925d-375a1e98a8ca-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-2gdrs\" (UID: \"86359667-d51a-449a-925d-375a1e98a8ca\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-2gdrs" Dec 10 12:17:54 crc kubenswrapper[4689]: I1210 12:17:54.969322 4689 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/2e2364cc-104f-4237-9ad5-c121a1c3fba6-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-7jsv6\" (UID: \"2e2364cc-104f-4237-9ad5-c121a1c3fba6\") " pod="openshift-authentication/oauth-openshift-558db77b4-7jsv6" Dec 10 12:17:54 crc kubenswrapper[4689]: I1210 12:17:54.969336 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tmpvl\" (UniqueName: \"kubernetes.io/projected/cd45363e-de7e-4e91-afe1-f82948764f4f-kube-api-access-tmpvl\") pod \"downloads-7954f5f757-kpp45\" (UID: \"cd45363e-de7e-4e91-afe1-f82948764f4f\") " pod="openshift-console/downloads-7954f5f757-kpp45" Dec 10 12:17:54 crc kubenswrapper[4689]: I1210 12:17:54.969365 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6t6xj\" (UniqueName: \"kubernetes.io/projected/3eece420-e32c-4ea1-91f8-cc96bf144467-kube-api-access-6t6xj\") pod \"machine-api-operator-5694c8668f-s8lmw\" (UID: \"3eece420-e32c-4ea1-91f8-cc96bf144467\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-s8lmw" Dec 10 12:17:54 crc kubenswrapper[4689]: I1210 12:17:54.969366 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3eece420-e32c-4ea1-91f8-cc96bf144467-config\") pod \"machine-api-operator-5694c8668f-s8lmw\" (UID: \"3eece420-e32c-4ea1-91f8-cc96bf144467\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-s8lmw" Dec 10 12:17:54 crc kubenswrapper[4689]: I1210 12:17:54.969387 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/481fe942-34c8-42d9-9925-a674da9ca453-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-mx2mm\" (UID: \"481fe942-34c8-42d9-9925-a674da9ca453\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-mx2mm" Dec 10 12:17:54 crc kubenswrapper[4689]: I1210 12:17:54.969414 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5w5pl\" (UniqueName: \"kubernetes.io/projected/b81b90f7-72e4-4a8d-881f-d117310a4a25-kube-api-access-5w5pl\") pod \"openshift-config-operator-7777fb866f-fz4nm\" (UID: \"b81b90f7-72e4-4a8d-881f-d117310a4a25\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-fz4nm" Dec 10 12:17:54 crc kubenswrapper[4689]: I1210 12:17:54.969424 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/2e2364cc-104f-4237-9ad5-c121a1c3fba6-audit-dir\") pod \"oauth-openshift-558db77b4-7jsv6\" (UID: \"2e2364cc-104f-4237-9ad5-c121a1c3fba6\") " pod="openshift-authentication/oauth-openshift-558db77b4-7jsv6" Dec 10 12:17:54 crc kubenswrapper[4689]: I1210 12:17:54.969438 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/2e2364cc-104f-4237-9ad5-c121a1c3fba6-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-7jsv6\" (UID: \"2e2364cc-104f-4237-9ad5-c121a1c3fba6\") " pod="openshift-authentication/oauth-openshift-558db77b4-7jsv6" Dec 10 12:17:54 crc kubenswrapper[4689]: I1210 12:17:54.969462 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t5pth\" (UniqueName: 
\"kubernetes.io/projected/2aed137f-a24c-42c9-a0af-9ada3eeffa88-kube-api-access-t5pth\") pod \"console-operator-58897d9998-ggzn5\" (UID: \"2aed137f-a24c-42c9-a0af-9ada3eeffa88\") " pod="openshift-console-operator/console-operator-58897d9998-ggzn5" Dec 10 12:17:54 crc kubenswrapper[4689]: I1210 12:17:54.969483 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/3eece420-e32c-4ea1-91f8-cc96bf144467-images\") pod \"machine-api-operator-5694c8668f-s8lmw\" (UID: \"3eece420-e32c-4ea1-91f8-cc96bf144467\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-s8lmw" Dec 10 12:17:54 crc kubenswrapper[4689]: I1210 12:17:54.969500 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2aed137f-a24c-42c9-a0af-9ada3eeffa88-trusted-ca\") pod \"console-operator-58897d9998-ggzn5\" (UID: \"2aed137f-a24c-42c9-a0af-9ada3eeffa88\") " pod="openshift-console-operator/console-operator-58897d9998-ggzn5" Dec 10 12:17:54 crc kubenswrapper[4689]: I1210 12:17:54.969524 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/2e2364cc-104f-4237-9ad5-c121a1c3fba6-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-7jsv6\" (UID: \"2e2364cc-104f-4237-9ad5-c121a1c3fba6\") " pod="openshift-authentication/oauth-openshift-558db77b4-7jsv6" Dec 10 12:17:54 crc kubenswrapper[4689]: I1210 12:17:54.969540 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/8f02b797-c51d-4614-b1d6-b2b3294342d0-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-ms4wp\" (UID: \"8f02b797-c51d-4614-b1d6-b2b3294342d0\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-ms4wp" Dec 10 12:17:54 crc kubenswrapper[4689]: I1210 12:17:54.969557 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/2e2364cc-104f-4237-9ad5-c121a1c3fba6-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-7jsv6\" (UID: \"2e2364cc-104f-4237-9ad5-c121a1c3fba6\") " pod="openshift-authentication/oauth-openshift-558db77b4-7jsv6" Dec 10 12:17:54 crc kubenswrapper[4689]: I1210 12:17:54.969574 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qf4w7\" (UniqueName: \"kubernetes.io/projected/2e2364cc-104f-4237-9ad5-c121a1c3fba6-kube-api-access-qf4w7\") pod \"oauth-openshift-558db77b4-7jsv6\" (UID: \"2e2364cc-104f-4237-9ad5-c121a1c3fba6\") " pod="openshift-authentication/oauth-openshift-558db77b4-7jsv6" Dec 10 12:17:54 crc kubenswrapper[4689]: I1210 12:17:54.969590 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kxwpq\" (UniqueName: \"kubernetes.io/projected/d8990fd9-cdfb-4c98-80b3-794f1b371ee5-kube-api-access-kxwpq\") pod \"route-controller-manager-6576b87f9c-sfzm5\" (UID: \"d8990fd9-cdfb-4c98-80b3-794f1b371ee5\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-sfzm5" Dec 10 12:17:54 crc kubenswrapper[4689]: I1210 12:17:54.969606 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: 
\"kubernetes.io/secret/762dce0f-3636-41ab-8d78-1b69e0fa714a-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-7pv69\" (UID: \"762dce0f-3636-41ab-8d78-1b69e0fa714a\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-7pv69" Dec 10 12:17:54 crc kubenswrapper[4689]: I1210 12:17:54.969626 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b7dbh\" (UniqueName: \"kubernetes.io/projected/9a3f8229-cf04-4e18-8dc3-c9a63c4b46b4-kube-api-access-b7dbh\") pod \"openshift-apiserver-operator-796bbdcf4f-rn6ml\" (UID: \"9a3f8229-cf04-4e18-8dc3-c9a63c4b46b4\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-rn6ml" Dec 10 12:17:54 crc kubenswrapper[4689]: I1210 12:17:54.969642 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/2e2364cc-104f-4237-9ad5-c121a1c3fba6-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-7jsv6\" (UID: \"2e2364cc-104f-4237-9ad5-c121a1c3fba6\") " pod="openshift-authentication/oauth-openshift-558db77b4-7jsv6" Dec 10 12:17:54 crc kubenswrapper[4689]: I1210 12:17:54.969657 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/2e2364cc-104f-4237-9ad5-c121a1c3fba6-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-7jsv6\" (UID: \"2e2364cc-104f-4237-9ad5-c121a1c3fba6\") " pod="openshift-authentication/oauth-openshift-558db77b4-7jsv6" Dec 10 12:17:54 crc kubenswrapper[4689]: I1210 12:17:54.969671 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/d9195e8d-4ad5-406b-b2fe-1c107df51433-etcd-client\") pod \"apiserver-7bbb656c7d-mdqrt\" (UID: \"d9195e8d-4ad5-406b-b2fe-1c107df51433\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-mdqrt" Dec 10 12:17:54 crc kubenswrapper[4689]: I1210 12:17:54.969686 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d9195e8d-4ad5-406b-b2fe-1c107df51433-serving-cert\") pod \"apiserver-7bbb656c7d-mdqrt\" (UID: \"d9195e8d-4ad5-406b-b2fe-1c107df51433\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-mdqrt" Dec 10 12:17:54 crc kubenswrapper[4689]: I1210 12:17:54.969703 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2aed137f-a24c-42c9-a0af-9ada3eeffa88-config\") pod \"console-operator-58897d9998-ggzn5\" (UID: \"2aed137f-a24c-42c9-a0af-9ada3eeffa88\") " pod="openshift-console-operator/console-operator-58897d9998-ggzn5" Dec 10 12:17:54 crc kubenswrapper[4689]: I1210 12:17:54.969814 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/b81b90f7-72e4-4a8d-881f-d117310a4a25-available-featuregates\") pod \"openshift-config-operator-7777fb866f-fz4nm\" (UID: \"b81b90f7-72e4-4a8d-881f-d117310a4a25\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-fz4nm" Dec 10 12:17:54 crc kubenswrapper[4689]: I1210 12:17:54.970432 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2aed137f-a24c-42c9-a0af-9ada3eeffa88-config\") pod \"console-operator-58897d9998-ggzn5\" (UID: \"2aed137f-a24c-42c9-a0af-9ada3eeffa88\") " 
pod="openshift-console-operator/console-operator-58897d9998-ggzn5" Dec 10 12:17:54 crc kubenswrapper[4689]: I1210 12:17:54.970716 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/481fe942-34c8-42d9-9925-a674da9ca453-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-mx2mm\" (UID: \"481fe942-34c8-42d9-9925-a674da9ca453\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-mx2mm" Dec 10 12:17:54 crc kubenswrapper[4689]: I1210 12:17:54.970724 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/fd353a2b-c325-44b6-9e25-6a4c39213f9e-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-htx2z\" (UID: \"fd353a2b-c325-44b6-9e25-6a4c39213f9e\") " pod="openshift-controller-manager/controller-manager-879f6c89f-htx2z" Dec 10 12:17:54 crc kubenswrapper[4689]: I1210 12:17:54.970993 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-2jzgr"] Dec 10 12:17:54 crc kubenswrapper[4689]: I1210 12:17:54.971788 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/86359667-d51a-449a-925d-375a1e98a8ca-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-2gdrs\" (UID: \"86359667-d51a-449a-925d-375a1e98a8ca\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-2gdrs" Dec 10 12:17:54 crc kubenswrapper[4689]: I1210 12:17:54.972345 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/86359667-d51a-449a-925d-375a1e98a8ca-service-ca-bundle\") pod \"authentication-operator-69f744f599-2gdrs\" (UID: \"86359667-d51a-449a-925d-375a1e98a8ca\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-2gdrs" Dec 10 12:17:54 crc kubenswrapper[4689]: I1210 12:17:54.972410 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-2jzgr"] Dec 10 12:17:54 crc kubenswrapper[4689]: I1210 12:17:54.972410 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1eb8e1ca-e0fa-448e-a502-a75ac460a5ca-config\") pod \"machine-approver-56656f9798-cpfdd\" (UID: \"1eb8e1ca-e0fa-448e-a502-a75ac460a5ca\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-cpfdd" Dec 10 12:17:54 crc kubenswrapper[4689]: I1210 12:17:54.972512 4689 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2jzgr" Dec 10 12:17:54 crc kubenswrapper[4689]: I1210 12:17:54.973066 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/d9195e8d-4ad5-406b-b2fe-1c107df51433-audit-dir\") pod \"apiserver-7bbb656c7d-mdqrt\" (UID: \"d9195e8d-4ad5-406b-b2fe-1c107df51433\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-mdqrt" Dec 10 12:17:54 crc kubenswrapper[4689]: I1210 12:17:54.973772 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/d9195e8d-4ad5-406b-b2fe-1c107df51433-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-mdqrt\" (UID: \"d9195e8d-4ad5-406b-b2fe-1c107df51433\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-mdqrt" Dec 10 12:17:54 crc kubenswrapper[4689]: I1210 12:17:54.974151 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/ee767cde-d698-4c01-b221-33c158999e60-console-config\") pod \"console-f9d7485db-9s8ps\" (UID: \"ee767cde-d698-4c01-b221-33c158999e60\") " pod="openshift-console/console-f9d7485db-9s8ps" Dec 10 12:17:54 crc kubenswrapper[4689]: I1210 12:17:54.974813 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f02b797-c51d-4614-b1d6-b2b3294342d0-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-ms4wp\" (UID: \"8f02b797-c51d-4614-b1d6-b2b3294342d0\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-ms4wp" Dec 10 12:17:54 crc kubenswrapper[4689]: I1210 12:17:54.974824 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fd353a2b-c325-44b6-9e25-6a4c39213f9e-serving-cert\") pod \"controller-manager-879f6c89f-htx2z\" (UID: \"fd353a2b-c325-44b6-9e25-6a4c39213f9e\") " pod="openshift-controller-manager/controller-manager-879f6c89f-htx2z" Dec 10 12:17:54 crc kubenswrapper[4689]: I1210 12:17:54.974853 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9a3f8229-cf04-4e18-8dc3-c9a63c4b46b4-config\") pod \"openshift-apiserver-operator-796bbdcf4f-rn6ml\" (UID: \"9a3f8229-cf04-4e18-8dc3-c9a63c4b46b4\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-rn6ml" Dec 10 12:17:54 crc kubenswrapper[4689]: I1210 12:17:54.975246 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/2e2364cc-104f-4237-9ad5-c121a1c3fba6-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-7jsv6\" (UID: \"2e2364cc-104f-4237-9ad5-c121a1c3fba6\") " pod="openshift-authentication/oauth-openshift-558db77b4-7jsv6" Dec 10 12:17:54 crc kubenswrapper[4689]: I1210 12:17:54.975276 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fd353a2b-c325-44b6-9e25-6a4c39213f9e-config\") pod \"controller-manager-879f6c89f-htx2z\" (UID: \"fd353a2b-c325-44b6-9e25-6a4c39213f9e\") " pod="openshift-controller-manager/controller-manager-879f6c89f-htx2z" Dec 10 12:17:54 crc kubenswrapper[4689]: I1210 12:17:54.975604 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: 
\"kubernetes.io/configmap/2e2364cc-104f-4237-9ad5-c121a1c3fba6-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-7jsv6\" (UID: \"2e2364cc-104f-4237-9ad5-c121a1c3fba6\") " pod="openshift-authentication/oauth-openshift-558db77b4-7jsv6" Dec 10 12:17:54 crc kubenswrapper[4689]: I1210 12:17:54.976212 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d8990fd9-cdfb-4c98-80b3-794f1b371ee5-client-ca\") pod \"route-controller-manager-6576b87f9c-sfzm5\" (UID: \"d8990fd9-cdfb-4c98-80b3-794f1b371ee5\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-sfzm5" Dec 10 12:17:54 crc kubenswrapper[4689]: I1210 12:17:54.976598 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/ee767cde-d698-4c01-b221-33c158999e60-console-serving-cert\") pod \"console-f9d7485db-9s8ps\" (UID: \"ee767cde-d698-4c01-b221-33c158999e60\") " pod="openshift-console/console-f9d7485db-9s8ps" Dec 10 12:17:54 crc kubenswrapper[4689]: I1210 12:17:54.976729 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/fd353a2b-c325-44b6-9e25-6a4c39213f9e-client-ca\") pod \"controller-manager-879f6c89f-htx2z\" (UID: \"fd353a2b-c325-44b6-9e25-6a4c39213f9e\") " pod="openshift-controller-manager/controller-manager-879f6c89f-htx2z" Dec 10 12:17:54 crc kubenswrapper[4689]: I1210 12:17:54.976802 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/1eb8e1ca-e0fa-448e-a502-a75ac460a5ca-auth-proxy-config\") pod \"machine-approver-56656f9798-cpfdd\" (UID: \"1eb8e1ca-e0fa-448e-a502-a75ac460a5ca\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-cpfdd" Dec 10 12:17:54 crc kubenswrapper[4689]: I1210 12:17:54.976833 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/3eece420-e32c-4ea1-91f8-cc96bf144467-images\") pod \"machine-api-operator-5694c8668f-s8lmw\" (UID: \"3eece420-e32c-4ea1-91f8-cc96bf144467\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-s8lmw" Dec 10 12:17:54 crc kubenswrapper[4689]: I1210 12:17:54.978259 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/762dce0f-3636-41ab-8d78-1b69e0fa714a-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-7pv69\" (UID: \"762dce0f-3636-41ab-8d78-1b69e0fa714a\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-7pv69" Dec 10 12:17:54 crc kubenswrapper[4689]: I1210 12:17:54.978338 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/ee767cde-d698-4c01-b221-33c158999e60-oauth-serving-cert\") pod \"console-f9d7485db-9s8ps\" (UID: \"ee767cde-d698-4c01-b221-33c158999e60\") " pod="openshift-console/console-f9d7485db-9s8ps" Dec 10 12:17:54 crc kubenswrapper[4689]: I1210 12:17:54.978854 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2aed137f-a24c-42c9-a0af-9ada3eeffa88-trusted-ca\") pod \"console-operator-58897d9998-ggzn5\" (UID: \"2aed137f-a24c-42c9-a0af-9ada3eeffa88\") " pod="openshift-console-operator/console-operator-58897d9998-ggzn5" Dec 10 12:17:54 crc kubenswrapper[4689]: I1210 
12:17:54.979387 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/1eb8e1ca-e0fa-448e-a502-a75ac460a5ca-machine-approver-tls\") pod \"machine-approver-56656f9798-cpfdd\" (UID: \"1eb8e1ca-e0fa-448e-a502-a75ac460a5ca\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-cpfdd" Dec 10 12:17:54 crc kubenswrapper[4689]: I1210 12:17:54.979653 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2e2364cc-104f-4237-9ad5-c121a1c3fba6-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-7jsv6\" (UID: \"2e2364cc-104f-4237-9ad5-c121a1c3fba6\") " pod="openshift-authentication/oauth-openshift-558db77b4-7jsv6" Dec 10 12:17:54 crc kubenswrapper[4689]: I1210 12:17:54.980015 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/2e2364cc-104f-4237-9ad5-c121a1c3fba6-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-7jsv6\" (UID: \"2e2364cc-104f-4237-9ad5-c121a1c3fba6\") " pod="openshift-authentication/oauth-openshift-558db77b4-7jsv6" Dec 10 12:17:54 crc kubenswrapper[4689]: I1210 12:17:54.980128 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d8990fd9-cdfb-4c98-80b3-794f1b371ee5-config\") pod \"route-controller-manager-6576b87f9c-sfzm5\" (UID: \"d8990fd9-cdfb-4c98-80b3-794f1b371ee5\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-sfzm5" Dec 10 12:17:54 crc kubenswrapper[4689]: I1210 12:17:54.980397 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d9195e8d-4ad5-406b-b2fe-1c107df51433-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-mdqrt\" (UID: \"d9195e8d-4ad5-406b-b2fe-1c107df51433\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-mdqrt" Dec 10 12:17:54 crc kubenswrapper[4689]: I1210 12:17:54.980579 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/86359667-d51a-449a-925d-375a1e98a8ca-serving-cert\") pod \"authentication-operator-69f744f599-2gdrs\" (UID: \"86359667-d51a-449a-925d-375a1e98a8ca\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-2gdrs" Dec 10 12:17:54 crc kubenswrapper[4689]: I1210 12:17:54.980746 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/2e2364cc-104f-4237-9ad5-c121a1c3fba6-audit-policies\") pod \"oauth-openshift-558db77b4-7jsv6\" (UID: \"2e2364cc-104f-4237-9ad5-c121a1c3fba6\") " pod="openshift-authentication/oauth-openshift-558db77b4-7jsv6" Dec 10 12:17:54 crc kubenswrapper[4689]: I1210 12:17:54.980959 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/481fe942-34c8-42d9-9925-a674da9ca453-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-mx2mm\" (UID: \"481fe942-34c8-42d9-9925-a674da9ca453\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-mx2mm" Dec 10 12:17:54 crc kubenswrapper[4689]: I1210 12:17:54.981401 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: 
\"kubernetes.io/configmap/d9195e8d-4ad5-406b-b2fe-1c107df51433-audit-policies\") pod \"apiserver-7bbb656c7d-mdqrt\" (UID: \"d9195e8d-4ad5-406b-b2fe-1c107df51433\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-mdqrt" Dec 10 12:17:54 crc kubenswrapper[4689]: I1210 12:17:54.981519 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ee767cde-d698-4c01-b221-33c158999e60-trusted-ca-bundle\") pod \"console-f9d7485db-9s8ps\" (UID: \"ee767cde-d698-4c01-b221-33c158999e60\") " pod="openshift-console/console-f9d7485db-9s8ps" Dec 10 12:17:54 crc kubenswrapper[4689]: I1210 12:17:54.981598 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2aed137f-a24c-42c9-a0af-9ada3eeffa88-serving-cert\") pod \"console-operator-58897d9998-ggzn5\" (UID: \"2aed137f-a24c-42c9-a0af-9ada3eeffa88\") " pod="openshift-console-operator/console-operator-58897d9998-ggzn5" Dec 10 12:17:54 crc kubenswrapper[4689]: I1210 12:17:54.981780 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/86359667-d51a-449a-925d-375a1e98a8ca-config\") pod \"authentication-operator-69f744f599-2gdrs\" (UID: \"86359667-d51a-449a-925d-375a1e98a8ca\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-2gdrs" Dec 10 12:17:54 crc kubenswrapper[4689]: I1210 12:17:54.981810 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d9195e8d-4ad5-406b-b2fe-1c107df51433-serving-cert\") pod \"apiserver-7bbb656c7d-mdqrt\" (UID: \"d9195e8d-4ad5-406b-b2fe-1c107df51433\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-mdqrt" Dec 10 12:17:54 crc kubenswrapper[4689]: I1210 12:17:54.981924 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b81b90f7-72e4-4a8d-881f-d117310a4a25-serving-cert\") pod \"openshift-config-operator-7777fb866f-fz4nm\" (UID: \"b81b90f7-72e4-4a8d-881f-d117310a4a25\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-fz4nm" Dec 10 12:17:54 crc kubenswrapper[4689]: I1210 12:17:54.982794 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/ee767cde-d698-4c01-b221-33c158999e60-console-oauth-config\") pod \"console-f9d7485db-9s8ps\" (UID: \"ee767cde-d698-4c01-b221-33c158999e60\") " pod="openshift-console/console-f9d7485db-9s8ps" Dec 10 12:17:54 crc kubenswrapper[4689]: I1210 12:17:54.982794 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/2e2364cc-104f-4237-9ad5-c121a1c3fba6-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-7jsv6\" (UID: \"2e2364cc-104f-4237-9ad5-c121a1c3fba6\") " pod="openshift-authentication/oauth-openshift-558db77b4-7jsv6" Dec 10 12:17:54 crc kubenswrapper[4689]: I1210 12:17:54.983322 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/8f02b797-c51d-4614-b1d6-b2b3294342d0-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-ms4wp\" (UID: \"8f02b797-c51d-4614-b1d6-b2b3294342d0\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-ms4wp" Dec 10 12:17:54 crc 
kubenswrapper[4689]: I1210 12:17:54.983786 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/d9195e8d-4ad5-406b-b2fe-1c107df51433-encryption-config\") pod \"apiserver-7bbb656c7d-mdqrt\" (UID: \"d9195e8d-4ad5-406b-b2fe-1c107df51433\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-mdqrt" Dec 10 12:17:54 crc kubenswrapper[4689]: I1210 12:17:54.983890 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9a3f8229-cf04-4e18-8dc3-c9a63c4b46b4-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-rn6ml\" (UID: \"9a3f8229-cf04-4e18-8dc3-c9a63c4b46b4\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-rn6ml" Dec 10 12:17:54 crc kubenswrapper[4689]: I1210 12:17:54.984113 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/2e2364cc-104f-4237-9ad5-c121a1c3fba6-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-7jsv6\" (UID: \"2e2364cc-104f-4237-9ad5-c121a1c3fba6\") " pod="openshift-authentication/oauth-openshift-558db77b4-7jsv6" Dec 10 12:17:54 crc kubenswrapper[4689]: I1210 12:17:54.984293 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/2e2364cc-104f-4237-9ad5-c121a1c3fba6-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-7jsv6\" (UID: \"2e2364cc-104f-4237-9ad5-c121a1c3fba6\") " pod="openshift-authentication/oauth-openshift-558db77b4-7jsv6" Dec 10 12:17:54 crc kubenswrapper[4689]: I1210 12:17:54.984454 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/2e2364cc-104f-4237-9ad5-c121a1c3fba6-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-7jsv6\" (UID: \"2e2364cc-104f-4237-9ad5-c121a1c3fba6\") " pod="openshift-authentication/oauth-openshift-558db77b4-7jsv6" Dec 10 12:17:54 crc kubenswrapper[4689]: I1210 12:17:54.984618 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/2e2364cc-104f-4237-9ad5-c121a1c3fba6-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-7jsv6\" (UID: \"2e2364cc-104f-4237-9ad5-c121a1c3fba6\") " pod="openshift-authentication/oauth-openshift-558db77b4-7jsv6" Dec 10 12:17:54 crc kubenswrapper[4689]: I1210 12:17:54.984730 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/d9195e8d-4ad5-406b-b2fe-1c107df51433-etcd-client\") pod \"apiserver-7bbb656c7d-mdqrt\" (UID: \"d9195e8d-4ad5-406b-b2fe-1c107df51433\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-mdqrt" Dec 10 12:17:54 crc kubenswrapper[4689]: I1210 12:17:54.984938 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d8990fd9-cdfb-4c98-80b3-794f1b371ee5-serving-cert\") pod \"route-controller-manager-6576b87f9c-sfzm5\" (UID: \"d8990fd9-cdfb-4c98-80b3-794f1b371ee5\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-sfzm5" Dec 10 12:17:54 crc kubenswrapper[4689]: I1210 12:17:54.986025 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" 
(UniqueName: \"kubernetes.io/secret/2e2364cc-104f-4237-9ad5-c121a1c3fba6-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-7jsv6\" (UID: \"2e2364cc-104f-4237-9ad5-c121a1c3fba6\") " pod="openshift-authentication/oauth-openshift-558db77b4-7jsv6" Dec 10 12:17:54 crc kubenswrapper[4689]: I1210 12:17:54.987536 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/3eece420-e32c-4ea1-91f8-cc96bf144467-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-s8lmw\" (UID: \"3eece420-e32c-4ea1-91f8-cc96bf144467\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-s8lmw" Dec 10 12:17:54 crc kubenswrapper[4689]: I1210 12:17:54.987913 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Dec 10 12:17:54 crc kubenswrapper[4689]: I1210 12:17:54.988269 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/2e2364cc-104f-4237-9ad5-c121a1c3fba6-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-7jsv6\" (UID: \"2e2364cc-104f-4237-9ad5-c121a1c3fba6\") " pod="openshift-authentication/oauth-openshift-558db77b4-7jsv6" Dec 10 12:17:55 crc kubenswrapper[4689]: I1210 12:17:55.017281 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Dec 10 12:17:55 crc kubenswrapper[4689]: I1210 12:17:55.048285 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Dec 10 12:17:55 crc kubenswrapper[4689]: I1210 12:17:55.050743 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/ee767cde-d698-4c01-b221-33c158999e60-service-ca\") pod \"console-f9d7485db-9s8ps\" (UID: \"ee767cde-d698-4c01-b221-33c158999e60\") " pod="openshift-console/console-f9d7485db-9s8ps" Dec 10 12:17:55 crc kubenswrapper[4689]: I1210 12:17:55.068048 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Dec 10 12:17:55 crc kubenswrapper[4689]: I1210 12:17:55.088257 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Dec 10 12:17:55 crc kubenswrapper[4689]: I1210 12:17:55.107822 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Dec 10 12:17:55 crc kubenswrapper[4689]: I1210 12:17:55.128389 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Dec 10 12:17:55 crc kubenswrapper[4689]: I1210 12:17:55.149783 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Dec 10 12:17:55 crc kubenswrapper[4689]: I1210 12:17:55.168481 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Dec 10 12:17:55 crc kubenswrapper[4689]: I1210 12:17:55.188441 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Dec 10 12:17:55 crc kubenswrapper[4689]: I1210 12:17:55.216788 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Dec 10 12:17:55 crc kubenswrapper[4689]: I1210 12:17:55.228508 4689 reflector.go:368] Caches populated for 
*v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Dec 10 12:17:55 crc kubenswrapper[4689]: I1210 12:17:55.248178 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Dec 10 12:17:55 crc kubenswrapper[4689]: I1210 12:17:55.268304 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Dec 10 12:17:55 crc kubenswrapper[4689]: I1210 12:17:55.288048 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Dec 10 12:17:55 crc kubenswrapper[4689]: I1210 12:17:55.308736 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Dec 10 12:17:55 crc kubenswrapper[4689]: I1210 12:17:55.350227 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Dec 10 12:17:55 crc kubenswrapper[4689]: I1210 12:17:55.356390 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p7qvx\" (UniqueName: \"kubernetes.io/projected/7dbd6f55-bc16-49a6-b385-1ba0b50003cd-kube-api-access-p7qvx\") pod \"apiserver-76f77b778f-g8snw\" (UID: \"7dbd6f55-bc16-49a6-b385-1ba0b50003cd\") " pod="openshift-apiserver/apiserver-76f77b778f-g8snw" Dec 10 12:17:55 crc kubenswrapper[4689]: I1210 12:17:55.369201 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Dec 10 12:17:55 crc kubenswrapper[4689]: I1210 12:17:55.408790 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Dec 10 12:17:55 crc kubenswrapper[4689]: I1210 12:17:55.429411 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Dec 10 12:17:55 crc kubenswrapper[4689]: I1210 12:17:55.449351 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Dec 10 12:17:55 crc kubenswrapper[4689]: I1210 12:17:55.469481 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Dec 10 12:17:55 crc kubenswrapper[4689]: I1210 12:17:55.489411 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Dec 10 12:17:55 crc kubenswrapper[4689]: I1210 12:17:55.497514 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2h8hs" Dec 10 12:17:55 crc kubenswrapper[4689]: I1210 12:17:55.497613 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 10 12:17:55 crc kubenswrapper[4689]: I1210 12:17:55.497721 4689 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 10 12:17:55 crc kubenswrapper[4689]: I1210 12:17:55.509564 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Dec 10 12:17:55 crc kubenswrapper[4689]: I1210 12:17:55.529169 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Dec 10 12:17:55 crc kubenswrapper[4689]: I1210 12:17:55.548388 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Dec 10 12:17:55 crc kubenswrapper[4689]: I1210 12:17:55.568357 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Dec 10 12:17:55 crc kubenswrapper[4689]: I1210 12:17:55.589287 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Dec 10 12:17:55 crc kubenswrapper[4689]: I1210 12:17:55.608587 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Dec 10 12:17:55 crc kubenswrapper[4689]: I1210 12:17:55.628614 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Dec 10 12:17:55 crc kubenswrapper[4689]: I1210 12:17:55.643144 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-g8snw" Dec 10 12:17:55 crc kubenswrapper[4689]: I1210 12:17:55.648192 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Dec 10 12:17:55 crc kubenswrapper[4689]: I1210 12:17:55.668444 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Dec 10 12:17:55 crc kubenswrapper[4689]: I1210 12:17:55.689930 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Dec 10 12:17:55 crc kubenswrapper[4689]: I1210 12:17:55.709033 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Dec 10 12:17:55 crc kubenswrapper[4689]: I1210 12:17:55.728725 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Dec 10 12:17:55 crc kubenswrapper[4689]: I1210 12:17:55.749301 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Dec 10 12:17:55 crc kubenswrapper[4689]: I1210 12:17:55.769779 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Dec 10 12:17:55 crc kubenswrapper[4689]: I1210 12:17:55.789596 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Dec 10 12:17:55 crc kubenswrapper[4689]: I1210 12:17:55.811492 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Dec 10 12:17:55 crc kubenswrapper[4689]: I1210 12:17:55.828796 4689 reflector.go:368] Caches populated for 
*v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Dec 10 12:17:55 crc kubenswrapper[4689]: I1210 12:17:55.848362 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Dec 10 12:17:55 crc kubenswrapper[4689]: I1210 12:17:55.868711 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Dec 10 12:17:55 crc kubenswrapper[4689]: I1210 12:17:55.889896 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Dec 10 12:17:55 crc kubenswrapper[4689]: I1210 12:17:55.909134 4689 request.go:700] Waited for 1.015779585s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-multus/secrets?fieldSelector=metadata.name%3Dmultus-ac-dockercfg-9lkdf&limit=500&resourceVersion=0 Dec 10 12:17:55 crc kubenswrapper[4689]: I1210 12:17:55.911136 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Dec 10 12:17:55 crc kubenswrapper[4689]: I1210 12:17:55.928817 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Dec 10 12:17:55 crc kubenswrapper[4689]: I1210 12:17:55.948553 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Dec 10 12:17:55 crc kubenswrapper[4689]: I1210 12:17:55.969194 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Dec 10 12:17:55 crc kubenswrapper[4689]: I1210 12:17:55.975409 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-g8snw"] Dec 10 12:17:55 crc kubenswrapper[4689]: W1210 12:17:55.988169 4689 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7dbd6f55_bc16_49a6_b385_1ba0b50003cd.slice/crio-b336e39b22baae35f0d5d1ceccac2a946bedaa2e8159f6328c5f4abb81ac95d8 WatchSource:0}: Error finding container b336e39b22baae35f0d5d1ceccac2a946bedaa2e8159f6328c5f4abb81ac95d8: Status 404 returned error can't find the container with id b336e39b22baae35f0d5d1ceccac2a946bedaa2e8159f6328c5f4abb81ac95d8 Dec 10 12:17:55 crc kubenswrapper[4689]: I1210 12:17:55.988936 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Dec 10 12:17:56 crc kubenswrapper[4689]: I1210 12:17:56.009325 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Dec 10 12:17:56 crc kubenswrapper[4689]: I1210 12:17:56.028453 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Dec 10 12:17:56 crc kubenswrapper[4689]: I1210 12:17:56.048716 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Dec 10 12:17:56 crc kubenswrapper[4689]: I1210 12:17:56.069570 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Dec 10 12:17:56 
crc kubenswrapper[4689]: I1210 12:17:56.089387 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Dec 10 12:17:56 crc kubenswrapper[4689]: I1210 12:17:56.109362 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Dec 10 12:17:56 crc kubenswrapper[4689]: I1210 12:17:56.129485 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Dec 10 12:17:56 crc kubenswrapper[4689]: I1210 12:17:56.165826 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Dec 10 12:17:56 crc kubenswrapper[4689]: I1210 12:17:56.168541 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Dec 10 12:17:56 crc kubenswrapper[4689]: I1210 12:17:56.189677 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Dec 10 12:17:56 crc kubenswrapper[4689]: I1210 12:17:56.208153 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Dec 10 12:17:56 crc kubenswrapper[4689]: I1210 12:17:56.228879 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Dec 10 12:17:56 crc kubenswrapper[4689]: I1210 12:17:56.248692 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Dec 10 12:17:56 crc kubenswrapper[4689]: I1210 12:17:56.269513 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Dec 10 12:17:56 crc kubenswrapper[4689]: I1210 12:17:56.288568 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Dec 10 12:17:56 crc kubenswrapper[4689]: I1210 12:17:56.308667 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Dec 10 12:17:56 crc kubenswrapper[4689]: I1210 12:17:56.329146 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Dec 10 12:17:56 crc kubenswrapper[4689]: I1210 12:17:56.348396 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Dec 10 12:17:56 crc kubenswrapper[4689]: I1210 12:17:56.369636 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Dec 10 12:17:56 crc kubenswrapper[4689]: I1210 12:17:56.388416 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Dec 10 12:17:56 crc kubenswrapper[4689]: I1210 12:17:56.408831 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Dec 10 12:17:56 crc kubenswrapper[4689]: I1210 12:17:56.428527 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Dec 10 12:17:56 crc 
kubenswrapper[4689]: I1210 12:17:56.450268 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Dec 10 12:17:56 crc kubenswrapper[4689]: I1210 12:17:56.468889 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Dec 10 12:17:56 crc kubenswrapper[4689]: I1210 12:17:56.489190 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Dec 10 12:17:56 crc kubenswrapper[4689]: I1210 12:17:56.508369 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Dec 10 12:17:56 crc kubenswrapper[4689]: I1210 12:17:56.528796 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Dec 10 12:17:56 crc kubenswrapper[4689]: I1210 12:17:56.548941 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Dec 10 12:17:56 crc kubenswrapper[4689]: I1210 12:17:56.568126 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Dec 10 12:17:56 crc kubenswrapper[4689]: I1210 12:17:56.589594 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-g8snw" event={"ID":"7dbd6f55-bc16-49a6-b385-1ba0b50003cd","Type":"ContainerStarted","Data":"b336e39b22baae35f0d5d1ceccac2a946bedaa2e8159f6328c5f4abb81ac95d8"} Dec 10 12:17:56 crc kubenswrapper[4689]: I1210 12:17:56.589799 4689 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Dec 10 12:17:56 crc kubenswrapper[4689]: I1210 12:17:56.608622 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Dec 10 12:17:56 crc kubenswrapper[4689]: I1210 12:17:56.628530 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Dec 10 12:17:56 crc kubenswrapper[4689]: I1210 12:17:56.649520 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Dec 10 12:17:56 crc kubenswrapper[4689]: I1210 12:17:56.669475 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Dec 10 12:17:56 crc kubenswrapper[4689]: I1210 12:17:56.716269 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wvxwm\" (UniqueName: \"kubernetes.io/projected/1eb8e1ca-e0fa-448e-a502-a75ac460a5ca-kube-api-access-wvxwm\") pod \"machine-approver-56656f9798-cpfdd\" (UID: \"1eb8e1ca-e0fa-448e-a502-a75ac460a5ca\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-cpfdd" Dec 10 12:17:56 crc kubenswrapper[4689]: I1210 12:17:56.736560 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x488g\" (UniqueName: \"kubernetes.io/projected/86359667-d51a-449a-925d-375a1e98a8ca-kube-api-access-x488g\") pod \"authentication-operator-69f744f599-2gdrs\" (UID: \"86359667-d51a-449a-925d-375a1e98a8ca\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-2gdrs" Dec 10 12:17:56 crc kubenswrapper[4689]: I1210 12:17:56.745773 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bvwvd\" (UniqueName: 
\"kubernetes.io/projected/d9195e8d-4ad5-406b-b2fe-1c107df51433-kube-api-access-bvwvd\") pod \"apiserver-7bbb656c7d-mdqrt\" (UID: \"d9195e8d-4ad5-406b-b2fe-1c107df51433\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-mdqrt" Dec 10 12:17:56 crc kubenswrapper[4689]: I1210 12:17:56.765859 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tmpvl\" (UniqueName: \"kubernetes.io/projected/cd45363e-de7e-4e91-afe1-f82948764f4f-kube-api-access-tmpvl\") pod \"downloads-7954f5f757-kpp45\" (UID: \"cd45363e-de7e-4e91-afe1-f82948764f4f\") " pod="openshift-console/downloads-7954f5f757-kpp45" Dec 10 12:17:56 crc kubenswrapper[4689]: I1210 12:17:56.794497 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tg4xk\" (UniqueName: \"kubernetes.io/projected/762dce0f-3636-41ab-8d78-1b69e0fa714a-kube-api-access-tg4xk\") pod \"cluster-samples-operator-665b6dd947-7pv69\" (UID: \"762dce0f-3636-41ab-8d78-1b69e0fa714a\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-7pv69" Dec 10 12:17:56 crc kubenswrapper[4689]: I1210 12:17:56.808689 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Dec 10 12:17:56 crc kubenswrapper[4689]: I1210 12:17:56.817477 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6cm2v\" (UniqueName: \"kubernetes.io/projected/481fe942-34c8-42d9-9925-a674da9ca453-kube-api-access-6cm2v\") pod \"openshift-controller-manager-operator-756b6f6bc6-mx2mm\" (UID: \"481fe942-34c8-42d9-9925-a674da9ca453\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-mx2mm" Dec 10 12:17:56 crc kubenswrapper[4689]: I1210 12:17:56.828885 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Dec 10 12:17:56 crc kubenswrapper[4689]: I1210 12:17:56.848791 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Dec 10 12:17:56 crc kubenswrapper[4689]: I1210 12:17:56.885033 4689 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-cpfdd" Dec 10 12:17:56 crc kubenswrapper[4689]: I1210 12:17:56.889043 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Dec 10 12:17:56 crc kubenswrapper[4689]: I1210 12:17:56.895279 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f02b797-c51d-4614-b1d6-b2b3294342d0-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-ms4wp\" (UID: \"8f02b797-c51d-4614-b1d6-b2b3294342d0\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-ms4wp" Dec 10 12:17:56 crc kubenswrapper[4689]: I1210 12:17:56.926298 4689 request.go:700] Waited for 1.950742315s due to client-side throttling, not priority and fairness, request: POST:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-api/serviceaccounts/machine-api-operator/token Dec 10 12:17:56 crc kubenswrapper[4689]: I1210 12:17:56.941026 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4hvcm\" (UniqueName: \"kubernetes.io/projected/ee767cde-d698-4c01-b221-33c158999e60-kube-api-access-4hvcm\") pod \"console-f9d7485db-9s8ps\" (UID: \"ee767cde-d698-4c01-b221-33c158999e60\") " pod="openshift-console/console-f9d7485db-9s8ps" Dec 10 12:17:56 crc kubenswrapper[4689]: I1210 12:17:56.948933 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6t6xj\" (UniqueName: \"kubernetes.io/projected/3eece420-e32c-4ea1-91f8-cc96bf144467-kube-api-access-6t6xj\") pod \"machine-api-operator-5694c8668f-s8lmw\" (UID: \"3eece420-e32c-4ea1-91f8-cc96bf144467\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-s8lmw" Dec 10 12:17:56 crc kubenswrapper[4689]: I1210 12:17:56.966844 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qf4w7\" (UniqueName: \"kubernetes.io/projected/2e2364cc-104f-4237-9ad5-c121a1c3fba6-kube-api-access-qf4w7\") pod \"oauth-openshift-558db77b4-7jsv6\" (UID: \"2e2364cc-104f-4237-9ad5-c121a1c3fba6\") " pod="openshift-authentication/oauth-openshift-558db77b4-7jsv6" Dec 10 12:17:56 crc kubenswrapper[4689]: I1210 12:17:56.979456 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-mdqrt" Dec 10 12:17:56 crc kubenswrapper[4689]: I1210 12:17:56.979838 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-s8lmw" Dec 10 12:17:56 crc kubenswrapper[4689]: I1210 12:17:56.988667 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t5pth\" (UniqueName: \"kubernetes.io/projected/2aed137f-a24c-42c9-a0af-9ada3eeffa88-kube-api-access-t5pth\") pod \"console-operator-58897d9998-ggzn5\" (UID: \"2aed137f-a24c-42c9-a0af-9ada3eeffa88\") " pod="openshift-console-operator/console-operator-58897d9998-ggzn5" Dec 10 12:17:56 crc kubenswrapper[4689]: I1210 12:17:56.988892 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-2gdrs" Dec 10 12:17:57 crc kubenswrapper[4689]: I1210 12:17:57.001029 4689 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-7jsv6" Dec 10 12:17:57 crc kubenswrapper[4689]: I1210 12:17:57.017132 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kxwpq\" (UniqueName: \"kubernetes.io/projected/d8990fd9-cdfb-4c98-80b3-794f1b371ee5-kube-api-access-kxwpq\") pod \"route-controller-manager-6576b87f9c-sfzm5\" (UID: \"d8990fd9-cdfb-4c98-80b3-794f1b371ee5\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-sfzm5" Dec 10 12:17:57 crc kubenswrapper[4689]: I1210 12:17:57.019624 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-7pv69" Dec 10 12:17:57 crc kubenswrapper[4689]: I1210 12:17:57.027939 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5w5pl\" (UniqueName: \"kubernetes.io/projected/b81b90f7-72e4-4a8d-881f-d117310a4a25-kube-api-access-5w5pl\") pod \"openshift-config-operator-7777fb866f-fz4nm\" (UID: \"b81b90f7-72e4-4a8d-881f-d117310a4a25\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-fz4nm" Dec 10 12:17:57 crc kubenswrapper[4689]: I1210 12:17:57.034423 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-ggzn5" Dec 10 12:17:57 crc kubenswrapper[4689]: I1210 12:17:57.045269 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-mx2mm" Dec 10 12:17:57 crc kubenswrapper[4689]: I1210 12:17:57.052211 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b7dbh\" (UniqueName: \"kubernetes.io/projected/9a3f8229-cf04-4e18-8dc3-c9a63c4b46b4-kube-api-access-b7dbh\") pod \"openshift-apiserver-operator-796bbdcf4f-rn6ml\" (UID: \"9a3f8229-cf04-4e18-8dc3-c9a63c4b46b4\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-rn6ml" Dec 10 12:17:57 crc kubenswrapper[4689]: I1210 12:17:57.054104 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-kpp45" Dec 10 12:17:57 crc kubenswrapper[4689]: I1210 12:17:57.060468 4689 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-9s8ps" Dec 10 12:17:57 crc kubenswrapper[4689]: I1210 12:17:57.069429 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lxbh6\" (UniqueName: \"kubernetes.io/projected/fd353a2b-c325-44b6-9e25-6a4c39213f9e-kube-api-access-lxbh6\") pod \"controller-manager-879f6c89f-htx2z\" (UID: \"fd353a2b-c325-44b6-9e25-6a4c39213f9e\") " pod="openshift-controller-manager/controller-manager-879f6c89f-htx2z" Dec 10 12:17:57 crc kubenswrapper[4689]: I1210 12:17:57.095317 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n6jpw\" (UniqueName: \"kubernetes.io/projected/8f02b797-c51d-4614-b1d6-b2b3294342d0-kube-api-access-n6jpw\") pod \"cluster-image-registry-operator-dc59b4c8b-ms4wp\" (UID: \"8f02b797-c51d-4614-b1d6-b2b3294342d0\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-ms4wp" Dec 10 12:17:57 crc kubenswrapper[4689]: I1210 12:17:57.096150 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pxtnw\" (UniqueName: \"kubernetes.io/projected/dab0bc5b-ec14-46fa-9304-ab1b3b556eeb-kube-api-access-pxtnw\") pod \"image-registry-697d97f7c8-jw2qz\" (UID: \"dab0bc5b-ec14-46fa-9304-ab1b3b556eeb\") " pod="openshift-image-registry/image-registry-697d97f7c8-jw2qz" Dec 10 12:17:57 crc kubenswrapper[4689]: I1210 12:17:57.096218 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/27f4e33e-8aae-44e8-918c-f25bf7eb9e37-serving-cert\") pod \"etcd-operator-b45778765-shthb\" (UID: \"27f4e33e-8aae-44e8-918c-f25bf7eb9e37\") " pod="openshift-etcd-operator/etcd-operator-b45778765-shthb" Dec 10 12:17:57 crc kubenswrapper[4689]: I1210 12:17:57.096270 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/27f4e33e-8aae-44e8-918c-f25bf7eb9e37-etcd-client\") pod \"etcd-operator-b45778765-shthb\" (UID: \"27f4e33e-8aae-44e8-918c-f25bf7eb9e37\") " pod="openshift-etcd-operator/etcd-operator-b45778765-shthb" Dec 10 12:17:57 crc kubenswrapper[4689]: I1210 12:17:57.096341 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/27f4e33e-8aae-44e8-918c-f25bf7eb9e37-etcd-service-ca\") pod \"etcd-operator-b45778765-shthb\" (UID: \"27f4e33e-8aae-44e8-918c-f25bf7eb9e37\") " pod="openshift-etcd-operator/etcd-operator-b45778765-shthb" Dec 10 12:17:57 crc kubenswrapper[4689]: I1210 12:17:57.096382 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/dab0bc5b-ec14-46fa-9304-ab1b3b556eeb-ca-trust-extracted\") pod \"image-registry-697d97f7c8-jw2qz\" (UID: \"dab0bc5b-ec14-46fa-9304-ab1b3b556eeb\") " pod="openshift-image-registry/image-registry-697d97f7c8-jw2qz" Dec 10 12:17:57 crc kubenswrapper[4689]: I1210 12:17:57.096415 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lsjgq\" (UniqueName: \"kubernetes.io/projected/27f4e33e-8aae-44e8-918c-f25bf7eb9e37-kube-api-access-lsjgq\") pod \"etcd-operator-b45778765-shthb\" (UID: \"27f4e33e-8aae-44e8-918c-f25bf7eb9e37\") " pod="openshift-etcd-operator/etcd-operator-b45778765-shthb" Dec 10 12:17:57 crc 
kubenswrapper[4689]: I1210 12:17:57.096476 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jw2qz\" (UID: \"dab0bc5b-ec14-46fa-9304-ab1b3b556eeb\") " pod="openshift-image-registry/image-registry-697d97f7c8-jw2qz" Dec 10 12:17:57 crc kubenswrapper[4689]: I1210 12:17:57.096525 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/27f4e33e-8aae-44e8-918c-f25bf7eb9e37-config\") pod \"etcd-operator-b45778765-shthb\" (UID: \"27f4e33e-8aae-44e8-918c-f25bf7eb9e37\") " pod="openshift-etcd-operator/etcd-operator-b45778765-shthb" Dec 10 12:17:57 crc kubenswrapper[4689]: I1210 12:17:57.096568 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/dab0bc5b-ec14-46fa-9304-ab1b3b556eeb-registry-certificates\") pod \"image-registry-697d97f7c8-jw2qz\" (UID: \"dab0bc5b-ec14-46fa-9304-ab1b3b556eeb\") " pod="openshift-image-registry/image-registry-697d97f7c8-jw2qz" Dec 10 12:17:57 crc kubenswrapper[4689]: I1210 12:17:57.096599 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/dab0bc5b-ec14-46fa-9304-ab1b3b556eeb-installation-pull-secrets\") pod \"image-registry-697d97f7c8-jw2qz\" (UID: \"dab0bc5b-ec14-46fa-9304-ab1b3b556eeb\") " pod="openshift-image-registry/image-registry-697d97f7c8-jw2qz" Dec 10 12:17:57 crc kubenswrapper[4689]: I1210 12:17:57.096632 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/27f4e33e-8aae-44e8-918c-f25bf7eb9e37-etcd-ca\") pod \"etcd-operator-b45778765-shthb\" (UID: \"27f4e33e-8aae-44e8-918c-f25bf7eb9e37\") " pod="openshift-etcd-operator/etcd-operator-b45778765-shthb" Dec 10 12:17:57 crc kubenswrapper[4689]: I1210 12:17:57.096684 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/dab0bc5b-ec14-46fa-9304-ab1b3b556eeb-bound-sa-token\") pod \"image-registry-697d97f7c8-jw2qz\" (UID: \"dab0bc5b-ec14-46fa-9304-ab1b3b556eeb\") " pod="openshift-image-registry/image-registry-697d97f7c8-jw2qz" Dec 10 12:17:57 crc kubenswrapper[4689]: I1210 12:17:57.096717 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/dab0bc5b-ec14-46fa-9304-ab1b3b556eeb-trusted-ca\") pod \"image-registry-697d97f7c8-jw2qz\" (UID: \"dab0bc5b-ec14-46fa-9304-ab1b3b556eeb\") " pod="openshift-image-registry/image-registry-697d97f7c8-jw2qz" Dec 10 12:17:57 crc kubenswrapper[4689]: I1210 12:17:57.096759 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/dab0bc5b-ec14-46fa-9304-ab1b3b556eeb-registry-tls\") pod \"image-registry-697d97f7c8-jw2qz\" (UID: \"dab0bc5b-ec14-46fa-9304-ab1b3b556eeb\") " pod="openshift-image-registry/image-registry-697d97f7c8-jw2qz" Dec 10 12:17:57 crc kubenswrapper[4689]: E1210 12:17:57.097292 4689 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-10 12:17:57.597271245 +0000 UTC m=+145.385352423 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jw2qz" (UID: "dab0bc5b-ec14-46fa-9304-ab1b3b556eeb") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 12:17:57 crc kubenswrapper[4689]: I1210 12:17:57.128410 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Dec 10 12:17:57 crc kubenswrapper[4689]: I1210 12:17:57.152793 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Dec 10 12:17:57 crc kubenswrapper[4689]: I1210 12:17:57.170847 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Dec 10 12:17:57 crc kubenswrapper[4689]: I1210 12:17:57.187856 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Dec 10 12:17:57 crc kubenswrapper[4689]: I1210 12:17:57.197420 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 10 12:17:57 crc kubenswrapper[4689]: E1210 12:17:57.197552 4689 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-10 12:17:57.697528529 +0000 UTC m=+145.485609697 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 12:17:57 crc kubenswrapper[4689]: I1210 12:17:57.197847 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/abad9838-aab6-4f7c-a042-83924f9e0809-plugins-dir\") pod \"csi-hostpathplugin-d7pkc\" (UID: \"abad9838-aab6-4f7c-a042-83924f9e0809\") " pod="hostpath-provisioner/csi-hostpathplugin-d7pkc" Dec 10 12:17:57 crc kubenswrapper[4689]: I1210 12:17:57.197886 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/265037d4-b03f-4d1b-a643-5b5eb4e59738-stats-auth\") pod \"router-default-5444994796-xmwb5\" (UID: \"265037d4-b03f-4d1b-a643-5b5eb4e59738\") " pod="openshift-ingress/router-default-5444994796-xmwb5" Dec 10 12:17:57 crc kubenswrapper[4689]: I1210 12:17:57.197909 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/cca4e9b1-a2a3-4d9b-a73c-e6d7d4552e97-node-bootstrap-token\") pod \"machine-config-server-zjvxn\" (UID: \"cca4e9b1-a2a3-4d9b-a73c-e6d7d4552e97\") " pod="openshift-machine-config-operator/machine-config-server-zjvxn" Dec 10 12:17:57 crc kubenswrapper[4689]: I1210 12:17:57.197934 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/8ea56da0-e5ef-406e-bff9-b8652666745a-srv-cert\") pod \"olm-operator-6b444d44fb-69gng\" (UID: \"8ea56da0-e5ef-406e-bff9-b8652666745a\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-69gng" Dec 10 12:17:57 crc kubenswrapper[4689]: I1210 12:17:57.197955 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/1a410bee-f5f8-4ab2-9750-3afed0e312b2-metrics-tls\") pod \"ingress-operator-5b745b69d9-dpmvq\" (UID: \"1a410bee-f5f8-4ab2-9750-3afed0e312b2\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-dpmvq" Dec 10 12:17:57 crc kubenswrapper[4689]: I1210 12:17:57.197996 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-79v8n\" (UniqueName: \"kubernetes.io/projected/1a410bee-f5f8-4ab2-9750-3afed0e312b2-kube-api-access-79v8n\") pod \"ingress-operator-5b745b69d9-dpmvq\" (UID: \"1a410bee-f5f8-4ab2-9750-3afed0e312b2\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-dpmvq" Dec 10 12:17:57 crc kubenswrapper[4689]: I1210 12:17:57.198021 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/5c541a72-1484-41a6-b251-748306e1068d-metrics-tls\") pod \"dns-default-tzqjc\" (UID: \"5c541a72-1484-41a6-b251-748306e1068d\") " pod="openshift-dns/dns-default-tzqjc" Dec 10 12:17:57 crc kubenswrapper[4689]: I1210 12:17:57.198042 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"proxy-tls\" (UniqueName: \"kubernetes.io/secret/56120454-e6eb-4d76-92ee-b8083c027c12-proxy-tls\") pod \"machine-config-operator-74547568cd-gz5pm\" (UID: \"56120454-e6eb-4d76-92ee-b8083c027c12\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-gz5pm" Dec 10 12:17:57 crc kubenswrapper[4689]: I1210 12:17:57.198087 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c7fe8224-0ecc-4289-9a5d-107f9d6d00b5-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-lkfnf\" (UID: \"c7fe8224-0ecc-4289-9a5d-107f9d6d00b5\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-lkfnf" Dec 10 12:17:57 crc kubenswrapper[4689]: I1210 12:17:57.198107 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5dca2df7-7c4c-40bb-8302-d6c089fd5486-config-volume\") pod \"collect-profiles-29422815-gckmk\" (UID: \"5dca2df7-7c4c-40bb-8302-d6c089fd5486\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29422815-gckmk" Dec 10 12:17:57 crc kubenswrapper[4689]: I1210 12:17:57.198129 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/1a410bee-f5f8-4ab2-9750-3afed0e312b2-bound-sa-token\") pod \"ingress-operator-5b745b69d9-dpmvq\" (UID: \"1a410bee-f5f8-4ab2-9750-3afed0e312b2\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-dpmvq" Dec 10 12:17:57 crc kubenswrapper[4689]: I1210 12:17:57.198702 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pxtnw\" (UniqueName: \"kubernetes.io/projected/dab0bc5b-ec14-46fa-9304-ab1b3b556eeb-kube-api-access-pxtnw\") pod \"image-registry-697d97f7c8-jw2qz\" (UID: \"dab0bc5b-ec14-46fa-9304-ab1b3b556eeb\") " pod="openshift-image-registry/image-registry-697d97f7c8-jw2qz" Dec 10 12:17:57 crc kubenswrapper[4689]: I1210 12:17:57.198745 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/e9883663-c0ff-4454-ba8f-caed05881365-metrics-tls\") pod \"dns-operator-744455d44c-qc8tj\" (UID: \"e9883663-c0ff-4454-ba8f-caed05881365\") " pod="openshift-dns-operator/dns-operator-744455d44c-qc8tj" Dec 10 12:17:57 crc kubenswrapper[4689]: I1210 12:17:57.199342 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/27f4e33e-8aae-44e8-918c-f25bf7eb9e37-etcd-client\") pod \"etcd-operator-b45778765-shthb\" (UID: \"27f4e33e-8aae-44e8-918c-f25bf7eb9e37\") " pod="openshift-etcd-operator/etcd-operator-b45778765-shthb" Dec 10 12:17:57 crc kubenswrapper[4689]: I1210 12:17:57.199649 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-czsbh\" (UniqueName: \"kubernetes.io/projected/5dca2df7-7c4c-40bb-8302-d6c089fd5486-kube-api-access-czsbh\") pod \"collect-profiles-29422815-gckmk\" (UID: \"5dca2df7-7c4c-40bb-8302-d6c089fd5486\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29422815-gckmk" Dec 10 12:17:57 crc kubenswrapper[4689]: I1210 12:17:57.199714 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/27f4e33e-8aae-44e8-918c-f25bf7eb9e37-serving-cert\") 
pod \"etcd-operator-b45778765-shthb\" (UID: \"27f4e33e-8aae-44e8-918c-f25bf7eb9e37\") " pod="openshift-etcd-operator/etcd-operator-b45778765-shthb" Dec 10 12:17:57 crc kubenswrapper[4689]: I1210 12:17:57.199737 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/265037d4-b03f-4d1b-a643-5b5eb4e59738-metrics-certs\") pod \"router-default-5444994796-xmwb5\" (UID: \"265037d4-b03f-4d1b-a643-5b5eb4e59738\") " pod="openshift-ingress/router-default-5444994796-xmwb5" Dec 10 12:17:57 crc kubenswrapper[4689]: I1210 12:17:57.199760 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pvc9n\" (UniqueName: \"kubernetes.io/projected/e9883663-c0ff-4454-ba8f-caed05881365-kube-api-access-pvc9n\") pod \"dns-operator-744455d44c-qc8tj\" (UID: \"e9883663-c0ff-4454-ba8f-caed05881365\") " pod="openshift-dns-operator/dns-operator-744455d44c-qc8tj" Dec 10 12:17:57 crc kubenswrapper[4689]: I1210 12:17:57.201028 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-txwzz\" (UniqueName: \"kubernetes.io/projected/265037d4-b03f-4d1b-a643-5b5eb4e59738-kube-api-access-txwzz\") pod \"router-default-5444994796-xmwb5\" (UID: \"265037d4-b03f-4d1b-a643-5b5eb4e59738\") " pod="openshift-ingress/router-default-5444994796-xmwb5" Dec 10 12:17:57 crc kubenswrapper[4689]: I1210 12:17:57.201084 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/7a4e09c2-7d4d-4078-b9a9-353f4c8063a4-proxy-tls\") pod \"machine-config-controller-84d6567774-9rk82\" (UID: \"7a4e09c2-7d4d-4078-b9a9-353f4c8063a4\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-9rk82" Dec 10 12:17:57 crc kubenswrapper[4689]: I1210 12:17:57.201122 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/265037d4-b03f-4d1b-a643-5b5eb4e59738-default-certificate\") pod \"router-default-5444994796-xmwb5\" (UID: \"265037d4-b03f-4d1b-a643-5b5eb4e59738\") " pod="openshift-ingress/router-default-5444994796-xmwb5" Dec 10 12:17:57 crc kubenswrapper[4689]: I1210 12:17:57.201188 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nrgmn\" (UniqueName: \"kubernetes.io/projected/be303c23-fd28-48b4-9463-65c5167285fe-kube-api-access-nrgmn\") pod \"package-server-manager-789f6589d5-qbqzn\" (UID: \"be303c23-fd28-48b4-9463-65c5167285fe\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-qbqzn" Dec 10 12:17:57 crc kubenswrapper[4689]: I1210 12:17:57.201211 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/56120454-e6eb-4d76-92ee-b8083c027c12-auth-proxy-config\") pod \"machine-config-operator-74547568cd-gz5pm\" (UID: \"56120454-e6eb-4d76-92ee-b8083c027c12\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-gz5pm" Dec 10 12:17:57 crc kubenswrapper[4689]: I1210 12:17:57.201235 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-48x2f\" (UniqueName: \"kubernetes.io/projected/17db82f9-9c6e-492e-8163-958debfe5437-kube-api-access-48x2f\") pod 
\"catalog-operator-68c6474976-25n7p\" (UID: \"17db82f9-9c6e-492e-8163-958debfe5437\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-25n7p" Dec 10 12:17:57 crc kubenswrapper[4689]: I1210 12:17:57.201256 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b77tc\" (UniqueName: \"kubernetes.io/projected/88d80bbe-a9ab-4c91-b0b2-485e106dd150-kube-api-access-b77tc\") pod \"marketplace-operator-79b997595-9wlmb\" (UID: \"88d80bbe-a9ab-4c91-b0b2-485e106dd150\") " pod="openshift-marketplace/marketplace-operator-79b997595-9wlmb" Dec 10 12:17:57 crc kubenswrapper[4689]: I1210 12:17:57.202033 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m94zd\" (UniqueName: \"kubernetes.io/projected/3a545142-a29d-4b5c-8c5f-33bf1fc1857f-kube-api-access-m94zd\") pod \"service-ca-9c57cc56f-kbbjg\" (UID: \"3a545142-a29d-4b5c-8c5f-33bf1fc1857f\") " pod="openshift-service-ca/service-ca-9c57cc56f-kbbjg" Dec 10 12:17:57 crc kubenswrapper[4689]: I1210 12:17:57.202085 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4ec880c8-61ec-4f66-976e-b656d28308a9-config\") pod \"kube-apiserver-operator-766d6c64bb-7v6dg\" (UID: \"4ec880c8-61ec-4f66-976e-b656d28308a9\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-7v6dg" Dec 10 12:17:57 crc kubenswrapper[4689]: I1210 12:17:57.202117 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0c1b90bc-688a-4b71-bb7b-a41d80d75f7b-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-z922x\" (UID: \"0c1b90bc-688a-4b71-bb7b-a41d80d75f7b\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-z922x" Dec 10 12:17:57 crc kubenswrapper[4689]: I1210 12:17:57.202143 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/88d80bbe-a9ab-4c91-b0b2-485e106dd150-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-9wlmb\" (UID: \"88d80bbe-a9ab-4c91-b0b2-485e106dd150\") " pod="openshift-marketplace/marketplace-operator-79b997595-9wlmb" Dec 10 12:17:57 crc kubenswrapper[4689]: I1210 12:17:57.202661 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c7fe8224-0ecc-4289-9a5d-107f9d6d00b5-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-lkfnf\" (UID: \"c7fe8224-0ecc-4289-9a5d-107f9d6d00b5\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-lkfnf" Dec 10 12:17:57 crc kubenswrapper[4689]: I1210 12:17:57.202693 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8e1b5e59-64b6-4fe6-ad6f-fa1b521be5e6-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-nqgsm\" (UID: \"8e1b5e59-64b6-4fe6-ad6f-fa1b521be5e6\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-nqgsm" Dec 10 12:17:57 crc kubenswrapper[4689]: I1210 12:17:57.202720 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" 
(UniqueName: \"kubernetes.io/configmap/27f4e33e-8aae-44e8-918c-f25bf7eb9e37-etcd-service-ca\") pod \"etcd-operator-b45778765-shthb\" (UID: \"27f4e33e-8aae-44e8-918c-f25bf7eb9e37\") " pod="openshift-etcd-operator/etcd-operator-b45778765-shthb" Dec 10 12:17:57 crc kubenswrapper[4689]: I1210 12:17:57.202742 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c7fe8224-0ecc-4289-9a5d-107f9d6d00b5-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-lkfnf\" (UID: \"c7fe8224-0ecc-4289-9a5d-107f9d6d00b5\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-lkfnf" Dec 10 12:17:57 crc kubenswrapper[4689]: I1210 12:17:57.202765 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4jssp\" (UniqueName: \"kubernetes.io/projected/38480cb6-15a9-450a-9efd-71a7d346ef7c-kube-api-access-4jssp\") pod \"control-plane-machine-set-operator-78cbb6b69f-rn6lb\" (UID: \"38480cb6-15a9-450a-9efd-71a7d346ef7c\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-rn6lb" Dec 10 12:17:57 crc kubenswrapper[4689]: I1210 12:17:57.202798 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/3a545142-a29d-4b5c-8c5f-33bf1fc1857f-signing-cabundle\") pod \"service-ca-9c57cc56f-kbbjg\" (UID: \"3a545142-a29d-4b5c-8c5f-33bf1fc1857f\") " pod="openshift-service-ca/service-ca-9c57cc56f-kbbjg" Dec 10 12:17:57 crc kubenswrapper[4689]: I1210 12:17:57.202819 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8e1b5e59-64b6-4fe6-ad6f-fa1b521be5e6-config\") pod \"kube-controller-manager-operator-78b949d7b-nqgsm\" (UID: \"8e1b5e59-64b6-4fe6-ad6f-fa1b521be5e6\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-nqgsm" Dec 10 12:17:57 crc kubenswrapper[4689]: I1210 12:17:57.202839 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8e1b5e59-64b6-4fe6-ad6f-fa1b521be5e6-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-nqgsm\" (UID: \"8e1b5e59-64b6-4fe6-ad6f-fa1b521be5e6\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-nqgsm" Dec 10 12:17:57 crc kubenswrapper[4689]: I1210 12:17:57.202877 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lsjgq\" (UniqueName: \"kubernetes.io/projected/27f4e33e-8aae-44e8-918c-f25bf7eb9e37-kube-api-access-lsjgq\") pod \"etcd-operator-b45778765-shthb\" (UID: \"27f4e33e-8aae-44e8-918c-f25bf7eb9e37\") " pod="openshift-etcd-operator/etcd-operator-b45778765-shthb" Dec 10 12:17:57 crc kubenswrapper[4689]: I1210 12:17:57.202922 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/abad9838-aab6-4f7c-a042-83924f9e0809-socket-dir\") pod \"csi-hostpathplugin-d7pkc\" (UID: \"abad9838-aab6-4f7c-a042-83924f9e0809\") " pod="hostpath-provisioner/csi-hostpathplugin-d7pkc" Dec 10 12:17:57 crc kubenswrapper[4689]: I1210 12:17:57.203008 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"config-volume\" (UniqueName: \"kubernetes.io/configmap/5c541a72-1484-41a6-b251-748306e1068d-config-volume\") pod \"dns-default-tzqjc\" (UID: \"5c541a72-1484-41a6-b251-748306e1068d\") " pod="openshift-dns/dns-default-tzqjc" Dec 10 12:17:57 crc kubenswrapper[4689]: I1210 12:17:57.203034 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/30c2b6ef-98b7-4411-a10b-a83925ed5ed0-config\") pod \"service-ca-operator-777779d784-pks9r\" (UID: \"30c2b6ef-98b7-4411-a10b-a83925ed5ed0\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-pks9r" Dec 10 12:17:57 crc kubenswrapper[4689]: I1210 12:17:57.203054 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/1a410bee-f5f8-4ab2-9750-3afed0e312b2-trusted-ca\") pod \"ingress-operator-5b745b69d9-dpmvq\" (UID: \"1a410bee-f5f8-4ab2-9750-3afed0e312b2\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-dpmvq" Dec 10 12:17:57 crc kubenswrapper[4689]: I1210 12:17:57.203113 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4hrh7\" (UniqueName: \"kubernetes.io/projected/d5c163ba-fbd3-49b8-8b2f-ad0de7210e6f-kube-api-access-4hrh7\") pod \"multus-admission-controller-857f4d67dd-8s58v\" (UID: \"d5c163ba-fbd3-49b8-8b2f-ad0de7210e6f\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-8s58v" Dec 10 12:17:57 crc kubenswrapper[4689]: I1210 12:17:57.203197 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w6hgp\" (UniqueName: \"kubernetes.io/projected/56120454-e6eb-4d76-92ee-b8083c027c12-kube-api-access-w6hgp\") pod \"machine-config-operator-74547568cd-gz5pm\" (UID: \"56120454-e6eb-4d76-92ee-b8083c027c12\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-gz5pm" Dec 10 12:17:57 crc kubenswrapper[4689]: I1210 12:17:57.203240 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4ec880c8-61ec-4f66-976e-b656d28308a9-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-7v6dg\" (UID: \"4ec880c8-61ec-4f66-976e-b656d28308a9\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-7v6dg" Dec 10 12:17:57 crc kubenswrapper[4689]: I1210 12:17:57.203262 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-24gv6\" (UniqueName: \"kubernetes.io/projected/cca4e9b1-a2a3-4d9b-a73c-e6d7d4552e97-kube-api-access-24gv6\") pod \"machine-config-server-zjvxn\" (UID: \"cca4e9b1-a2a3-4d9b-a73c-e6d7d4552e97\") " pod="openshift-machine-config-operator/machine-config-server-zjvxn" Dec 10 12:17:57 crc kubenswrapper[4689]: I1210 12:17:57.203284 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/17db82f9-9c6e-492e-8163-958debfe5437-profile-collector-cert\") pod \"catalog-operator-68c6474976-25n7p\" (UID: \"17db82f9-9c6e-492e-8163-958debfe5437\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-25n7p" Dec 10 12:17:57 crc kubenswrapper[4689]: I1210 12:17:57.203351 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/8ea56da0-e5ef-406e-bff9-b8652666745a-profile-collector-cert\") pod \"olm-operator-6b444d44fb-69gng\" (UID: \"8ea56da0-e5ef-406e-bff9-b8652666745a\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-69gng" Dec 10 12:17:57 crc kubenswrapper[4689]: I1210 12:17:57.203411 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/dab0bc5b-ec14-46fa-9304-ab1b3b556eeb-registry-tls\") pod \"image-registry-697d97f7c8-jw2qz\" (UID: \"dab0bc5b-ec14-46fa-9304-ab1b3b556eeb\") " pod="openshift-image-registry/image-registry-697d97f7c8-jw2qz" Dec 10 12:17:57 crc kubenswrapper[4689]: I1210 12:17:57.203447 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/3d09bb78-b81a-4558-8433-192e7fc846df-webhook-cert\") pod \"packageserver-d55dfcdfc-km6qw\" (UID: \"3d09bb78-b81a-4558-8433-192e7fc846df\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-km6qw" Dec 10 12:17:57 crc kubenswrapper[4689]: I1210 12:17:57.203473 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/38480cb6-15a9-450a-9efd-71a7d346ef7c-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-rn6lb\" (UID: \"38480cb6-15a9-450a-9efd-71a7d346ef7c\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-rn6lb" Dec 10 12:17:57 crc kubenswrapper[4689]: I1210 12:17:57.203496 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/88d80bbe-a9ab-4c91-b0b2-485e106dd150-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-9wlmb\" (UID: \"88d80bbe-a9ab-4c91-b0b2-485e106dd150\") " pod="openshift-marketplace/marketplace-operator-79b997595-9wlmb" Dec 10 12:17:57 crc kubenswrapper[4689]: I1210 12:17:57.203518 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7r2qf\" (UniqueName: \"kubernetes.io/projected/52f0638b-0791-4c8a-b86e-8d6ad95a9c05-kube-api-access-7r2qf\") pod \"ingress-canary-2jzgr\" (UID: \"52f0638b-0791-4c8a-b86e-8d6ad95a9c05\") " pod="openshift-ingress-canary/ingress-canary-2jzgr" Dec 10 12:17:57 crc kubenswrapper[4689]: I1210 12:17:57.203573 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/30c2b6ef-98b7-4411-a10b-a83925ed5ed0-serving-cert\") pod \"service-ca-operator-777779d784-pks9r\" (UID: \"30c2b6ef-98b7-4411-a10b-a83925ed5ed0\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-pks9r" Dec 10 12:17:57 crc kubenswrapper[4689]: I1210 12:17:57.204503 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qflwn\" (UniqueName: \"kubernetes.io/projected/30c2b6ef-98b7-4411-a10b-a83925ed5ed0-kube-api-access-qflwn\") pod \"service-ca-operator-777779d784-pks9r\" (UID: \"30c2b6ef-98b7-4411-a10b-a83925ed5ed0\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-pks9r" Dec 10 12:17:57 crc kubenswrapper[4689]: I1210 12:17:57.204536 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0c1b90bc-688a-4b71-bb7b-a41d80d75f7b-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-z922x\" (UID: \"0c1b90bc-688a-4b71-bb7b-a41d80d75f7b\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-z922x" Dec 10 12:17:57 crc kubenswrapper[4689]: I1210 12:17:57.204581 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lm47v\" (UniqueName: \"kubernetes.io/projected/117123b0-4673-4a20-8e0c-7bca235e3168-kube-api-access-lm47v\") pod \"migrator-59844c95c7-tcsl7\" (UID: \"117123b0-4673-4a20-8e0c-7bca235e3168\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-tcsl7" Dec 10 12:17:57 crc kubenswrapper[4689]: I1210 12:17:57.204647 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4ec880c8-61ec-4f66-976e-b656d28308a9-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-7v6dg\" (UID: \"4ec880c8-61ec-4f66-976e-b656d28308a9\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-7v6dg" Dec 10 12:17:57 crc kubenswrapper[4689]: I1210 12:17:57.204730 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/56120454-e6eb-4d76-92ee-b8083c027c12-images\") pod \"machine-config-operator-74547568cd-gz5pm\" (UID: \"56120454-e6eb-4d76-92ee-b8083c027c12\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-gz5pm" Dec 10 12:17:57 crc kubenswrapper[4689]: I1210 12:17:57.204781 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/5dca2df7-7c4c-40bb-8302-d6c089fd5486-secret-volume\") pod \"collect-profiles-29422815-gckmk\" (UID: \"5dca2df7-7c4c-40bb-8302-d6c089fd5486\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29422815-gckmk" Dec 10 12:17:57 crc kubenswrapper[4689]: I1210 12:17:57.204808 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6vx8r\" (UniqueName: \"kubernetes.io/projected/8ea56da0-e5ef-406e-bff9-b8652666745a-kube-api-access-6vx8r\") pod \"olm-operator-6b444d44fb-69gng\" (UID: \"8ea56da0-e5ef-406e-bff9-b8652666745a\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-69gng" Dec 10 12:17:57 crc kubenswrapper[4689]: I1210 12:17:57.206934 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/52f0638b-0791-4c8a-b86e-8d6ad95a9c05-cert\") pod \"ingress-canary-2jzgr\" (UID: \"52f0638b-0791-4c8a-b86e-8d6ad95a9c05\") " pod="openshift-ingress-canary/ingress-canary-2jzgr" Dec 10 12:17:57 crc kubenswrapper[4689]: I1210 12:17:57.206963 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/3d09bb78-b81a-4558-8433-192e7fc846df-tmpfs\") pod \"packageserver-d55dfcdfc-km6qw\" (UID: \"3d09bb78-b81a-4558-8433-192e7fc846df\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-km6qw" Dec 10 12:17:57 crc kubenswrapper[4689]: I1210 12:17:57.207254 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: 
\"kubernetes.io/configmap/27f4e33e-8aae-44e8-918c-f25bf7eb9e37-etcd-service-ca\") pod \"etcd-operator-b45778765-shthb\" (UID: \"27f4e33e-8aae-44e8-918c-f25bf7eb9e37\") " pod="openshift-etcd-operator/etcd-operator-b45778765-shthb" Dec 10 12:17:57 crc kubenswrapper[4689]: I1210 12:17:57.207638 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/abad9838-aab6-4f7c-a042-83924f9e0809-mountpoint-dir\") pod \"csi-hostpathplugin-d7pkc\" (UID: \"abad9838-aab6-4f7c-a042-83924f9e0809\") " pod="hostpath-provisioner/csi-hostpathplugin-d7pkc" Dec 10 12:17:57 crc kubenswrapper[4689]: I1210 12:17:57.207678 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/27f4e33e-8aae-44e8-918c-f25bf7eb9e37-etcd-client\") pod \"etcd-operator-b45778765-shthb\" (UID: \"27f4e33e-8aae-44e8-918c-f25bf7eb9e37\") " pod="openshift-etcd-operator/etcd-operator-b45778765-shthb" Dec 10 12:17:57 crc kubenswrapper[4689]: I1210 12:17:57.207679 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/d5c163ba-fbd3-49b8-8b2f-ad0de7210e6f-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-8s58v\" (UID: \"d5c163ba-fbd3-49b8-8b2f-ad0de7210e6f\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-8s58v" Dec 10 12:17:57 crc kubenswrapper[4689]: I1210 12:17:57.207798 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/3d09bb78-b81a-4558-8433-192e7fc846df-apiservice-cert\") pod \"packageserver-d55dfcdfc-km6qw\" (UID: \"3d09bb78-b81a-4558-8433-192e7fc846df\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-km6qw" Dec 10 12:17:57 crc kubenswrapper[4689]: I1210 12:17:57.207921 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-smsvf\" (UniqueName: \"kubernetes.io/projected/0c1b90bc-688a-4b71-bb7b-a41d80d75f7b-kube-api-access-smsvf\") pod \"kube-storage-version-migrator-operator-b67b599dd-z922x\" (UID: \"0c1b90bc-688a-4b71-bb7b-a41d80d75f7b\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-z922x" Dec 10 12:17:57 crc kubenswrapper[4689]: I1210 12:17:57.208019 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/dab0bc5b-ec14-46fa-9304-ab1b3b556eeb-ca-trust-extracted\") pod \"image-registry-697d97f7c8-jw2qz\" (UID: \"dab0bc5b-ec14-46fa-9304-ab1b3b556eeb\") " pod="openshift-image-registry/image-registry-697d97f7c8-jw2qz" Dec 10 12:17:57 crc kubenswrapper[4689]: I1210 12:17:57.208273 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jw2qz\" (UID: \"dab0bc5b-ec14-46fa-9304-ab1b3b556eeb\") " pod="openshift-image-registry/image-registry-697d97f7c8-jw2qz" Dec 10 12:17:57 crc kubenswrapper[4689]: I1210 12:17:57.208298 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/27f4e33e-8aae-44e8-918c-f25bf7eb9e37-config\") pod 
\"etcd-operator-b45778765-shthb\" (UID: \"27f4e33e-8aae-44e8-918c-f25bf7eb9e37\") " pod="openshift-etcd-operator/etcd-operator-b45778765-shthb" Dec 10 12:17:57 crc kubenswrapper[4689]: I1210 12:17:57.208442 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/17db82f9-9c6e-492e-8163-958debfe5437-srv-cert\") pod \"catalog-operator-68c6474976-25n7p\" (UID: \"17db82f9-9c6e-492e-8163-958debfe5437\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-25n7p" Dec 10 12:17:57 crc kubenswrapper[4689]: I1210 12:17:57.208604 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-fz4nm" Dec 10 12:17:57 crc kubenswrapper[4689]: I1210 12:17:57.208668 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/dab0bc5b-ec14-46fa-9304-ab1b3b556eeb-registry-certificates\") pod \"image-registry-697d97f7c8-jw2qz\" (UID: \"dab0bc5b-ec14-46fa-9304-ab1b3b556eeb\") " pod="openshift-image-registry/image-registry-697d97f7c8-jw2qz" Dec 10 12:17:57 crc kubenswrapper[4689]: I1210 12:17:57.208730 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/dab0bc5b-ec14-46fa-9304-ab1b3b556eeb-installation-pull-secrets\") pod \"image-registry-697d97f7c8-jw2qz\" (UID: \"dab0bc5b-ec14-46fa-9304-ab1b3b556eeb\") " pod="openshift-image-registry/image-registry-697d97f7c8-jw2qz" Dec 10 12:17:57 crc kubenswrapper[4689]: I1210 12:17:57.208750 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-574vq\" (UniqueName: \"kubernetes.io/projected/abad9838-aab6-4f7c-a042-83924f9e0809-kube-api-access-574vq\") pod \"csi-hostpathplugin-d7pkc\" (UID: \"abad9838-aab6-4f7c-a042-83924f9e0809\") " pod="hostpath-provisioner/csi-hostpathplugin-d7pkc" Dec 10 12:17:57 crc kubenswrapper[4689]: I1210 12:17:57.208769 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/be303c23-fd28-48b4-9463-65c5167285fe-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-qbqzn\" (UID: \"be303c23-fd28-48b4-9463-65c5167285fe\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-qbqzn" Dec 10 12:17:57 crc kubenswrapper[4689]: I1210 12:17:57.208811 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nv8w7\" (UniqueName: \"kubernetes.io/projected/3d09bb78-b81a-4558-8433-192e7fc846df-kube-api-access-nv8w7\") pod \"packageserver-d55dfcdfc-km6qw\" (UID: \"3d09bb78-b81a-4558-8433-192e7fc846df\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-km6qw" Dec 10 12:17:57 crc kubenswrapper[4689]: I1210 12:17:57.208829 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/27f4e33e-8aae-44e8-918c-f25bf7eb9e37-etcd-ca\") pod \"etcd-operator-b45778765-shthb\" (UID: \"27f4e33e-8aae-44e8-918c-f25bf7eb9e37\") " pod="openshift-etcd-operator/etcd-operator-b45778765-shthb" Dec 10 12:17:57 crc kubenswrapper[4689]: I1210 12:17:57.208844 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/abad9838-aab6-4f7c-a042-83924f9e0809-csi-data-dir\") pod \"csi-hostpathplugin-d7pkc\" (UID: \"abad9838-aab6-4f7c-a042-83924f9e0809\") " pod="hostpath-provisioner/csi-hostpathplugin-d7pkc" Dec 10 12:17:57 crc kubenswrapper[4689]: I1210 12:17:57.208871 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/3a545142-a29d-4b5c-8c5f-33bf1fc1857f-signing-key\") pod \"service-ca-9c57cc56f-kbbjg\" (UID: \"3a545142-a29d-4b5c-8c5f-33bf1fc1857f\") " pod="openshift-service-ca/service-ca-9c57cc56f-kbbjg" Dec 10 12:17:57 crc kubenswrapper[4689]: I1210 12:17:57.208887 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8mh4h\" (UniqueName: \"kubernetes.io/projected/5c541a72-1484-41a6-b251-748306e1068d-kube-api-access-8mh4h\") pod \"dns-default-tzqjc\" (UID: \"5c541a72-1484-41a6-b251-748306e1068d\") " pod="openshift-dns/dns-default-tzqjc" Dec 10 12:17:57 crc kubenswrapper[4689]: I1210 12:17:57.208903 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/7a4e09c2-7d4d-4078-b9a9-353f4c8063a4-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-9rk82\" (UID: \"7a4e09c2-7d4d-4078-b9a9-353f4c8063a4\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-9rk82" Dec 10 12:17:57 crc kubenswrapper[4689]: E1210 12:17:57.209848 4689 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-10 12:17:57.709834943 +0000 UTC m=+145.497916081 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jw2qz" (UID: "dab0bc5b-ec14-46fa-9304-ab1b3b556eeb") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 12:17:57 crc kubenswrapper[4689]: I1210 12:17:57.210018 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/dab0bc5b-ec14-46fa-9304-ab1b3b556eeb-bound-sa-token\") pod \"image-registry-697d97f7c8-jw2qz\" (UID: \"dab0bc5b-ec14-46fa-9304-ab1b3b556eeb\") " pod="openshift-image-registry/image-registry-697d97f7c8-jw2qz" Dec 10 12:17:57 crc kubenswrapper[4689]: I1210 12:17:57.210541 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/27f4e33e-8aae-44e8-918c-f25bf7eb9e37-etcd-ca\") pod \"etcd-operator-b45778765-shthb\" (UID: \"27f4e33e-8aae-44e8-918c-f25bf7eb9e37\") " pod="openshift-etcd-operator/etcd-operator-b45778765-shthb" Dec 10 12:17:57 crc kubenswrapper[4689]: I1210 12:17:57.212432 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/27f4e33e-8aae-44e8-918c-f25bf7eb9e37-serving-cert\") pod \"etcd-operator-b45778765-shthb\" (UID: \"27f4e33e-8aae-44e8-918c-f25bf7eb9e37\") " pod="openshift-etcd-operator/etcd-operator-b45778765-shthb" Dec 10 12:17:57 crc kubenswrapper[4689]: I1210 12:17:57.210089 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/abad9838-aab6-4f7c-a042-83924f9e0809-registration-dir\") pod \"csi-hostpathplugin-d7pkc\" (UID: \"abad9838-aab6-4f7c-a042-83924f9e0809\") " pod="hostpath-provisioner/csi-hostpathplugin-d7pkc" Dec 10 12:17:57 crc kubenswrapper[4689]: I1210 12:17:57.216614 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/265037d4-b03f-4d1b-a643-5b5eb4e59738-service-ca-bundle\") pod \"router-default-5444994796-xmwb5\" (UID: \"265037d4-b03f-4d1b-a643-5b5eb4e59738\") " pod="openshift-ingress/router-default-5444994796-xmwb5" Dec 10 12:17:57 crc kubenswrapper[4689]: I1210 12:17:57.216633 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jfg7x\" (UniqueName: \"kubernetes.io/projected/7a4e09c2-7d4d-4078-b9a9-353f4c8063a4-kube-api-access-jfg7x\") pod \"machine-config-controller-84d6567774-9rk82\" (UID: \"7a4e09c2-7d4d-4078-b9a9-353f4c8063a4\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-9rk82" Dec 10 12:17:57 crc kubenswrapper[4689]: I1210 12:17:57.216658 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/dab0bc5b-ec14-46fa-9304-ab1b3b556eeb-trusted-ca\") pod \"image-registry-697d97f7c8-jw2qz\" (UID: \"dab0bc5b-ec14-46fa-9304-ab1b3b556eeb\") " pod="openshift-image-registry/image-registry-697d97f7c8-jw2qz" Dec 10 12:17:57 crc kubenswrapper[4689]: I1210 12:17:57.216674 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: 
\"kubernetes.io/secret/cca4e9b1-a2a3-4d9b-a73c-e6d7d4552e97-certs\") pod \"machine-config-server-zjvxn\" (UID: \"cca4e9b1-a2a3-4d9b-a73c-e6d7d4552e97\") " pod="openshift-machine-config-operator/machine-config-server-zjvxn" Dec 10 12:17:57 crc kubenswrapper[4689]: I1210 12:17:57.220964 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/dab0bc5b-ec14-46fa-9304-ab1b3b556eeb-ca-trust-extracted\") pod \"image-registry-697d97f7c8-jw2qz\" (UID: \"dab0bc5b-ec14-46fa-9304-ab1b3b556eeb\") " pod="openshift-image-registry/image-registry-697d97f7c8-jw2qz" Dec 10 12:17:57 crc kubenswrapper[4689]: I1210 12:17:57.221442 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/dab0bc5b-ec14-46fa-9304-ab1b3b556eeb-trusted-ca\") pod \"image-registry-697d97f7c8-jw2qz\" (UID: \"dab0bc5b-ec14-46fa-9304-ab1b3b556eeb\") " pod="openshift-image-registry/image-registry-697d97f7c8-jw2qz" Dec 10 12:17:57 crc kubenswrapper[4689]: I1210 12:17:57.221575 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/27f4e33e-8aae-44e8-918c-f25bf7eb9e37-config\") pod \"etcd-operator-b45778765-shthb\" (UID: \"27f4e33e-8aae-44e8-918c-f25bf7eb9e37\") " pod="openshift-etcd-operator/etcd-operator-b45778765-shthb" Dec 10 12:17:57 crc kubenswrapper[4689]: I1210 12:17:57.222223 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/dab0bc5b-ec14-46fa-9304-ab1b3b556eeb-registry-certificates\") pod \"image-registry-697d97f7c8-jw2qz\" (UID: \"dab0bc5b-ec14-46fa-9304-ab1b3b556eeb\") " pod="openshift-image-registry/image-registry-697d97f7c8-jw2qz" Dec 10 12:17:57 crc kubenswrapper[4689]: I1210 12:17:57.222440 4689 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-htx2z" Dec 10 12:17:57 crc kubenswrapper[4689]: I1210 12:17:57.234859 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/dab0bc5b-ec14-46fa-9304-ab1b3b556eeb-installation-pull-secrets\") pod \"image-registry-697d97f7c8-jw2qz\" (UID: \"dab0bc5b-ec14-46fa-9304-ab1b3b556eeb\") " pod="openshift-image-registry/image-registry-697d97f7c8-jw2qz" Dec 10 12:17:57 crc kubenswrapper[4689]: I1210 12:17:57.237475 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/dab0bc5b-ec14-46fa-9304-ab1b3b556eeb-registry-tls\") pod \"image-registry-697d97f7c8-jw2qz\" (UID: \"dab0bc5b-ec14-46fa-9304-ab1b3b556eeb\") " pod="openshift-image-registry/image-registry-697d97f7c8-jw2qz" Dec 10 12:17:57 crc kubenswrapper[4689]: I1210 12:17:57.243729 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pxtnw\" (UniqueName: \"kubernetes.io/projected/dab0bc5b-ec14-46fa-9304-ab1b3b556eeb-kube-api-access-pxtnw\") pod \"image-registry-697d97f7c8-jw2qz\" (UID: \"dab0bc5b-ec14-46fa-9304-ab1b3b556eeb\") " pod="openshift-image-registry/image-registry-697d97f7c8-jw2qz" Dec 10 12:17:57 crc kubenswrapper[4689]: I1210 12:17:57.260528 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lsjgq\" (UniqueName: \"kubernetes.io/projected/27f4e33e-8aae-44e8-918c-f25bf7eb9e37-kube-api-access-lsjgq\") pod \"etcd-operator-b45778765-shthb\" (UID: \"27f4e33e-8aae-44e8-918c-f25bf7eb9e37\") " pod="openshift-etcd-operator/etcd-operator-b45778765-shthb" Dec 10 12:17:57 crc kubenswrapper[4689]: I1210 12:17:57.266777 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-rn6ml" Dec 10 12:17:57 crc kubenswrapper[4689]: I1210 12:17:57.281779 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/dab0bc5b-ec14-46fa-9304-ab1b3b556eeb-bound-sa-token\") pod \"image-registry-697d97f7c8-jw2qz\" (UID: \"dab0bc5b-ec14-46fa-9304-ab1b3b556eeb\") " pod="openshift-image-registry/image-registry-697d97f7c8-jw2qz" Dec 10 12:17:57 crc kubenswrapper[4689]: I1210 12:17:57.313366 4689 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-sfzm5" Dec 10 12:17:57 crc kubenswrapper[4689]: I1210 12:17:57.317451 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 10 12:17:57 crc kubenswrapper[4689]: I1210 12:17:57.317652 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7r2qf\" (UniqueName: \"kubernetes.io/projected/52f0638b-0791-4c8a-b86e-8d6ad95a9c05-kube-api-access-7r2qf\") pod \"ingress-canary-2jzgr\" (UID: \"52f0638b-0791-4c8a-b86e-8d6ad95a9c05\") " pod="openshift-ingress-canary/ingress-canary-2jzgr" Dec 10 12:17:57 crc kubenswrapper[4689]: I1210 12:17:57.317678 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/30c2b6ef-98b7-4411-a10b-a83925ed5ed0-serving-cert\") pod \"service-ca-operator-777779d784-pks9r\" (UID: \"30c2b6ef-98b7-4411-a10b-a83925ed5ed0\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-pks9r" Dec 10 12:17:57 crc kubenswrapper[4689]: I1210 12:17:57.317698 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/88d80bbe-a9ab-4c91-b0b2-485e106dd150-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-9wlmb\" (UID: \"88d80bbe-a9ab-4c91-b0b2-485e106dd150\") " pod="openshift-marketplace/marketplace-operator-79b997595-9wlmb" Dec 10 12:17:57 crc kubenswrapper[4689]: I1210 12:17:57.317715 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0c1b90bc-688a-4b71-bb7b-a41d80d75f7b-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-z922x\" (UID: \"0c1b90bc-688a-4b71-bb7b-a41d80d75f7b\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-z922x" Dec 10 12:17:57 crc kubenswrapper[4689]: I1210 12:17:57.317731 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qflwn\" (UniqueName: \"kubernetes.io/projected/30c2b6ef-98b7-4411-a10b-a83925ed5ed0-kube-api-access-qflwn\") pod \"service-ca-operator-777779d784-pks9r\" (UID: \"30c2b6ef-98b7-4411-a10b-a83925ed5ed0\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-pks9r" Dec 10 12:17:57 crc kubenswrapper[4689]: I1210 12:17:57.317754 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lm47v\" (UniqueName: \"kubernetes.io/projected/117123b0-4673-4a20-8e0c-7bca235e3168-kube-api-access-lm47v\") pod \"migrator-59844c95c7-tcsl7\" (UID: \"117123b0-4673-4a20-8e0c-7bca235e3168\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-tcsl7" Dec 10 12:17:57 crc kubenswrapper[4689]: I1210 12:17:57.317772 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4ec880c8-61ec-4f66-976e-b656d28308a9-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-7v6dg\" (UID: \"4ec880c8-61ec-4f66-976e-b656d28308a9\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-7v6dg" Dec 10 12:17:57 crc 
kubenswrapper[4689]: I1210 12:17:57.317792 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/56120454-e6eb-4d76-92ee-b8083c027c12-images\") pod \"machine-config-operator-74547568cd-gz5pm\" (UID: \"56120454-e6eb-4d76-92ee-b8083c027c12\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-gz5pm" Dec 10 12:17:57 crc kubenswrapper[4689]: I1210 12:17:57.317808 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6vx8r\" (UniqueName: \"kubernetes.io/projected/8ea56da0-e5ef-406e-bff9-b8652666745a-kube-api-access-6vx8r\") pod \"olm-operator-6b444d44fb-69gng\" (UID: \"8ea56da0-e5ef-406e-bff9-b8652666745a\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-69gng" Dec 10 12:17:57 crc kubenswrapper[4689]: I1210 12:17:57.317824 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/5dca2df7-7c4c-40bb-8302-d6c089fd5486-secret-volume\") pod \"collect-profiles-29422815-gckmk\" (UID: \"5dca2df7-7c4c-40bb-8302-d6c089fd5486\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29422815-gckmk" Dec 10 12:17:57 crc kubenswrapper[4689]: I1210 12:17:57.317839 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/52f0638b-0791-4c8a-b86e-8d6ad95a9c05-cert\") pod \"ingress-canary-2jzgr\" (UID: \"52f0638b-0791-4c8a-b86e-8d6ad95a9c05\") " pod="openshift-ingress-canary/ingress-canary-2jzgr" Dec 10 12:17:57 crc kubenswrapper[4689]: I1210 12:17:57.317857 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/3d09bb78-b81a-4558-8433-192e7fc846df-tmpfs\") pod \"packageserver-d55dfcdfc-km6qw\" (UID: \"3d09bb78-b81a-4558-8433-192e7fc846df\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-km6qw" Dec 10 12:17:57 crc kubenswrapper[4689]: I1210 12:17:57.317881 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/abad9838-aab6-4f7c-a042-83924f9e0809-mountpoint-dir\") pod \"csi-hostpathplugin-d7pkc\" (UID: \"abad9838-aab6-4f7c-a042-83924f9e0809\") " pod="hostpath-provisioner/csi-hostpathplugin-d7pkc" Dec 10 12:17:57 crc kubenswrapper[4689]: I1210 12:17:57.317896 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/d5c163ba-fbd3-49b8-8b2f-ad0de7210e6f-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-8s58v\" (UID: \"d5c163ba-fbd3-49b8-8b2f-ad0de7210e6f\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-8s58v" Dec 10 12:17:57 crc kubenswrapper[4689]: I1210 12:17:57.317910 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/3d09bb78-b81a-4558-8433-192e7fc846df-apiservice-cert\") pod \"packageserver-d55dfcdfc-km6qw\" (UID: \"3d09bb78-b81a-4558-8433-192e7fc846df\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-km6qw" Dec 10 12:17:57 crc kubenswrapper[4689]: I1210 12:17:57.317925 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-smsvf\" (UniqueName: \"kubernetes.io/projected/0c1b90bc-688a-4b71-bb7b-a41d80d75f7b-kube-api-access-smsvf\") pod 
\"kube-storage-version-migrator-operator-b67b599dd-z922x\" (UID: \"0c1b90bc-688a-4b71-bb7b-a41d80d75f7b\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-z922x" Dec 10 12:17:57 crc kubenswrapper[4689]: I1210 12:17:57.317950 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/17db82f9-9c6e-492e-8163-958debfe5437-srv-cert\") pod \"catalog-operator-68c6474976-25n7p\" (UID: \"17db82f9-9c6e-492e-8163-958debfe5437\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-25n7p" Dec 10 12:17:57 crc kubenswrapper[4689]: I1210 12:17:57.317980 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-574vq\" (UniqueName: \"kubernetes.io/projected/abad9838-aab6-4f7c-a042-83924f9e0809-kube-api-access-574vq\") pod \"csi-hostpathplugin-d7pkc\" (UID: \"abad9838-aab6-4f7c-a042-83924f9e0809\") " pod="hostpath-provisioner/csi-hostpathplugin-d7pkc" Dec 10 12:17:57 crc kubenswrapper[4689]: I1210 12:17:57.318002 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/be303c23-fd28-48b4-9463-65c5167285fe-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-qbqzn\" (UID: \"be303c23-fd28-48b4-9463-65c5167285fe\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-qbqzn" Dec 10 12:17:57 crc kubenswrapper[4689]: I1210 12:17:57.318021 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nv8w7\" (UniqueName: \"kubernetes.io/projected/3d09bb78-b81a-4558-8433-192e7fc846df-kube-api-access-nv8w7\") pod \"packageserver-d55dfcdfc-km6qw\" (UID: \"3d09bb78-b81a-4558-8433-192e7fc846df\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-km6qw" Dec 10 12:17:57 crc kubenswrapper[4689]: I1210 12:17:57.318038 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/abad9838-aab6-4f7c-a042-83924f9e0809-csi-data-dir\") pod \"csi-hostpathplugin-d7pkc\" (UID: \"abad9838-aab6-4f7c-a042-83924f9e0809\") " pod="hostpath-provisioner/csi-hostpathplugin-d7pkc" Dec 10 12:17:57 crc kubenswrapper[4689]: I1210 12:17:57.318060 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8mh4h\" (UniqueName: \"kubernetes.io/projected/5c541a72-1484-41a6-b251-748306e1068d-kube-api-access-8mh4h\") pod \"dns-default-tzqjc\" (UID: \"5c541a72-1484-41a6-b251-748306e1068d\") " pod="openshift-dns/dns-default-tzqjc" Dec 10 12:17:57 crc kubenswrapper[4689]: I1210 12:17:57.318078 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/7a4e09c2-7d4d-4078-b9a9-353f4c8063a4-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-9rk82\" (UID: \"7a4e09c2-7d4d-4078-b9a9-353f4c8063a4\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-9rk82" Dec 10 12:17:57 crc kubenswrapper[4689]: I1210 12:17:57.318095 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/3a545142-a29d-4b5c-8c5f-33bf1fc1857f-signing-key\") pod \"service-ca-9c57cc56f-kbbjg\" (UID: \"3a545142-a29d-4b5c-8c5f-33bf1fc1857f\") " 
pod="openshift-service-ca/service-ca-9c57cc56f-kbbjg" Dec 10 12:17:57 crc kubenswrapper[4689]: I1210 12:17:57.318111 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/abad9838-aab6-4f7c-a042-83924f9e0809-registration-dir\") pod \"csi-hostpathplugin-d7pkc\" (UID: \"abad9838-aab6-4f7c-a042-83924f9e0809\") " pod="hostpath-provisioner/csi-hostpathplugin-d7pkc" Dec 10 12:17:57 crc kubenswrapper[4689]: I1210 12:17:57.318126 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/265037d4-b03f-4d1b-a643-5b5eb4e59738-service-ca-bundle\") pod \"router-default-5444994796-xmwb5\" (UID: \"265037d4-b03f-4d1b-a643-5b5eb4e59738\") " pod="openshift-ingress/router-default-5444994796-xmwb5" Dec 10 12:17:57 crc kubenswrapper[4689]: I1210 12:17:57.318142 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jfg7x\" (UniqueName: \"kubernetes.io/projected/7a4e09c2-7d4d-4078-b9a9-353f4c8063a4-kube-api-access-jfg7x\") pod \"machine-config-controller-84d6567774-9rk82\" (UID: \"7a4e09c2-7d4d-4078-b9a9-353f4c8063a4\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-9rk82" Dec 10 12:17:57 crc kubenswrapper[4689]: I1210 12:17:57.318158 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/cca4e9b1-a2a3-4d9b-a73c-e6d7d4552e97-certs\") pod \"machine-config-server-zjvxn\" (UID: \"cca4e9b1-a2a3-4d9b-a73c-e6d7d4552e97\") " pod="openshift-machine-config-operator/machine-config-server-zjvxn" Dec 10 12:17:57 crc kubenswrapper[4689]: I1210 12:17:57.318175 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/abad9838-aab6-4f7c-a042-83924f9e0809-plugins-dir\") pod \"csi-hostpathplugin-d7pkc\" (UID: \"abad9838-aab6-4f7c-a042-83924f9e0809\") " pod="hostpath-provisioner/csi-hostpathplugin-d7pkc" Dec 10 12:17:57 crc kubenswrapper[4689]: I1210 12:17:57.318191 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/265037d4-b03f-4d1b-a643-5b5eb4e59738-stats-auth\") pod \"router-default-5444994796-xmwb5\" (UID: \"265037d4-b03f-4d1b-a643-5b5eb4e59738\") " pod="openshift-ingress/router-default-5444994796-xmwb5" Dec 10 12:17:57 crc kubenswrapper[4689]: I1210 12:17:57.318206 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/cca4e9b1-a2a3-4d9b-a73c-e6d7d4552e97-node-bootstrap-token\") pod \"machine-config-server-zjvxn\" (UID: \"cca4e9b1-a2a3-4d9b-a73c-e6d7d4552e97\") " pod="openshift-machine-config-operator/machine-config-server-zjvxn" Dec 10 12:17:57 crc kubenswrapper[4689]: I1210 12:17:57.318223 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/8ea56da0-e5ef-406e-bff9-b8652666745a-srv-cert\") pod \"olm-operator-6b444d44fb-69gng\" (UID: \"8ea56da0-e5ef-406e-bff9-b8652666745a\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-69gng" Dec 10 12:17:57 crc kubenswrapper[4689]: I1210 12:17:57.318239 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/1a410bee-f5f8-4ab2-9750-3afed0e312b2-metrics-tls\") 
pod \"ingress-operator-5b745b69d9-dpmvq\" (UID: \"1a410bee-f5f8-4ab2-9750-3afed0e312b2\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-dpmvq" Dec 10 12:17:57 crc kubenswrapper[4689]: I1210 12:17:57.318254 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-79v8n\" (UniqueName: \"kubernetes.io/projected/1a410bee-f5f8-4ab2-9750-3afed0e312b2-kube-api-access-79v8n\") pod \"ingress-operator-5b745b69d9-dpmvq\" (UID: \"1a410bee-f5f8-4ab2-9750-3afed0e312b2\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-dpmvq" Dec 10 12:17:57 crc kubenswrapper[4689]: I1210 12:17:57.318271 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/5c541a72-1484-41a6-b251-748306e1068d-metrics-tls\") pod \"dns-default-tzqjc\" (UID: \"5c541a72-1484-41a6-b251-748306e1068d\") " pod="openshift-dns/dns-default-tzqjc" Dec 10 12:17:57 crc kubenswrapper[4689]: I1210 12:17:57.318293 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/56120454-e6eb-4d76-92ee-b8083c027c12-proxy-tls\") pod \"machine-config-operator-74547568cd-gz5pm\" (UID: \"56120454-e6eb-4d76-92ee-b8083c027c12\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-gz5pm" Dec 10 12:17:57 crc kubenswrapper[4689]: I1210 12:17:57.318308 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c7fe8224-0ecc-4289-9a5d-107f9d6d00b5-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-lkfnf\" (UID: \"c7fe8224-0ecc-4289-9a5d-107f9d6d00b5\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-lkfnf" Dec 10 12:17:57 crc kubenswrapper[4689]: I1210 12:17:57.318324 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5dca2df7-7c4c-40bb-8302-d6c089fd5486-config-volume\") pod \"collect-profiles-29422815-gckmk\" (UID: \"5dca2df7-7c4c-40bb-8302-d6c089fd5486\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29422815-gckmk" Dec 10 12:17:57 crc kubenswrapper[4689]: I1210 12:17:57.318338 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/1a410bee-f5f8-4ab2-9750-3afed0e312b2-bound-sa-token\") pod \"ingress-operator-5b745b69d9-dpmvq\" (UID: \"1a410bee-f5f8-4ab2-9750-3afed0e312b2\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-dpmvq" Dec 10 12:17:57 crc kubenswrapper[4689]: I1210 12:17:57.318354 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/e9883663-c0ff-4454-ba8f-caed05881365-metrics-tls\") pod \"dns-operator-744455d44c-qc8tj\" (UID: \"e9883663-c0ff-4454-ba8f-caed05881365\") " pod="openshift-dns-operator/dns-operator-744455d44c-qc8tj" Dec 10 12:17:57 crc kubenswrapper[4689]: I1210 12:17:57.318371 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-czsbh\" (UniqueName: \"kubernetes.io/projected/5dca2df7-7c4c-40bb-8302-d6c089fd5486-kube-api-access-czsbh\") pod \"collect-profiles-29422815-gckmk\" (UID: \"5dca2df7-7c4c-40bb-8302-d6c089fd5486\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29422815-gckmk" Dec 10 12:17:57 crc kubenswrapper[4689]: I1210 12:17:57.318386 4689 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pvc9n\" (UniqueName: \"kubernetes.io/projected/e9883663-c0ff-4454-ba8f-caed05881365-kube-api-access-pvc9n\") pod \"dns-operator-744455d44c-qc8tj\" (UID: \"e9883663-c0ff-4454-ba8f-caed05881365\") " pod="openshift-dns-operator/dns-operator-744455d44c-qc8tj" Dec 10 12:17:57 crc kubenswrapper[4689]: I1210 12:17:57.318401 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/265037d4-b03f-4d1b-a643-5b5eb4e59738-metrics-certs\") pod \"router-default-5444994796-xmwb5\" (UID: \"265037d4-b03f-4d1b-a643-5b5eb4e59738\") " pod="openshift-ingress/router-default-5444994796-xmwb5" Dec 10 12:17:57 crc kubenswrapper[4689]: I1210 12:17:57.318415 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-txwzz\" (UniqueName: \"kubernetes.io/projected/265037d4-b03f-4d1b-a643-5b5eb4e59738-kube-api-access-txwzz\") pod \"router-default-5444994796-xmwb5\" (UID: \"265037d4-b03f-4d1b-a643-5b5eb4e59738\") " pod="openshift-ingress/router-default-5444994796-xmwb5" Dec 10 12:17:57 crc kubenswrapper[4689]: I1210 12:17:57.318437 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/7a4e09c2-7d4d-4078-b9a9-353f4c8063a4-proxy-tls\") pod \"machine-config-controller-84d6567774-9rk82\" (UID: \"7a4e09c2-7d4d-4078-b9a9-353f4c8063a4\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-9rk82" Dec 10 12:17:57 crc kubenswrapper[4689]: I1210 12:17:57.318452 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/265037d4-b03f-4d1b-a643-5b5eb4e59738-default-certificate\") pod \"router-default-5444994796-xmwb5\" (UID: \"265037d4-b03f-4d1b-a643-5b5eb4e59738\") " pod="openshift-ingress/router-default-5444994796-xmwb5" Dec 10 12:17:57 crc kubenswrapper[4689]: I1210 12:17:57.318467 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/56120454-e6eb-4d76-92ee-b8083c027c12-auth-proxy-config\") pod \"machine-config-operator-74547568cd-gz5pm\" (UID: \"56120454-e6eb-4d76-92ee-b8083c027c12\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-gz5pm" Dec 10 12:17:57 crc kubenswrapper[4689]: I1210 12:17:57.318485 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-48x2f\" (UniqueName: \"kubernetes.io/projected/17db82f9-9c6e-492e-8163-958debfe5437-kube-api-access-48x2f\") pod \"catalog-operator-68c6474976-25n7p\" (UID: \"17db82f9-9c6e-492e-8163-958debfe5437\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-25n7p" Dec 10 12:17:57 crc kubenswrapper[4689]: I1210 12:17:57.318500 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b77tc\" (UniqueName: \"kubernetes.io/projected/88d80bbe-a9ab-4c91-b0b2-485e106dd150-kube-api-access-b77tc\") pod \"marketplace-operator-79b997595-9wlmb\" (UID: \"88d80bbe-a9ab-4c91-b0b2-485e106dd150\") " pod="openshift-marketplace/marketplace-operator-79b997595-9wlmb" Dec 10 12:17:57 crc kubenswrapper[4689]: I1210 12:17:57.318516 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nrgmn\" (UniqueName: 
\"kubernetes.io/projected/be303c23-fd28-48b4-9463-65c5167285fe-kube-api-access-nrgmn\") pod \"package-server-manager-789f6589d5-qbqzn\" (UID: \"be303c23-fd28-48b4-9463-65c5167285fe\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-qbqzn" Dec 10 12:17:57 crc kubenswrapper[4689]: I1210 12:17:57.318532 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4ec880c8-61ec-4f66-976e-b656d28308a9-config\") pod \"kube-apiserver-operator-766d6c64bb-7v6dg\" (UID: \"4ec880c8-61ec-4f66-976e-b656d28308a9\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-7v6dg" Dec 10 12:17:57 crc kubenswrapper[4689]: I1210 12:17:57.318547 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m94zd\" (UniqueName: \"kubernetes.io/projected/3a545142-a29d-4b5c-8c5f-33bf1fc1857f-kube-api-access-m94zd\") pod \"service-ca-9c57cc56f-kbbjg\" (UID: \"3a545142-a29d-4b5c-8c5f-33bf1fc1857f\") " pod="openshift-service-ca/service-ca-9c57cc56f-kbbjg" Dec 10 12:17:57 crc kubenswrapper[4689]: I1210 12:17:57.318563 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/88d80bbe-a9ab-4c91-b0b2-485e106dd150-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-9wlmb\" (UID: \"88d80bbe-a9ab-4c91-b0b2-485e106dd150\") " pod="openshift-marketplace/marketplace-operator-79b997595-9wlmb" Dec 10 12:17:57 crc kubenswrapper[4689]: I1210 12:17:57.318578 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0c1b90bc-688a-4b71-bb7b-a41d80d75f7b-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-z922x\" (UID: \"0c1b90bc-688a-4b71-bb7b-a41d80d75f7b\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-z922x" Dec 10 12:17:57 crc kubenswrapper[4689]: I1210 12:17:57.318594 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8e1b5e59-64b6-4fe6-ad6f-fa1b521be5e6-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-nqgsm\" (UID: \"8e1b5e59-64b6-4fe6-ad6f-fa1b521be5e6\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-nqgsm" Dec 10 12:17:57 crc kubenswrapper[4689]: I1210 12:17:57.318608 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c7fe8224-0ecc-4289-9a5d-107f9d6d00b5-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-lkfnf\" (UID: \"c7fe8224-0ecc-4289-9a5d-107f9d6d00b5\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-lkfnf" Dec 10 12:17:57 crc kubenswrapper[4689]: I1210 12:17:57.318623 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c7fe8224-0ecc-4289-9a5d-107f9d6d00b5-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-lkfnf\" (UID: \"c7fe8224-0ecc-4289-9a5d-107f9d6d00b5\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-lkfnf" Dec 10 12:17:57 crc kubenswrapper[4689]: I1210 12:17:57.318639 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4jssp\" (UniqueName: 
\"kubernetes.io/projected/38480cb6-15a9-450a-9efd-71a7d346ef7c-kube-api-access-4jssp\") pod \"control-plane-machine-set-operator-78cbb6b69f-rn6lb\" (UID: \"38480cb6-15a9-450a-9efd-71a7d346ef7c\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-rn6lb" Dec 10 12:17:57 crc kubenswrapper[4689]: I1210 12:17:57.318654 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/3a545142-a29d-4b5c-8c5f-33bf1fc1857f-signing-cabundle\") pod \"service-ca-9c57cc56f-kbbjg\" (UID: \"3a545142-a29d-4b5c-8c5f-33bf1fc1857f\") " pod="openshift-service-ca/service-ca-9c57cc56f-kbbjg" Dec 10 12:17:57 crc kubenswrapper[4689]: I1210 12:17:57.318668 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8e1b5e59-64b6-4fe6-ad6f-fa1b521be5e6-config\") pod \"kube-controller-manager-operator-78b949d7b-nqgsm\" (UID: \"8e1b5e59-64b6-4fe6-ad6f-fa1b521be5e6\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-nqgsm" Dec 10 12:17:57 crc kubenswrapper[4689]: I1210 12:17:57.318682 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8e1b5e59-64b6-4fe6-ad6f-fa1b521be5e6-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-nqgsm\" (UID: \"8e1b5e59-64b6-4fe6-ad6f-fa1b521be5e6\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-nqgsm" Dec 10 12:17:57 crc kubenswrapper[4689]: I1210 12:17:57.318708 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/abad9838-aab6-4f7c-a042-83924f9e0809-socket-dir\") pod \"csi-hostpathplugin-d7pkc\" (UID: \"abad9838-aab6-4f7c-a042-83924f9e0809\") " pod="hostpath-provisioner/csi-hostpathplugin-d7pkc" Dec 10 12:17:57 crc kubenswrapper[4689]: I1210 12:17:57.318724 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5c541a72-1484-41a6-b251-748306e1068d-config-volume\") pod \"dns-default-tzqjc\" (UID: \"5c541a72-1484-41a6-b251-748306e1068d\") " pod="openshift-dns/dns-default-tzqjc" Dec 10 12:17:57 crc kubenswrapper[4689]: I1210 12:17:57.318742 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/30c2b6ef-98b7-4411-a10b-a83925ed5ed0-config\") pod \"service-ca-operator-777779d784-pks9r\" (UID: \"30c2b6ef-98b7-4411-a10b-a83925ed5ed0\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-pks9r" Dec 10 12:17:57 crc kubenswrapper[4689]: I1210 12:17:57.318758 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/1a410bee-f5f8-4ab2-9750-3afed0e312b2-trusted-ca\") pod \"ingress-operator-5b745b69d9-dpmvq\" (UID: \"1a410bee-f5f8-4ab2-9750-3afed0e312b2\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-dpmvq" Dec 10 12:17:57 crc kubenswrapper[4689]: I1210 12:17:57.318774 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4hrh7\" (UniqueName: \"kubernetes.io/projected/d5c163ba-fbd3-49b8-8b2f-ad0de7210e6f-kube-api-access-4hrh7\") pod \"multus-admission-controller-857f4d67dd-8s58v\" (UID: \"d5c163ba-fbd3-49b8-8b2f-ad0de7210e6f\") " 
pod="openshift-multus/multus-admission-controller-857f4d67dd-8s58v" Dec 10 12:17:57 crc kubenswrapper[4689]: I1210 12:17:57.318801 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w6hgp\" (UniqueName: \"kubernetes.io/projected/56120454-e6eb-4d76-92ee-b8083c027c12-kube-api-access-w6hgp\") pod \"machine-config-operator-74547568cd-gz5pm\" (UID: \"56120454-e6eb-4d76-92ee-b8083c027c12\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-gz5pm" Dec 10 12:17:57 crc kubenswrapper[4689]: I1210 12:17:57.318818 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4ec880c8-61ec-4f66-976e-b656d28308a9-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-7v6dg\" (UID: \"4ec880c8-61ec-4f66-976e-b656d28308a9\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-7v6dg" Dec 10 12:17:57 crc kubenswrapper[4689]: I1210 12:17:57.318833 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/17db82f9-9c6e-492e-8163-958debfe5437-profile-collector-cert\") pod \"catalog-operator-68c6474976-25n7p\" (UID: \"17db82f9-9c6e-492e-8163-958debfe5437\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-25n7p" Dec 10 12:17:57 crc kubenswrapper[4689]: I1210 12:17:57.318849 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-24gv6\" (UniqueName: \"kubernetes.io/projected/cca4e9b1-a2a3-4d9b-a73c-e6d7d4552e97-kube-api-access-24gv6\") pod \"machine-config-server-zjvxn\" (UID: \"cca4e9b1-a2a3-4d9b-a73c-e6d7d4552e97\") " pod="openshift-machine-config-operator/machine-config-server-zjvxn" Dec 10 12:17:57 crc kubenswrapper[4689]: I1210 12:17:57.318865 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/8ea56da0-e5ef-406e-bff9-b8652666745a-profile-collector-cert\") pod \"olm-operator-6b444d44fb-69gng\" (UID: \"8ea56da0-e5ef-406e-bff9-b8652666745a\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-69gng" Dec 10 12:17:57 crc kubenswrapper[4689]: I1210 12:17:57.318882 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/3d09bb78-b81a-4558-8433-192e7fc846df-webhook-cert\") pod \"packageserver-d55dfcdfc-km6qw\" (UID: \"3d09bb78-b81a-4558-8433-192e7fc846df\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-km6qw" Dec 10 12:17:57 crc kubenswrapper[4689]: I1210 12:17:57.318898 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/38480cb6-15a9-450a-9efd-71a7d346ef7c-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-rn6lb\" (UID: \"38480cb6-15a9-450a-9efd-71a7d346ef7c\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-rn6lb" Dec 10 12:17:57 crc kubenswrapper[4689]: I1210 12:17:57.320184 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/abad9838-aab6-4f7c-a042-83924f9e0809-csi-data-dir\") pod \"csi-hostpathplugin-d7pkc\" (UID: \"abad9838-aab6-4f7c-a042-83924f9e0809\") " pod="hostpath-provisioner/csi-hostpathplugin-d7pkc" Dec 10 12:17:57 
crc kubenswrapper[4689]: I1210 12:17:57.320755 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/265037d4-b03f-4d1b-a643-5b5eb4e59738-service-ca-bundle\") pod \"router-default-5444994796-xmwb5\" (UID: \"265037d4-b03f-4d1b-a643-5b5eb4e59738\") " pod="openshift-ingress/router-default-5444994796-xmwb5" Dec 10 12:17:57 crc kubenswrapper[4689]: I1210 12:17:57.321126 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/abad9838-aab6-4f7c-a042-83924f9e0809-registration-dir\") pod \"csi-hostpathplugin-d7pkc\" (UID: \"abad9838-aab6-4f7c-a042-83924f9e0809\") " pod="hostpath-provisioner/csi-hostpathplugin-d7pkc" Dec 10 12:17:57 crc kubenswrapper[4689]: I1210 12:17:57.321907 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0c1b90bc-688a-4b71-bb7b-a41d80d75f7b-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-z922x\" (UID: \"0c1b90bc-688a-4b71-bb7b-a41d80d75f7b\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-z922x" Dec 10 12:17:57 crc kubenswrapper[4689]: I1210 12:17:57.322218 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/17db82f9-9c6e-492e-8163-958debfe5437-srv-cert\") pod \"catalog-operator-68c6474976-25n7p\" (UID: \"17db82f9-9c6e-492e-8163-958debfe5437\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-25n7p" Dec 10 12:17:57 crc kubenswrapper[4689]: E1210 12:17:57.322429 4689 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-10 12:17:57.82241403 +0000 UTC m=+145.610495168 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 12:17:57 crc kubenswrapper[4689]: I1210 12:17:57.322476 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c7fe8224-0ecc-4289-9a5d-107f9d6d00b5-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-lkfnf\" (UID: \"c7fe8224-0ecc-4289-9a5d-107f9d6d00b5\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-lkfnf" Dec 10 12:17:57 crc kubenswrapper[4689]: I1210 12:17:57.322777 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/30c2b6ef-98b7-4411-a10b-a83925ed5ed0-config\") pod \"service-ca-operator-777779d784-pks9r\" (UID: \"30c2b6ef-98b7-4411-a10b-a83925ed5ed0\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-pks9r" Dec 10 12:17:57 crc kubenswrapper[4689]: I1210 12:17:57.322796 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5c541a72-1484-41a6-b251-748306e1068d-config-volume\") pod \"dns-default-tzqjc\" (UID: \"5c541a72-1484-41a6-b251-748306e1068d\") " pod="openshift-dns/dns-default-tzqjc" Dec 10 12:17:57 crc kubenswrapper[4689]: I1210 12:17:57.323254 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/7a4e09c2-7d4d-4078-b9a9-353f4c8063a4-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-9rk82\" (UID: \"7a4e09c2-7d4d-4078-b9a9-353f4c8063a4\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-9rk82" Dec 10 12:17:57 crc kubenswrapper[4689]: I1210 12:17:57.323351 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5dca2df7-7c4c-40bb-8302-d6c089fd5486-config-volume\") pod \"collect-profiles-29422815-gckmk\" (UID: \"5dca2df7-7c4c-40bb-8302-d6c089fd5486\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29422815-gckmk" Dec 10 12:17:57 crc kubenswrapper[4689]: I1210 12:17:57.323463 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/abad9838-aab6-4f7c-a042-83924f9e0809-mountpoint-dir\") pod \"csi-hostpathplugin-d7pkc\" (UID: \"abad9838-aab6-4f7c-a042-83924f9e0809\") " pod="hostpath-provisioner/csi-hostpathplugin-d7pkc" Dec 10 12:17:57 crc kubenswrapper[4689]: I1210 12:17:57.323827 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8e1b5e59-64b6-4fe6-ad6f-fa1b521be5e6-config\") pod \"kube-controller-manager-operator-78b949d7b-nqgsm\" (UID: \"8e1b5e59-64b6-4fe6-ad6f-fa1b521be5e6\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-nqgsm" Dec 10 12:17:57 crc kubenswrapper[4689]: I1210 12:17:57.323948 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: 
\"kubernetes.io/configmap/3a545142-a29d-4b5c-8c5f-33bf1fc1857f-signing-cabundle\") pod \"service-ca-9c57cc56f-kbbjg\" (UID: \"3a545142-a29d-4b5c-8c5f-33bf1fc1857f\") " pod="openshift-service-ca/service-ca-9c57cc56f-kbbjg" Dec 10 12:17:57 crc kubenswrapper[4689]: I1210 12:17:57.325102 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/30c2b6ef-98b7-4411-a10b-a83925ed5ed0-serving-cert\") pod \"service-ca-operator-777779d784-pks9r\" (UID: \"30c2b6ef-98b7-4411-a10b-a83925ed5ed0\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-pks9r" Dec 10 12:17:57 crc kubenswrapper[4689]: I1210 12:17:57.325189 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/be303c23-fd28-48b4-9463-65c5167285fe-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-qbqzn\" (UID: \"be303c23-fd28-48b4-9463-65c5167285fe\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-qbqzn" Dec 10 12:17:57 crc kubenswrapper[4689]: I1210 12:17:57.325714 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/abad9838-aab6-4f7c-a042-83924f9e0809-plugins-dir\") pod \"csi-hostpathplugin-d7pkc\" (UID: \"abad9838-aab6-4f7c-a042-83924f9e0809\") " pod="hostpath-provisioner/csi-hostpathplugin-d7pkc" Dec 10 12:17:57 crc kubenswrapper[4689]: I1210 12:17:57.326410 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/88d80bbe-a9ab-4c91-b0b2-485e106dd150-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-9wlmb\" (UID: \"88d80bbe-a9ab-4c91-b0b2-485e106dd150\") " pod="openshift-marketplace/marketplace-operator-79b997595-9wlmb" Dec 10 12:17:57 crc kubenswrapper[4689]: I1210 12:17:57.327715 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/38480cb6-15a9-450a-9efd-71a7d346ef7c-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-rn6lb\" (UID: \"38480cb6-15a9-450a-9efd-71a7d346ef7c\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-rn6lb" Dec 10 12:17:57 crc kubenswrapper[4689]: I1210 12:17:57.331341 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/d5c163ba-fbd3-49b8-8b2f-ad0de7210e6f-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-8s58v\" (UID: \"d5c163ba-fbd3-49b8-8b2f-ad0de7210e6f\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-8s58v" Dec 10 12:17:57 crc kubenswrapper[4689]: I1210 12:17:57.331841 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/3a545142-a29d-4b5c-8c5f-33bf1fc1857f-signing-key\") pod \"service-ca-9c57cc56f-kbbjg\" (UID: \"3a545142-a29d-4b5c-8c5f-33bf1fc1857f\") " pod="openshift-service-ca/service-ca-9c57cc56f-kbbjg" Dec 10 12:17:57 crc kubenswrapper[4689]: I1210 12:17:57.332659 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/17db82f9-9c6e-492e-8163-958debfe5437-profile-collector-cert\") pod \"catalog-operator-68c6474976-25n7p\" (UID: \"17db82f9-9c6e-492e-8163-958debfe5437\") " 
pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-25n7p" Dec 10 12:17:57 crc kubenswrapper[4689]: I1210 12:17:57.333119 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/265037d4-b03f-4d1b-a643-5b5eb4e59738-metrics-certs\") pod \"router-default-5444994796-xmwb5\" (UID: \"265037d4-b03f-4d1b-a643-5b5eb4e59738\") " pod="openshift-ingress/router-default-5444994796-xmwb5" Dec 10 12:17:57 crc kubenswrapper[4689]: I1210 12:17:57.333323 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/265037d4-b03f-4d1b-a643-5b5eb4e59738-default-certificate\") pod \"router-default-5444994796-xmwb5\" (UID: \"265037d4-b03f-4d1b-a643-5b5eb4e59738\") " pod="openshift-ingress/router-default-5444994796-xmwb5" Dec 10 12:17:57 crc kubenswrapper[4689]: I1210 12:17:57.333378 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/abad9838-aab6-4f7c-a042-83924f9e0809-socket-dir\") pod \"csi-hostpathplugin-d7pkc\" (UID: \"abad9838-aab6-4f7c-a042-83924f9e0809\") " pod="hostpath-provisioner/csi-hostpathplugin-d7pkc" Dec 10 12:17:57 crc kubenswrapper[4689]: I1210 12:17:57.333498 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/5dca2df7-7c4c-40bb-8302-d6c089fd5486-secret-volume\") pod \"collect-profiles-29422815-gckmk\" (UID: \"5dca2df7-7c4c-40bb-8302-d6c089fd5486\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29422815-gckmk" Dec 10 12:17:57 crc kubenswrapper[4689]: I1210 12:17:57.334251 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/7a4e09c2-7d4d-4078-b9a9-353f4c8063a4-proxy-tls\") pod \"machine-config-controller-84d6567774-9rk82\" (UID: \"7a4e09c2-7d4d-4078-b9a9-353f4c8063a4\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-9rk82" Dec 10 12:17:57 crc kubenswrapper[4689]: I1210 12:17:57.334640 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/1a410bee-f5f8-4ab2-9750-3afed0e312b2-metrics-tls\") pod \"ingress-operator-5b745b69d9-dpmvq\" (UID: \"1a410bee-f5f8-4ab2-9750-3afed0e312b2\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-dpmvq" Dec 10 12:17:57 crc kubenswrapper[4689]: I1210 12:17:57.334950 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/3d09bb78-b81a-4558-8433-192e7fc846df-tmpfs\") pod \"packageserver-d55dfcdfc-km6qw\" (UID: \"3d09bb78-b81a-4558-8433-192e7fc846df\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-km6qw" Dec 10 12:17:57 crc kubenswrapper[4689]: I1210 12:17:57.335369 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/5c541a72-1484-41a6-b251-748306e1068d-metrics-tls\") pod \"dns-default-tzqjc\" (UID: \"5c541a72-1484-41a6-b251-748306e1068d\") " pod="openshift-dns/dns-default-tzqjc" Dec 10 12:17:57 crc kubenswrapper[4689]: I1210 12:17:57.335713 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c7fe8224-0ecc-4289-9a5d-107f9d6d00b5-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-lkfnf\" (UID: 
\"c7fe8224-0ecc-4289-9a5d-107f9d6d00b5\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-lkfnf" Dec 10 12:17:57 crc kubenswrapper[4689]: I1210 12:17:57.336690 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/e9883663-c0ff-4454-ba8f-caed05881365-metrics-tls\") pod \"dns-operator-744455d44c-qc8tj\" (UID: \"e9883663-c0ff-4454-ba8f-caed05881365\") " pod="openshift-dns-operator/dns-operator-744455d44c-qc8tj" Dec 10 12:17:57 crc kubenswrapper[4689]: I1210 12:17:57.336796 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8e1b5e59-64b6-4fe6-ad6f-fa1b521be5e6-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-nqgsm\" (UID: \"8e1b5e59-64b6-4fe6-ad6f-fa1b521be5e6\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-nqgsm" Dec 10 12:17:57 crc kubenswrapper[4689]: I1210 12:17:57.337302 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0c1b90bc-688a-4b71-bb7b-a41d80d75f7b-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-z922x\" (UID: \"0c1b90bc-688a-4b71-bb7b-a41d80d75f7b\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-z922x" Dec 10 12:17:57 crc kubenswrapper[4689]: I1210 12:17:57.337435 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/52f0638b-0791-4c8a-b86e-8d6ad95a9c05-cert\") pod \"ingress-canary-2jzgr\" (UID: \"52f0638b-0791-4c8a-b86e-8d6ad95a9c05\") " pod="openshift-ingress-canary/ingress-canary-2jzgr" Dec 10 12:17:57 crc kubenswrapper[4689]: I1210 12:17:57.339463 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/265037d4-b03f-4d1b-a643-5b5eb4e59738-stats-auth\") pod \"router-default-5444994796-xmwb5\" (UID: \"265037d4-b03f-4d1b-a643-5b5eb4e59738\") " pod="openshift-ingress/router-default-5444994796-xmwb5" Dec 10 12:17:57 crc kubenswrapper[4689]: I1210 12:17:57.339913 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/88d80bbe-a9ab-4c91-b0b2-485e106dd150-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-9wlmb\" (UID: \"88d80bbe-a9ab-4c91-b0b2-485e106dd150\") " pod="openshift-marketplace/marketplace-operator-79b997595-9wlmb" Dec 10 12:17:57 crc kubenswrapper[4689]: I1210 12:17:57.340004 4689 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-ms4wp" Dec 10 12:17:57 crc kubenswrapper[4689]: I1210 12:17:57.340717 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/3d09bb78-b81a-4558-8433-192e7fc846df-webhook-cert\") pod \"packageserver-d55dfcdfc-km6qw\" (UID: \"3d09bb78-b81a-4558-8433-192e7fc846df\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-km6qw" Dec 10 12:17:57 crc kubenswrapper[4689]: I1210 12:17:57.341829 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4ec880c8-61ec-4f66-976e-b656d28308a9-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-7v6dg\" (UID: \"4ec880c8-61ec-4f66-976e-b656d28308a9\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-7v6dg" Dec 10 12:17:57 crc kubenswrapper[4689]: I1210 12:17:57.342053 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/3d09bb78-b81a-4558-8433-192e7fc846df-apiservice-cert\") pod \"packageserver-d55dfcdfc-km6qw\" (UID: \"3d09bb78-b81a-4558-8433-192e7fc846df\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-km6qw" Dec 10 12:17:57 crc kubenswrapper[4689]: I1210 12:17:57.342141 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/cca4e9b1-a2a3-4d9b-a73c-e6d7d4552e97-certs\") pod \"machine-config-server-zjvxn\" (UID: \"cca4e9b1-a2a3-4d9b-a73c-e6d7d4552e97\") " pod="openshift-machine-config-operator/machine-config-server-zjvxn" Dec 10 12:17:57 crc kubenswrapper[4689]: I1210 12:17:57.342432 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/1a410bee-f5f8-4ab2-9750-3afed0e312b2-trusted-ca\") pod \"ingress-operator-5b745b69d9-dpmvq\" (UID: \"1a410bee-f5f8-4ab2-9750-3afed0e312b2\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-dpmvq" Dec 10 12:17:57 crc kubenswrapper[4689]: I1210 12:17:57.353500 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/8ea56da0-e5ef-406e-bff9-b8652666745a-srv-cert\") pod \"olm-operator-6b444d44fb-69gng\" (UID: \"8ea56da0-e5ef-406e-bff9-b8652666745a\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-69gng" Dec 10 12:17:57 crc kubenswrapper[4689]: I1210 12:17:57.360533 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4ec880c8-61ec-4f66-976e-b656d28308a9-config\") pod \"kube-apiserver-operator-766d6c64bb-7v6dg\" (UID: \"4ec880c8-61ec-4f66-976e-b656d28308a9\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-7v6dg" Dec 10 12:17:57 crc kubenswrapper[4689]: I1210 12:17:57.361001 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/8ea56da0-e5ef-406e-bff9-b8652666745a-profile-collector-cert\") pod \"olm-operator-6b444d44fb-69gng\" (UID: \"8ea56da0-e5ef-406e-bff9-b8652666745a\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-69gng" Dec 10 12:17:57 crc kubenswrapper[4689]: I1210 12:17:57.361478 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: 
\"kubernetes.io/secret/cca4e9b1-a2a3-4d9b-a73c-e6d7d4552e97-node-bootstrap-token\") pod \"machine-config-server-zjvxn\" (UID: \"cca4e9b1-a2a3-4d9b-a73c-e6d7d4552e97\") " pod="openshift-machine-config-operator/machine-config-server-zjvxn" Dec 10 12:17:57 crc kubenswrapper[4689]: I1210 12:17:57.366839 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/56120454-e6eb-4d76-92ee-b8083c027c12-images\") pod \"machine-config-operator-74547568cd-gz5pm\" (UID: \"56120454-e6eb-4d76-92ee-b8083c027c12\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-gz5pm" Dec 10 12:17:57 crc kubenswrapper[4689]: I1210 12:17:57.368414 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/56120454-e6eb-4d76-92ee-b8083c027c12-proxy-tls\") pod \"machine-config-operator-74547568cd-gz5pm\" (UID: \"56120454-e6eb-4d76-92ee-b8083c027c12\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-gz5pm" Dec 10 12:17:57 crc kubenswrapper[4689]: I1210 12:17:57.368843 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-shthb" Dec 10 12:17:57 crc kubenswrapper[4689]: I1210 12:17:57.370732 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/56120454-e6eb-4d76-92ee-b8083c027c12-auth-proxy-config\") pod \"machine-config-operator-74547568cd-gz5pm\" (UID: \"56120454-e6eb-4d76-92ee-b8083c027c12\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-gz5pm" Dec 10 12:17:57 crc kubenswrapper[4689]: I1210 12:17:57.372190 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-574vq\" (UniqueName: \"kubernetes.io/projected/abad9838-aab6-4f7c-a042-83924f9e0809-kube-api-access-574vq\") pod \"csi-hostpathplugin-d7pkc\" (UID: \"abad9838-aab6-4f7c-a042-83924f9e0809\") " pod="hostpath-provisioner/csi-hostpathplugin-d7pkc" Dec 10 12:17:57 crc kubenswrapper[4689]: I1210 12:17:57.386636 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8mh4h\" (UniqueName: \"kubernetes.io/projected/5c541a72-1484-41a6-b251-748306e1068d-kube-api-access-8mh4h\") pod \"dns-default-tzqjc\" (UID: \"5c541a72-1484-41a6-b251-748306e1068d\") " pod="openshift-dns/dns-default-tzqjc" Dec 10 12:17:57 crc kubenswrapper[4689]: I1210 12:17:57.391285 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-7jsv6"] Dec 10 12:17:57 crc kubenswrapper[4689]: I1210 12:17:57.402340 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4jssp\" (UniqueName: \"kubernetes.io/projected/38480cb6-15a9-450a-9efd-71a7d346ef7c-kube-api-access-4jssp\") pod \"control-plane-machine-set-operator-78cbb6b69f-rn6lb\" (UID: \"38480cb6-15a9-450a-9efd-71a7d346ef7c\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-rn6lb" Dec 10 12:17:57 crc kubenswrapper[4689]: W1210 12:17:57.412512 4689 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2e2364cc_104f_4237_9ad5_c121a1c3fba6.slice/crio-c35aa857b9f1323eeddbe3c49a31ec22c4daf3e6d186f3da7e568307139e4aff WatchSource:0}: Error finding container c35aa857b9f1323eeddbe3c49a31ec22c4daf3e6d186f3da7e568307139e4aff: Status 404 returned error 
can't find the container with id c35aa857b9f1323eeddbe3c49a31ec22c4daf3e6d186f3da7e568307139e4aff Dec 10 12:17:57 crc kubenswrapper[4689]: I1210 12:17:57.420460 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jw2qz\" (UID: \"dab0bc5b-ec14-46fa-9304-ab1b3b556eeb\") " pod="openshift-image-registry/image-registry-697d97f7c8-jw2qz" Dec 10 12:17:57 crc kubenswrapper[4689]: E1210 12:17:57.420883 4689 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-10 12:17:57.920871661 +0000 UTC m=+145.708952799 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jw2qz" (UID: "dab0bc5b-ec14-46fa-9304-ab1b3b556eeb") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 12:17:57 crc kubenswrapper[4689]: I1210 12:17:57.424915 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7r2qf\" (UniqueName: \"kubernetes.io/projected/52f0638b-0791-4c8a-b86e-8d6ad95a9c05-kube-api-access-7r2qf\") pod \"ingress-canary-2jzgr\" (UID: \"52f0638b-0791-4c8a-b86e-8d6ad95a9c05\") " pod="openshift-ingress-canary/ingress-canary-2jzgr" Dec 10 12:17:57 crc kubenswrapper[4689]: I1210 12:17:57.440918 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-9s8ps"] Dec 10 12:17:57 crc kubenswrapper[4689]: I1210 12:17:57.450369 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-48x2f\" (UniqueName: \"kubernetes.io/projected/17db82f9-9c6e-492e-8163-958debfe5437-kube-api-access-48x2f\") pod \"catalog-operator-68c6474976-25n7p\" (UID: \"17db82f9-9c6e-492e-8163-958debfe5437\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-25n7p" Dec 10 12:17:57 crc kubenswrapper[4689]: I1210 12:17:57.463703 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-2gdrs"] Dec 10 12:17:57 crc kubenswrapper[4689]: I1210 12:17:57.467590 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-s8lmw"] Dec 10 12:17:57 crc kubenswrapper[4689]: I1210 12:17:57.475699 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lm47v\" (UniqueName: \"kubernetes.io/projected/117123b0-4673-4a20-8e0c-7bca235e3168-kube-api-access-lm47v\") pod \"migrator-59844c95c7-tcsl7\" (UID: \"117123b0-4673-4a20-8e0c-7bca235e3168\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-tcsl7" Dec 10 12:17:57 crc kubenswrapper[4689]: I1210 12:17:57.488348 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-mdqrt"] Dec 10 12:17:57 crc kubenswrapper[4689]: I1210 12:17:57.489677 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nrgmn\" (UniqueName: 
\"kubernetes.io/projected/be303c23-fd28-48b4-9463-65c5167285fe-kube-api-access-nrgmn\") pod \"package-server-manager-789f6589d5-qbqzn\" (UID: \"be303c23-fd28-48b4-9463-65c5167285fe\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-qbqzn" Dec 10 12:17:57 crc kubenswrapper[4689]: I1210 12:17:57.505358 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/1a410bee-f5f8-4ab2-9750-3afed0e312b2-bound-sa-token\") pod \"ingress-operator-5b745b69d9-dpmvq\" (UID: \"1a410bee-f5f8-4ab2-9750-3afed0e312b2\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-dpmvq" Dec 10 12:17:57 crc kubenswrapper[4689]: I1210 12:17:57.508535 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-25n7p" Dec 10 12:17:57 crc kubenswrapper[4689]: W1210 12:17:57.513269 4689 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd9195e8d_4ad5_406b_b2fe_1c107df51433.slice/crio-48bc96990e6d7517efa4818f18c84db56c3ad78d7356db5dde798e76d8866156 WatchSource:0}: Error finding container 48bc96990e6d7517efa4818f18c84db56c3ad78d7356db5dde798e76d8866156: Status 404 returned error can't find the container with id 48bc96990e6d7517efa4818f18c84db56c3ad78d7356db5dde798e76d8866156 Dec 10 12:17:57 crc kubenswrapper[4689]: I1210 12:17:57.521093 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 10 12:17:57 crc kubenswrapper[4689]: E1210 12:17:57.521330 4689 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-10 12:17:58.021311278 +0000 UTC m=+145.809392416 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 12:17:57 crc kubenswrapper[4689]: I1210 12:17:57.521578 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jw2qz\" (UID: \"dab0bc5b-ec14-46fa-9304-ab1b3b556eeb\") " pod="openshift-image-registry/image-registry-697d97f7c8-jw2qz" Dec 10 12:17:57 crc kubenswrapper[4689]: E1210 12:17:57.521840 4689 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-10 12:17:58.021832632 +0000 UTC m=+145.809913770 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jw2qz" (UID: "dab0bc5b-ec14-46fa-9304-ab1b3b556eeb") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 12:17:57 crc kubenswrapper[4689]: I1210 12:17:57.525637 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-rn6ml"] Dec 10 12:17:57 crc kubenswrapper[4689]: I1210 12:17:57.531306 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8e1b5e59-64b6-4fe6-ad6f-fa1b521be5e6-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-nqgsm\" (UID: \"8e1b5e59-64b6-4fe6-ad6f-fa1b521be5e6\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-nqgsm" Dec 10 12:17:57 crc kubenswrapper[4689]: I1210 12:17:57.538536 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-tcsl7" Dec 10 12:17:57 crc kubenswrapper[4689]: I1210 12:17:57.545894 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4hrh7\" (UniqueName: \"kubernetes.io/projected/d5c163ba-fbd3-49b8-8b2f-ad0de7210e6f-kube-api-access-4hrh7\") pod \"multus-admission-controller-857f4d67dd-8s58v\" (UID: \"d5c163ba-fbd3-49b8-8b2f-ad0de7210e6f\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-8s58v" Dec 10 12:17:57 crc kubenswrapper[4689]: W1210 12:17:57.553810 4689 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9a3f8229_cf04_4e18_8dc3_c9a63c4b46b4.slice/crio-a134b972f2a059ae8149c2c7a106391fa1b7032d4cef3b5c98c68af49178b36f WatchSource:0}: Error finding container a134b972f2a059ae8149c2c7a106391fa1b7032d4cef3b5c98c68af49178b36f: Status 404 returned error can't find the container with id a134b972f2a059ae8149c2c7a106391fa1b7032d4cef3b5c98c68af49178b36f Dec 10 12:17:57 crc kubenswrapper[4689]: I1210 12:17:57.563890 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-czsbh\" (UniqueName: \"kubernetes.io/projected/5dca2df7-7c4c-40bb-8302-d6c089fd5486-kube-api-access-czsbh\") pod \"collect-profiles-29422815-gckmk\" (UID: \"5dca2df7-7c4c-40bb-8302-d6c089fd5486\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29422815-gckmk" Dec 10 12:17:57 crc kubenswrapper[4689]: I1210 12:17:57.572104 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-d7pkc" Dec 10 12:17:57 crc kubenswrapper[4689]: I1210 12:17:57.589528 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-tzqjc" Dec 10 12:17:57 crc kubenswrapper[4689]: I1210 12:17:57.590483 4689 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2jzgr" Dec 10 12:17:57 crc kubenswrapper[4689]: I1210 12:17:57.593301 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-fz4nm"] Dec 10 12:17:57 crc kubenswrapper[4689]: I1210 12:17:57.600326 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-9s8ps" event={"ID":"ee767cde-d698-4c01-b221-33c158999e60","Type":"ContainerStarted","Data":"79e906509dbfcb43df4d01e4cfc0cb9a93910ebd6d7c5e41734275fed0a2a442"} Dec 10 12:17:57 crc kubenswrapper[4689]: I1210 12:17:57.610282 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pvc9n\" (UniqueName: \"kubernetes.io/projected/e9883663-c0ff-4454-ba8f-caed05881365-kube-api-access-pvc9n\") pod \"dns-operator-744455d44c-qc8tj\" (UID: \"e9883663-c0ff-4454-ba8f-caed05881365\") " pod="openshift-dns-operator/dns-operator-744455d44c-qc8tj" Dec 10 12:17:57 crc kubenswrapper[4689]: I1210 12:17:57.613823 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w6hgp\" (UniqueName: \"kubernetes.io/projected/56120454-e6eb-4d76-92ee-b8083c027c12-kube-api-access-w6hgp\") pod \"machine-config-operator-74547568cd-gz5pm\" (UID: \"56120454-e6eb-4d76-92ee-b8083c027c12\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-gz5pm" Dec 10 12:17:57 crc kubenswrapper[4689]: I1210 12:17:57.633909 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-rn6lb" Dec 10 12:17:57 crc kubenswrapper[4689]: I1210 12:17:57.634343 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 10 12:17:57 crc kubenswrapper[4689]: E1210 12:17:57.634790 4689 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-10 12:17:58.134774888 +0000 UTC m=+145.922856026 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 12:17:57 crc kubenswrapper[4689]: I1210 12:17:57.641322 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-2gdrs" event={"ID":"86359667-d51a-449a-925d-375a1e98a8ca","Type":"ContainerStarted","Data":"0999b53cc91b2bb264726648eeaf521f34bd20541ae029f632331adc35b5435f"} Dec 10 12:17:57 crc kubenswrapper[4689]: I1210 12:17:57.646913 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-ggzn5"] Dec 10 12:17:57 crc kubenswrapper[4689]: I1210 12:17:57.651193 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-kpp45"] Dec 10 12:17:57 crc kubenswrapper[4689]: I1210 12:17:57.651527 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-txwzz\" (UniqueName: \"kubernetes.io/projected/265037d4-b03f-4d1b-a643-5b5eb4e59738-kube-api-access-txwzz\") pod \"router-default-5444994796-xmwb5\" (UID: \"265037d4-b03f-4d1b-a643-5b5eb4e59738\") " pod="openshift-ingress/router-default-5444994796-xmwb5" Dec 10 12:17:57 crc kubenswrapper[4689]: I1210 12:17:57.655600 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qflwn\" (UniqueName: \"kubernetes.io/projected/30c2b6ef-98b7-4411-a10b-a83925ed5ed0-kube-api-access-qflwn\") pod \"service-ca-operator-777779d784-pks9r\" (UID: \"30c2b6ef-98b7-4411-a10b-a83925ed5ed0\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-pks9r" Dec 10 12:17:57 crc kubenswrapper[4689]: I1210 12:17:57.675833 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m94zd\" (UniqueName: \"kubernetes.io/projected/3a545142-a29d-4b5c-8c5f-33bf1fc1857f-kube-api-access-m94zd\") pod \"service-ca-9c57cc56f-kbbjg\" (UID: \"3a545142-a29d-4b5c-8c5f-33bf1fc1857f\") " pod="openshift-service-ca/service-ca-9c57cc56f-kbbjg" Dec 10 12:17:57 crc kubenswrapper[4689]: I1210 12:17:57.690027 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-mx2mm"] Dec 10 12:17:57 crc kubenswrapper[4689]: I1210 12:17:57.695410 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-cpfdd" event={"ID":"1eb8e1ca-e0fa-448e-a502-a75ac460a5ca","Type":"ContainerStarted","Data":"183879faf32e1e495e9c1efc5da85c9f9632716b7224a2c6c37af156ad8a2b85"} Dec 10 12:17:57 crc kubenswrapper[4689]: I1210 12:17:57.695509 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-cpfdd" event={"ID":"1eb8e1ca-e0fa-448e-a502-a75ac460a5ca","Type":"ContainerStarted","Data":"592cf8e5697717422103e24ac284e8229ca19c938c1c9f7ba96b46cd220effed"} Dec 10 12:17:57 crc kubenswrapper[4689]: I1210 12:17:57.701632 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nv8w7\" (UniqueName: 
\"kubernetes.io/projected/3d09bb78-b81a-4558-8433-192e7fc846df-kube-api-access-nv8w7\") pod \"packageserver-d55dfcdfc-km6qw\" (UID: \"3d09bb78-b81a-4558-8433-192e7fc846df\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-km6qw" Dec 10 12:17:57 crc kubenswrapper[4689]: I1210 12:17:57.708796 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-mdqrt" event={"ID":"d9195e8d-4ad5-406b-b2fe-1c107df51433","Type":"ContainerStarted","Data":"48bc96990e6d7517efa4818f18c84db56c3ad78d7356db5dde798e76d8866156"} Dec 10 12:17:57 crc kubenswrapper[4689]: I1210 12:17:57.714630 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-s8lmw" event={"ID":"3eece420-e32c-4ea1-91f8-cc96bf144467","Type":"ContainerStarted","Data":"1afb400f412cc856e4c73dacf56fc65d531ff2b61d0af15d21aa22577999b53e"} Dec 10 12:17:57 crc kubenswrapper[4689]: I1210 12:17:57.716688 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-htx2z"] Dec 10 12:17:57 crc kubenswrapper[4689]: I1210 12:17:57.718074 4689 generic.go:334] "Generic (PLEG): container finished" podID="7dbd6f55-bc16-49a6-b385-1ba0b50003cd" containerID="3f65865d473d30617334d36e5594da9c0fd8c761a64602d0e8a4a082ba6ca113" exitCode=0 Dec 10 12:17:57 crc kubenswrapper[4689]: I1210 12:17:57.718113 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-g8snw" event={"ID":"7dbd6f55-bc16-49a6-b385-1ba0b50003cd","Type":"ContainerDied","Data":"3f65865d473d30617334d36e5594da9c0fd8c761a64602d0e8a4a082ba6ca113"} Dec 10 12:17:57 crc kubenswrapper[4689]: I1210 12:17:57.725732 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-7pv69"] Dec 10 12:17:57 crc kubenswrapper[4689]: I1210 12:17:57.727016 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-7jsv6" event={"ID":"2e2364cc-104f-4237-9ad5-c121a1c3fba6","Type":"ContainerStarted","Data":"c35aa857b9f1323eeddbe3c49a31ec22c4daf3e6d186f3da7e568307139e4aff"} Dec 10 12:17:57 crc kubenswrapper[4689]: I1210 12:17:57.726959 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-smsvf\" (UniqueName: \"kubernetes.io/projected/0c1b90bc-688a-4b71-bb7b-a41d80d75f7b-kube-api-access-smsvf\") pod \"kube-storage-version-migrator-operator-b67b599dd-z922x\" (UID: \"0c1b90bc-688a-4b71-bb7b-a41d80d75f7b\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-z922x" Dec 10 12:17:57 crc kubenswrapper[4689]: I1210 12:17:57.728030 4689 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-qc8tj" Dec 10 12:17:57 crc kubenswrapper[4689]: I1210 12:17:57.730516 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-shthb"] Dec 10 12:17:57 crc kubenswrapper[4689]: I1210 12:17:57.732650 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-rn6ml" event={"ID":"9a3f8229-cf04-4e18-8dc3-c9a63c4b46b4","Type":"ContainerStarted","Data":"a134b972f2a059ae8149c2c7a106391fa1b7032d4cef3b5c98c68af49178b36f"} Dec 10 12:17:57 crc kubenswrapper[4689]: I1210 12:17:57.738366 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jw2qz\" (UID: \"dab0bc5b-ec14-46fa-9304-ab1b3b556eeb\") " pod="openshift-image-registry/image-registry-697d97f7c8-jw2qz" Dec 10 12:17:57 crc kubenswrapper[4689]: E1210 12:17:57.738798 4689 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-10 12:17:58.238785685 +0000 UTC m=+146.026866823 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jw2qz" (UID: "dab0bc5b-ec14-46fa-9304-ab1b3b556eeb") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 12:17:57 crc kubenswrapper[4689]: I1210 12:17:57.739933 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-24gv6\" (UniqueName: \"kubernetes.io/projected/cca4e9b1-a2a3-4d9b-a73c-e6d7d4552e97-kube-api-access-24gv6\") pod \"machine-config-server-zjvxn\" (UID: \"cca4e9b1-a2a3-4d9b-a73c-e6d7d4552e97\") " pod="openshift-machine-config-operator/machine-config-server-zjvxn" Dec 10 12:17:57 crc kubenswrapper[4689]: I1210 12:17:57.764805 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jfg7x\" (UniqueName: \"kubernetes.io/projected/7a4e09c2-7d4d-4078-b9a9-353f4c8063a4-kube-api-access-jfg7x\") pod \"machine-config-controller-84d6567774-9rk82\" (UID: \"7a4e09c2-7d4d-4078-b9a9-353f4c8063a4\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-9rk82" Dec 10 12:17:57 crc kubenswrapper[4689]: I1210 12:17:57.764936 4689 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress/router-default-5444994796-xmwb5" Dec 10 12:17:57 crc kubenswrapper[4689]: I1210 12:17:57.765964 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-ms4wp"] Dec 10 12:17:57 crc kubenswrapper[4689]: I1210 12:17:57.769435 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b77tc\" (UniqueName: \"kubernetes.io/projected/88d80bbe-a9ab-4c91-b0b2-485e106dd150-kube-api-access-b77tc\") pod \"marketplace-operator-79b997595-9wlmb\" (UID: \"88d80bbe-a9ab-4c91-b0b2-485e106dd150\") " pod="openshift-marketplace/marketplace-operator-79b997595-9wlmb" Dec 10 12:17:57 crc kubenswrapper[4689]: I1210 12:17:57.769852 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-gz5pm" Dec 10 12:17:57 crc kubenswrapper[4689]: I1210 12:17:57.783018 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6vx8r\" (UniqueName: \"kubernetes.io/projected/8ea56da0-e5ef-406e-bff9-b8652666745a-kube-api-access-6vx8r\") pod \"olm-operator-6b444d44fb-69gng\" (UID: \"8ea56da0-e5ef-406e-bff9-b8652666745a\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-69gng" Dec 10 12:17:57 crc kubenswrapper[4689]: I1210 12:17:57.784704 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-zjvxn" Dec 10 12:17:57 crc kubenswrapper[4689]: I1210 12:17:57.785396 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-qbqzn" Dec 10 12:17:57 crc kubenswrapper[4689]: I1210 12:17:57.788980 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29422815-gckmk" Dec 10 12:17:57 crc kubenswrapper[4689]: I1210 12:17:57.797179 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-69gng" Dec 10 12:17:57 crc kubenswrapper[4689]: I1210 12:17:57.797605 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-sfzm5"] Dec 10 12:17:57 crc kubenswrapper[4689]: I1210 12:17:57.801522 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-8s58v" Dec 10 12:17:57 crc kubenswrapper[4689]: I1210 12:17:57.815838 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-z922x" Dec 10 12:17:57 crc kubenswrapper[4689]: I1210 12:17:57.820804 4689 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-9wlmb" Dec 10 12:17:57 crc kubenswrapper[4689]: I1210 12:17:57.824668 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4ec880c8-61ec-4f66-976e-b656d28308a9-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-7v6dg\" (UID: \"4ec880c8-61ec-4f66-976e-b656d28308a9\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-7v6dg" Dec 10 12:17:57 crc kubenswrapper[4689]: W1210 12:17:57.827107 4689 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8f02b797_c51d_4614_b1d6_b2b3294342d0.slice/crio-cb555e1f710c35452b6be726b1d761d47c6239119f9c4bafbfafde6a3b2bdb66 WatchSource:0}: Error finding container cb555e1f710c35452b6be726b1d761d47c6239119f9c4bafbfafde6a3b2bdb66: Status 404 returned error can't find the container with id cb555e1f710c35452b6be726b1d761d47c6239119f9c4bafbfafde6a3b2bdb66 Dec 10 12:17:57 crc kubenswrapper[4689]: I1210 12:17:57.831584 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-79v8n\" (UniqueName: \"kubernetes.io/projected/1a410bee-f5f8-4ab2-9750-3afed0e312b2-kube-api-access-79v8n\") pod \"ingress-operator-5b745b69d9-dpmvq\" (UID: \"1a410bee-f5f8-4ab2-9750-3afed0e312b2\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-dpmvq" Dec 10 12:17:57 crc kubenswrapper[4689]: I1210 12:17:57.832460 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-nqgsm" Dec 10 12:17:57 crc kubenswrapper[4689]: I1210 12:17:57.841219 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 10 12:17:57 crc kubenswrapper[4689]: E1210 12:17:57.842233 4689 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-10 12:17:58.342216808 +0000 UTC m=+146.130297946 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 12:17:57 crc kubenswrapper[4689]: I1210 12:17:57.845044 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-pks9r" Dec 10 12:17:57 crc kubenswrapper[4689]: I1210 12:17:57.851597 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-kbbjg" Dec 10 12:17:57 crc kubenswrapper[4689]: I1210 12:17:57.856356 4689 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-km6qw" Dec 10 12:17:57 crc kubenswrapper[4689]: I1210 12:17:57.872521 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c7fe8224-0ecc-4289-9a5d-107f9d6d00b5-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-lkfnf\" (UID: \"c7fe8224-0ecc-4289-9a5d-107f9d6d00b5\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-lkfnf" Dec 10 12:17:57 crc kubenswrapper[4689]: I1210 12:17:57.897107 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-25n7p"] Dec 10 12:17:57 crc kubenswrapper[4689]: I1210 12:17:57.926444 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-d7pkc"] Dec 10 12:17:57 crc kubenswrapper[4689]: I1210 12:17:57.942722 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jw2qz\" (UID: \"dab0bc5b-ec14-46fa-9304-ab1b3b556eeb\") " pod="openshift-image-registry/image-registry-697d97f7c8-jw2qz" Dec 10 12:17:57 crc kubenswrapper[4689]: E1210 12:17:57.943328 4689 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-10 12:17:58.443317062 +0000 UTC m=+146.231398200 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jw2qz" (UID: "dab0bc5b-ec14-46fa-9304-ab1b3b556eeb") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 12:17:58 crc kubenswrapper[4689]: I1210 12:17:58.036824 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-dpmvq" Dec 10 12:17:58 crc kubenswrapper[4689]: I1210 12:17:58.041877 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-tcsl7"] Dec 10 12:17:58 crc kubenswrapper[4689]: I1210 12:17:58.041953 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-7v6dg" Dec 10 12:17:58 crc kubenswrapper[4689]: I1210 12:17:58.044555 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 10 12:17:58 crc kubenswrapper[4689]: E1210 12:17:58.044677 4689 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2025-12-10 12:17:58.544656072 +0000 UTC m=+146.332737210 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 12:17:58 crc kubenswrapper[4689]: I1210 12:17:58.045543 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jw2qz\" (UID: \"dab0bc5b-ec14-46fa-9304-ab1b3b556eeb\") " pod="openshift-image-registry/image-registry-697d97f7c8-jw2qz" Dec 10 12:17:58 crc kubenswrapper[4689]: E1210 12:17:58.048513 4689 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-10 12:17:58.548470547 +0000 UTC m=+146.336551685 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jw2qz" (UID: "dab0bc5b-ec14-46fa-9304-ab1b3b556eeb") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 12:17:58 crc kubenswrapper[4689]: I1210 12:17:58.050418 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-9rk82" Dec 10 12:17:58 crc kubenswrapper[4689]: I1210 12:17:58.062935 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-lkfnf" Dec 10 12:17:58 crc kubenswrapper[4689]: I1210 12:17:58.069649 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-tzqjc"] Dec 10 12:17:58 crc kubenswrapper[4689]: I1210 12:17:58.098433 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-2jzgr"] Dec 10 12:17:58 crc kubenswrapper[4689]: I1210 12:17:58.146681 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 10 12:17:58 crc kubenswrapper[4689]: E1210 12:17:58.147668 4689 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-10 12:17:58.647652454 +0000 UTC m=+146.435733592 (durationBeforeRetry 500ms). 
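The MountVolume/UnmountVolume pairs for pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 reappear on a roughly 500ms cadence because the kubelet's pending-operations table refuses to re-run a failed volume operation before its durationBeforeRetry expires; each E-line prints the earliest permitted retry time in both wall-clock and monotonic (m=+...) form. A self-contained sketch of such a retry gate, assuming a fixed wait (illustrative; the real gate in nestedpendingoperations.go also grows the wait for an operation that keeps failing):

    package main

    import (
        "errors"
        "fmt"
        "time"
    )

    // Toy retry gate behind "No retries permitted until ...
    // (durationBeforeRetry 500ms)": a failed operation records when it may
    // run again, and the reconciler's next attempt is rejected until then.
    type retryGate struct {
        notBefore map[string]time.Time
    }

    func (g *retryGate) run(key string, wait time.Duration, op func() error) error {
        if t, ok := g.notBefore[key]; ok && time.Now().Before(t) {
            return fmt.Errorf("no retries permitted until %s", t.Format(time.RFC3339Nano))
        }
        if err := op(); err != nil {
            g.notBefore[key] = time.Now().Add(wait) // gate the next attempt
            return err
        }
        delete(g.notBefore, key) // success clears the backoff
        return nil
    }

    func main() {
        g := &retryGate{notBefore: map[string]time.Time{}}
        mount := func() error {
            return errors.New("driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers")
        }
        for i := 0; i < 3; i++ {
            fmt.Println(g.run("pvc-657094db", 500*time.Millisecond, mount))
            time.Sleep(200 * time.Millisecond)
        }
    }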
Dec 10 12:17:58 crc kubenswrapper[4689]: I1210 12:17:58.146681 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 10 12:17:58 crc kubenswrapper[4689]: E1210 12:17:58.147668 4689 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-10 12:17:58.647652454 +0000 UTC m=+146.435733592 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 10 12:17:58 crc kubenswrapper[4689]: W1210 12:17:58.179216 4689 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod117123b0_4673_4a20_8e0c_7bca235e3168.slice/crio-8cd1aa5845b8c2899f33f825f942592760b6c18703e12f278bfd0095735147af WatchSource:0}: Error finding container 8cd1aa5845b8c2899f33f825f942592760b6c18703e12f278bfd0095735147af: Status 404 returned error can't find the container with id 8cd1aa5845b8c2899f33f825f942592760b6c18703e12f278bfd0095735147af
Dec 10 12:17:58 crc kubenswrapper[4689]: I1210 12:17:58.249269 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jw2qz\" (UID: \"dab0bc5b-ec14-46fa-9304-ab1b3b556eeb\") " pod="openshift-image-registry/image-registry-697d97f7c8-jw2qz"
Dec 10 12:17:58 crc kubenswrapper[4689]: E1210 12:17:58.250184 4689 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-10 12:17:58.750172724 +0000 UTC m=+146.538253862 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jw2qz" (UID: "dab0bc5b-ec14-46fa-9304-ab1b3b556eeb") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 10 12:17:58 crc kubenswrapper[4689]: I1210 12:17:58.350920 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 10 12:17:58 crc kubenswrapper[4689]: E1210 12:17:58.351209 4689 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-10 12:17:58.851194906 +0000 UTC m=+146.639276044 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 10 12:17:58 crc kubenswrapper[4689]: I1210 12:17:58.395651 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-qc8tj"]
Dec 10 12:17:58 crc kubenswrapper[4689]: I1210 12:17:58.395699 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-rn6lb"]
Dec 10 12:17:58 crc kubenswrapper[4689]: I1210 12:17:58.455633 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jw2qz\" (UID: \"dab0bc5b-ec14-46fa-9304-ab1b3b556eeb\") " pod="openshift-image-registry/image-registry-697d97f7c8-jw2qz"
Dec 10 12:17:58 crc kubenswrapper[4689]: E1210 12:17:58.456109 4689 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-10 12:17:58.956098895 +0000 UTC m=+146.744180033 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jw2qz" (UID: "dab0bc5b-ec14-46fa-9304-ab1b3b556eeb") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 10 12:17:58 crc kubenswrapper[4689]: I1210 12:17:58.517129 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29422815-gckmk"]
Dec 10 12:17:58 crc kubenswrapper[4689]: I1210 12:17:58.540252 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-nqgsm"]
Dec 10 12:17:58 crc kubenswrapper[4689]: I1210 12:17:58.556595 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 10 12:17:58 crc kubenswrapper[4689]: E1210 12:17:58.556904 4689 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-10 12:17:59.056889402 +0000 UTC m=+146.844970540 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 10 12:17:58 crc kubenswrapper[4689]: I1210 12:17:58.559453 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-69gng"]
Dec 10 12:17:58 crc kubenswrapper[4689]: I1210 12:17:58.628140 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-pks9r"]
Dec 10 12:17:58 crc kubenswrapper[4689]: I1210 12:17:58.638504 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-z922x"]
Dec 10 12:17:58 crc kubenswrapper[4689]: I1210 12:17:58.657468 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jw2qz\" (UID: \"dab0bc5b-ec14-46fa-9304-ab1b3b556eeb\") " pod="openshift-image-registry/image-registry-697d97f7c8-jw2qz"
Dec 10 12:17:58 crc kubenswrapper[4689]: E1210 12:17:58.657769 4689 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-10 12:17:59.157754261 +0000 UTC m=+146.945835399 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jw2qz" (UID: "dab0bc5b-ec14-46fa-9304-ab1b3b556eeb") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 10 12:17:58 crc kubenswrapper[4689]: I1210 12:17:58.733608 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-9rk82"]
Dec 10 12:17:58 crc kubenswrapper[4689]: I1210 12:17:58.743035 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-km6qw"]
Dec 10 12:17:58 crc kubenswrapper[4689]: I1210 12:17:58.758269 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 10 12:17:58 crc kubenswrapper[4689]: E1210 12:17:58.758606 4689 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-10 12:17:59.258589089 +0000 UTC m=+147.046670227 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 10 12:17:58 crc kubenswrapper[4689]: W1210 12:17:58.783763 4689 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod30c2b6ef_98b7_4411_a10b_a83925ed5ed0.slice/crio-02fa16bfc52f0d4770200bbf433f7ef93e99f248d815b1e810b7d23e4186c397 WatchSource:0}: Error finding container 02fa16bfc52f0d4770200bbf433f7ef93e99f248d815b1e810b7d23e4186c397: Status 404 returned error can't find the container with id 02fa16bfc52f0d4770200bbf433f7ef93e99f248d815b1e810b7d23e4186c397
Dec 10 12:17:58 crc kubenswrapper[4689]: I1210 12:17:58.812807 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-gz5pm"]
Dec 10 12:17:58 crc kubenswrapper[4689]: I1210 12:17:58.859739 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jw2qz\" (UID: \"dab0bc5b-ec14-46fa-9304-ab1b3b556eeb\") " pod="openshift-image-registry/image-registry-697d97f7c8-jw2qz"
Dec 10 12:17:58 crc kubenswrapper[4689]: E1210 12:17:58.859949 4689 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-10 12:17:59.359932139 +0000 UTC m=+147.148013277 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jw2qz" (UID: "dab0bc5b-ec14-46fa-9304-ab1b3b556eeb") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 10 12:17:58 crc kubenswrapper[4689]: I1210 12:17:58.861507 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-tcsl7" event={"ID":"117123b0-4673-4a20-8e0c-7bca235e3168","Type":"ContainerStarted","Data":"8cd1aa5845b8c2899f33f825f942592760b6c18703e12f278bfd0095735147af"}
Dec 10 12:17:58 crc kubenswrapper[4689]: I1210 12:17:58.868948 4689 generic.go:334] "Generic (PLEG): container finished" podID="d9195e8d-4ad5-406b-b2fe-1c107df51433" containerID="561f90cf9077cadb10e0940da7a47774e8eed7c73e29853e744f998c53c59eca" exitCode=0
Dec 10 12:17:58 crc kubenswrapper[4689]: I1210 12:17:58.869035 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-mdqrt" event={"ID":"d9195e8d-4ad5-406b-b2fe-1c107df51433","Type":"ContainerDied","Data":"561f90cf9077cadb10e0940da7a47774e8eed7c73e29853e744f998c53c59eca"}
Dec 10 12:17:58 crc kubenswrapper[4689]: I1210 12:17:58.913508 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-dpmvq"]
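Every one of these mount and unmount attempts fails with the same root cause: no CSI driver named kubevirt.io.hostpath-provisioner is registered with the kubelet yet. CSI drivers register at runtime over the kubelet's plugin-registration socket, so the error can only clear once the hostpath provisioner's node plugin (the csi-hostpathplugin-d7pkc pod seen starting below) is up and has announced itself. A toy registry lookup reproducing the failure mode (illustrative; the socket path is hypothetical):

    package main

    import "fmt"

    // Toy model of the failing lookup: creating a CSI driver client needs
    // the driver's endpoint, which exists only after the driver pod has
    // registered itself with the kubelet.
    type csiDriverRegistry map[string]string // driver name -> endpoint

    func (r csiDriverRegistry) newClient(driver string) (string, error) {
        ep, ok := r[driver]
        if !ok {
            return "", fmt.Errorf("driver name %s not found in the list of registered CSI drivers", driver)
        }
        return ep, nil
    }

    func main() {
        reg := csiDriverRegistry{} // empty until the driver pod registers
        if _, err := reg.newClient("kubevirt.io.hostpath-provisioner"); err != nil {
            fmt.Println(err) // matches the error repeated in the log
        }
        // After registration (hypothetical endpoint), the lookup succeeds.
        reg["kubevirt.io.hostpath-provisioner"] = "/var/lib/kubelet/plugins/csi-hostpath/csi.sock"
        ep, _ := reg.newClient("kubevirt.io.hostpath-provisioner")
        fmt.Println("registered at", ep)
    }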
Dec 10 12:17:58 crc kubenswrapper[4689]: I1210 12:17:58.961683 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 10 12:17:58 crc kubenswrapper[4689]: E1210 12:17:58.962416 4689 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-10 12:17:59.462400598 +0000 UTC m=+147.250481736 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 10 12:17:58 crc kubenswrapper[4689]: I1210 12:17:58.980701 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-25n7p" event={"ID":"17db82f9-9c6e-492e-8163-958debfe5437","Type":"ContainerStarted","Data":"c7e0c75c8db4f449af1076bce16c9dd16354925cfbd6e3b8fa667f6b52227562"}
Dec 10 12:17:59 crc kubenswrapper[4689]: W1210 12:17:59.000778 4689 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1a410bee_f5f8_4ab2_9750_3afed0e312b2.slice/crio-9ceeb1c97d4813676a30c0a71d4083006801de0faa1fa6784eaf95efa5967f47 WatchSource:0}: Error finding container 9ceeb1c97d4813676a30c0a71d4083006801de0faa1fa6784eaf95efa5967f47: Status 404 returned error can't find the container with id 9ceeb1c97d4813676a30c0a71d4083006801de0faa1fa6784eaf95efa5967f47
Dec 10 12:17:59 crc kubenswrapper[4689]: I1210 12:17:59.024417 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-9wlmb"]
Dec 10 12:17:59 crc kubenswrapper[4689]: I1210 12:17:59.043508 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-7jsv6" event={"ID":"2e2364cc-104f-4237-9ad5-c121a1c3fba6","Type":"ContainerStarted","Data":"ee8f8cc2e8a4c7b84e05bdc36b4ce7838702e7ddd738636ac5333b44b159c594"}
Dec 10 12:17:59 crc kubenswrapper[4689]: I1210 12:17:59.044318 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-558db77b4-7jsv6"
Dec 10 12:17:59 crc kubenswrapper[4689]: I1210 12:17:59.051716 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-rn6ml" event={"ID":"9a3f8229-cf04-4e18-8dc3-c9a63c4b46b4","Type":"ContainerStarted","Data":"562ca40a45c9b1fdc0d133bad4cc00062383e9c59774612c9a91d4e15f55f7bb"}
Dec 10 12:17:59 crc kubenswrapper[4689]: I1210 12:17:59.074007 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-558db77b4-7jsv6"
Dec 10 12:17:59 crc kubenswrapper[4689]: I1210 12:17:59.076448 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jw2qz\" (UID: \"dab0bc5b-ec14-46fa-9304-ab1b3b556eeb\") " pod="openshift-image-registry/image-registry-697d97f7c8-jw2qz"
Dec 10 12:17:59 crc kubenswrapper[4689]: E1210 12:17:59.077985 4689 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-10 12:17:59.577956529 +0000 UTC m=+147.366037657 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jw2qz" (UID: "dab0bc5b-ec14-46fa-9304-ab1b3b556eeb") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 10 12:17:59 crc kubenswrapper[4689]: I1210 12:17:59.090495 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-sfzm5" event={"ID":"d8990fd9-cdfb-4c98-80b3-794f1b371ee5","Type":"ContainerStarted","Data":"e32378ecfee705d280f3a9f3c8354d53b86c847258469a7da78722ad7f803b0f"}
Dec 10 12:17:59 crc kubenswrapper[4689]: I1210 12:17:59.090538 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-sfzm5" event={"ID":"d8990fd9-cdfb-4c98-80b3-794f1b371ee5","Type":"ContainerStarted","Data":"c7bc4dd4f53b98fbd0910a84c72954f3ee2a3947e6307b6a639504d73f291e65"}
Dec 10 12:17:59 crc kubenswrapper[4689]: I1210 12:17:59.091526 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-sfzm5"
Dec 10 12:17:59 crc kubenswrapper[4689]: I1210 12:17:59.169986 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-lkfnf"]
Dec 10 12:17:59 crc kubenswrapper[4689]: I1210 12:17:59.173292 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-s8lmw" event={"ID":"3eece420-e32c-4ea1-91f8-cc96bf144467","Type":"ContainerStarted","Data":"a84484ab5a7bb5f9900a51e95f78d27674f7a3e1c6157ff9d287da1ee2de003c"}
Dec 10 12:17:59 crc kubenswrapper[4689]: I1210 12:17:59.173341 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-s8lmw" event={"ID":"3eece420-e32c-4ea1-91f8-cc96bf144467","Type":"ContainerStarted","Data":"231d5fb6dc0c1bcbc6183123884df09da4dc3ec9c406caa1e6fda8f44d409d6f"}
Dec 10 12:17:59 crc kubenswrapper[4689]: I1210 12:17:59.175036 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-qbqzn"]
Dec 10 12:17:59 crc kubenswrapper[4689]: I1210 12:17:59.176244 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-kbbjg"]
Dec 10 12:17:59 crc kubenswrapper[4689]: I1210 12:17:59.178619 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 10 12:17:59 crc kubenswrapper[4689]: E1210 12:17:59.184473 4689 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-10 12:17:59.684440087 +0000 UTC m=+147.472521225 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 10 12:17:59 crc kubenswrapper[4689]: I1210 12:17:59.185006 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-69gng" event={"ID":"8ea56da0-e5ef-406e-bff9-b8652666745a","Type":"ContainerStarted","Data":"60ad506d2aaba47e081218d175d5a030566c5515b6b4d00e3594199becf2d5c1"}
Dec 10 12:17:59 crc kubenswrapper[4689]: I1210 12:17:59.196071 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-qc8tj" event={"ID":"e9883663-c0ff-4454-ba8f-caed05881365","Type":"ContainerStarted","Data":"b48ce95b413e08a91740bab8dd92e682b2791fbf6b5315e97fa6af2a55516b63"}
Dec 10 12:17:59 crc kubenswrapper[4689]: I1210 12:17:59.206890 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-rn6lb" event={"ID":"38480cb6-15a9-450a-9efd-71a7d346ef7c","Type":"ContainerStarted","Data":"597379e94d7db17e53298b41bc179eb739b5d4f63a29ec325fe9306c527dd2fd"}
Dec 10 12:17:59 crc kubenswrapper[4689]: I1210 12:17:59.213824 4689 generic.go:334] "Generic (PLEG): container finished" podID="b81b90f7-72e4-4a8d-881f-d117310a4a25" containerID="db884c4b08290ab92c6412ef0604f732f22fdc2ba10b4876dfaed0c3e6c87fc0" exitCode=0
Dec 10 12:17:59 crc kubenswrapper[4689]: I1210 12:17:59.214660 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-fz4nm" event={"ID":"b81b90f7-72e4-4a8d-881f-d117310a4a25","Type":"ContainerDied","Data":"db884c4b08290ab92c6412ef0604f732f22fdc2ba10b4876dfaed0c3e6c87fc0"}
Dec 10 12:17:59 crc kubenswrapper[4689]: I1210 12:17:59.214702 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-fz4nm" event={"ID":"b81b90f7-72e4-4a8d-881f-d117310a4a25","Type":"ContainerStarted","Data":"dad52f35102f5b8430446bd58e3f4f2ac31b784f5017a3231bcf936595d03dfc"}
Dec 10 12:17:59 crc kubenswrapper[4689]: I1210 12:17:59.224228 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-7v6dg"]
Dec 10 12:17:59 crc kubenswrapper[4689]: I1210 12:17:59.226082 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-8s58v"]
Dec 10 12:17:59 crc kubenswrapper[4689]: I1210 12:17:59.264094 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-zjvxn" event={"ID":"cca4e9b1-a2a3-4d9b-a73c-e6d7d4552e97","Type":"ContainerStarted","Data":"0d66f84104d27c598f8028ad9e58607d4d2feafa3a539b6d04863047727e5df5"}
Dec 10 12:17:59 crc kubenswrapper[4689]: I1210 12:17:59.283294 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jw2qz\" (UID: \"dab0bc5b-ec14-46fa-9304-ab1b3b556eeb\") " pod="openshift-image-registry/image-registry-697d97f7c8-jw2qz"
Dec 10 12:17:59 crc kubenswrapper[4689]: E1210 12:17:59.285078 4689 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-10 12:17:59.78506272 +0000 UTC m=+147.573143908 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jw2qz" (UID: "dab0bc5b-ec14-46fa-9304-ab1b3b556eeb") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 10 12:17:59 crc kubenswrapper[4689]: I1210 12:17:59.305684 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-2jzgr" event={"ID":"52f0638b-0791-4c8a-b86e-8d6ad95a9c05","Type":"ContainerStarted","Data":"bca645c35a15f39927b669c8ac42a190c87c24688d0ca32c72b0e325d4a903ba"}
Dec 10 12:17:59 crc kubenswrapper[4689]: W1210 12:17:59.315132 4689 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbe303c23_fd28_48b4_9463_65c5167285fe.slice/crio-41cafd1d787fb6c385d10f923c676a9a35ba81f2bd00b3722864e6bbe3b2a1b9 WatchSource:0}: Error finding container 41cafd1d787fb6c385d10f923c676a9a35ba81f2bd00b3722864e6bbe3b2a1b9: Status 404 returned error can't find the container with id 41cafd1d787fb6c385d10f923c676a9a35ba81f2bd00b3722864e6bbe3b2a1b9
Dec 10 12:17:59 crc kubenswrapper[4689]: I1210 12:17:59.321960 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-2gdrs" event={"ID":"86359667-d51a-449a-925d-375a1e98a8ca","Type":"ContainerStarted","Data":"f7e384bf4413dc9c34ce1ab8880b7c9d5961fc59416d2a73fd8d42affa0e7993"}
Dec 10 12:17:59 crc kubenswrapper[4689]: I1210 12:17:59.345470 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-nqgsm" event={"ID":"8e1b5e59-64b6-4fe6-ad6f-fa1b521be5e6","Type":"ContainerStarted","Data":"73c6477a1f4943184a3bf0d43d78dfe5ecdc788610cf3a11ffd565def018b76c"}
Dec 10 12:17:59 crc kubenswrapper[4689]: I1210 12:17:59.351015 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-d7pkc" event={"ID":"abad9838-aab6-4f7c-a042-83924f9e0809","Type":"ContainerStarted","Data":"01471a4bb4505c40476ac36b2f872dff37940e03118d2e2ca65c9fe42f3746b3"}
Dec 10 12:17:59 crc kubenswrapper[4689]: I1210 12:17:59.353530 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-xmwb5" event={"ID":"265037d4-b03f-4d1b-a643-5b5eb4e59738","Type":"ContainerStarted","Data":"f6f8543500d963ac38775a2c35958e499c3aab4dc38863353309c1bca1643e4f"}
Dec 10 12:17:59 crc kubenswrapper[4689]: I1210 12:17:59.387210 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 10 12:17:59 crc kubenswrapper[4689]: E1210 12:17:59.387421 4689 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-10 12:17:59.887396815 +0000 UTC m=+147.675477953 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 10 12:17:59 crc kubenswrapper[4689]: I1210 12:17:59.387793 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jw2qz\" (UID: \"dab0bc5b-ec14-46fa-9304-ab1b3b556eeb\") " pod="openshift-image-registry/image-registry-697d97f7c8-jw2qz"
Dec 10 12:17:59 crc kubenswrapper[4689]: E1210 12:17:59.389461 4689 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-10 12:17:59.889452726 +0000 UTC m=+147.677533864 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jw2qz" (UID: "dab0bc5b-ec14-46fa-9304-ab1b3b556eeb") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 10 12:17:59 crc kubenswrapper[4689]: I1210 12:17:59.393360 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-htx2z" event={"ID":"fd353a2b-c325-44b6-9e25-6a4c39213f9e","Type":"ContainerStarted","Data":"643d7e2010835708ae436decbcb29be3da0eb85eab397559f4e36fc3a3bb1aaa"}
Dec 10 12:17:59 crc kubenswrapper[4689]: I1210 12:17:59.407433 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-htx2z" event={"ID":"fd353a2b-c325-44b6-9e25-6a4c39213f9e","Type":"ContainerStarted","Data":"d4a3793dfe44be1c347597ab7400980cf4ffd89560f680c77bb6b99443b846c9"}
Dec 10 12:17:59 crc kubenswrapper[4689]: I1210 12:17:59.407565 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-879f6c89f-htx2z"
Dec 10 12:17:59 crc kubenswrapper[4689]: I1210 12:17:59.414788 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-879f6c89f-htx2z"
Dec 10 12:17:59 crc kubenswrapper[4689]: I1210 12:17:59.417929 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-mx2mm" event={"ID":"481fe942-34c8-42d9-9925-a674da9ca453","Type":"ContainerStarted","Data":"c484259bcbee543186bd531f68d371be6941a77784ad40eecc9c3afd3e8d5c9c"}
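The "SyncLoop (PLEG)" lines above are the Pod Lifecycle Event Generator relisting container state, diffing it against the previous relist, and emitting ContainerStarted/ContainerDied events that wake the sync loop for the affected pod; the generic.go:334 lines record the exit code observed for a container that finished between relists. A stripped-down model of that diff (illustrative, not kubelet code):

    package main

    import "fmt"

    // Stripped-down PLEG relist: compare previous and current container
    // state and emit per-pod lifecycle events for the sync loop.
    type EventType string

    const (
        ContainerStarted EventType = "ContainerStarted"
        ContainerDied    EventType = "ContainerDied"
    )

    type LifecycleEvent struct {
        PodID string
        Type  EventType
        Data  string // container ID
    }

    func relist(prev, curr map[string]bool, podID string) []LifecycleEvent {
        var events []LifecycleEvent
        for id, running := range curr {
            if running && !prev[id] {
                events = append(events, LifecycleEvent{podID, ContainerStarted, id})
            }
        }
        for id, running := range prev {
            if running && !curr[id] {
                events = append(events, LifecycleEvent{podID, ContainerDied, id})
            }
        }
        return events
    }

    func main() {
        prev := map[string]bool{"561f90cf": true}
        curr := map[string]bool{"561f90cf": false, "8cd1aa58": true}
        for _, e := range relist(prev, curr, "d9195e8d") {
            fmt.Printf("SyncLoop (PLEG): event for pod %s: %s %s\n", e.PodID, e.Type, e.Data)
        }
    }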
Dec 10 12:17:59 crc kubenswrapper[4689]: I1210 12:17:59.417986 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-mx2mm" event={"ID":"481fe942-34c8-42d9-9925-a674da9ca453","Type":"ContainerStarted","Data":"858378e2c27b46d468173d62ea597c14642a0c813fb8c7e907865001872562e9"}
Dec 10 12:17:59 crc kubenswrapper[4689]: I1210 12:17:59.422161 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-tzqjc" event={"ID":"5c541a72-1484-41a6-b251-748306e1068d","Type":"ContainerStarted","Data":"7e2c755a31ac91f09f2e244bb216d7f4e0e877bada960229f1a18864f459929a"}
Dec 10 12:17:59 crc kubenswrapper[4689]: I1210 12:17:59.432626 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-kpp45" event={"ID":"cd45363e-de7e-4e91-afe1-f82948764f4f","Type":"ContainerStarted","Data":"1a7d4737a710e1b2100bbd8190d2ba3b106be33639e15f950b897ee94410303e"}
Dec 10 12:17:59 crc kubenswrapper[4689]: I1210 12:17:59.432689 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-kpp45" event={"ID":"cd45363e-de7e-4e91-afe1-f82948764f4f","Type":"ContainerStarted","Data":"16e59a01aa329c77bb7099420b1f0109ec434ca0a87cb200c404de3b5f8c58b2"}
Dec 10 12:17:59 crc kubenswrapper[4689]: I1210 12:17:59.434044 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-kpp45"
Dec 10 12:17:59 crc kubenswrapper[4689]: I1210 12:17:59.442778 4689 patch_prober.go:28] interesting pod/downloads-7954f5f757-kpp45 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.23:8080/\": dial tcp 10.217.0.23:8080: connect: connection refused" start-of-body=
Dec 10 12:17:59 crc kubenswrapper[4689]: I1210 12:17:59.442845 4689 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-kpp45" podUID="cd45363e-de7e-4e91-afe1-f82948764f4f" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.23:8080/\": dial tcp 10.217.0.23:8080: connect: connection refused"
Dec 10 12:17:59 crc kubenswrapper[4689]: I1210 12:17:59.443704 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-2jzgr" podStartSLOduration=5.443689304 podStartE2EDuration="5.443689304s" podCreationTimestamp="2025-12-10 12:17:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 12:17:59.442606908 +0000 UTC m=+147.230688046" watchObservedRunningTime="2025-12-10 12:17:59.443689304 +0000 UTC m=+147.231770442"
Dec 10 12:17:59 crc kubenswrapper[4689]: I1210 12:17:59.444860 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-9s8ps" event={"ID":"ee767cde-d698-4c01-b221-33c158999e60","Type":"ContainerStarted","Data":"d6b376f542a0c4b8bf88f46122cb515ce1f7e0abf62053f50a340c26873a392f"}
Dec 10 12:17:59 crc kubenswrapper[4689]: I1210 12:17:59.467127 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-cpfdd" event={"ID":"1eb8e1ca-e0fa-448e-a502-a75ac460a5ca","Type":"ContainerStarted","Data":"e7a1857d09adaa808ad6c679b95fafe9f463816b40b755c018d9a7eee735a16f"}
Dec 10 12:17:59 crc kubenswrapper[4689]: I1210 12:17:59.473123 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29422815-gckmk" event={"ID":"5dca2df7-7c4c-40bb-8302-d6c089fd5486","Type":"ContainerStarted","Data":"3cf05882f5625c66a53e8b93167ef8087f6183b91b00710a64b4105ffe095a53"}
Dec 10 12:17:59 crc kubenswrapper[4689]: I1210 12:17:59.488544 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 10 12:17:59 crc kubenswrapper[4689]: E1210 12:17:59.489324 4689 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-10 12:17:59.9893022 +0000 UTC m=+147.777383338 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 10 12:17:59 crc kubenswrapper[4689]: I1210 12:17:59.495338 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-879f6c89f-htx2z" podStartSLOduration=118.495324978 podStartE2EDuration="1m58.495324978s" podCreationTimestamp="2025-12-10 12:16:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 12:17:59.491909764 +0000 UTC m=+147.279990912" watchObservedRunningTime="2025-12-10 12:17:59.495324978 +0000 UTC m=+147.283406116"
Dec 10 12:17:59 crc kubenswrapper[4689]: I1210 12:17:59.505523 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-shthb" event={"ID":"27f4e33e-8aae-44e8-918c-f25bf7eb9e37","Type":"ContainerStarted","Data":"97bfb0505fc1dc793e97c5b2b40d02d80419a9adae1cc0f6e7285662b31cd865"}
Dec 10 12:17:59 crc kubenswrapper[4689]: I1210 12:17:59.547264 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-7pv69" event={"ID":"762dce0f-3636-41ab-8d78-1b69e0fa714a","Type":"ContainerStarted","Data":"d2d2dc4cc1ca62f33d7ea15e39eccb3f8a9664c77dc3826b8afccd6ff131d068"}
Dec 10 12:17:59 crc kubenswrapper[4689]: I1210 12:17:59.578123 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-sfzm5"
Dec 10 12:17:59 crc kubenswrapper[4689]: I1210 12:17:59.589630 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jw2qz\" (UID: \"dab0bc5b-ec14-46fa-9304-ab1b3b556eeb\") " pod="openshift-image-registry/image-registry-697d97f7c8-jw2qz"
Dec 10 12:17:59 crc kubenswrapper[4689]: E1210 12:17:59.590119 4689 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-10 12:18:00.090104047 +0000 UTC m=+147.878185185 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jw2qz" (UID: "dab0bc5b-ec14-46fa-9304-ab1b3b556eeb") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 10 12:17:59 crc kubenswrapper[4689]: I1210 12:17:59.610196 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-ggzn5" event={"ID":"2aed137f-a24c-42c9-a0af-9ada3eeffa88","Type":"ContainerStarted","Data":"68473db0a3940fe8c0f9b20d5b6326444e9f7c52507a3eb9150d57da7fc542e9"}
Dec 10 12:17:59 crc kubenswrapper[4689]: I1210 12:17:59.610234 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-ggzn5" event={"ID":"2aed137f-a24c-42c9-a0af-9ada3eeffa88","Type":"ContainerStarted","Data":"bf7e5fdc0792d20265f759b7b9c91d965e852d143a3f8827e254fb6c3d28d0bb"}
Dec 10 12:17:59 crc kubenswrapper[4689]: I1210 12:17:59.610983 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-ggzn5"
Dec 10 12:17:59 crc kubenswrapper[4689]: I1210 12:17:59.623043 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-ms4wp" event={"ID":"8f02b797-c51d-4614-b1d6-b2b3294342d0","Type":"ContainerStarted","Data":"cb555e1f710c35452b6be726b1d761d47c6239119f9c4bafbfafde6a3b2bdb66"}
Dec 10 12:17:59 crc kubenswrapper[4689]: I1210 12:17:59.629747 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-69f744f599-2gdrs" podStartSLOduration=119.629731505 podStartE2EDuration="1m59.629731505s" podCreationTimestamp="2025-12-10 12:16:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 12:17:59.62873641 +0000 UTC m=+147.416817538" watchObservedRunningTime="2025-12-10 12:17:59.629731505 +0000 UTC m=+147.417812643"
Dec 10 12:17:59 crc kubenswrapper[4689]: I1210 12:17:59.631605 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-58897d9998-ggzn5"
Dec 10 12:17:59 crc kubenswrapper[4689]: I1210 12:17:59.699321 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 10 12:17:59 crc kubenswrapper[4689]: E1210 12:17:59.701444 4689 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-10 12:18:00.201425143 +0000 UTC m=+147.989506281 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 10 12:17:59 crc kubenswrapper[4689]: I1210 12:17:59.768084 4689 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5444994796-xmwb5"
Dec 10 12:17:59 crc kubenswrapper[4689]: I1210 12:17:59.770257 4689 patch_prober.go:28] interesting pod/router-default-5444994796-xmwb5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Dec 10 12:17:59 crc kubenswrapper[4689]: [-]has-synced failed: reason withheld
Dec 10 12:17:59 crc kubenswrapper[4689]: [+]process-running ok
Dec 10 12:17:59 crc kubenswrapper[4689]: healthz check failed
Dec 10 12:17:59 crc kubenswrapper[4689]: I1210 12:17:59.770302 4689 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-xmwb5" podUID="265037d4-b03f-4d1b-a643-5b5eb4e59738" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
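Two distinct probe-failure shapes appear here: the downloads pod's readiness probe cannot connect at all (connection refused), while the router's startup probe reaches its healthz endpoint but receives a 500 whose body itemizes the failing sub-checks ([-]backend-http, [-]has-synced). A minimal HTTP probe in the same spirit, keeping the start of the response body for the log line (illustrative, not the kubelet prober; the URL stands in for the pod IP seen in the log):

    package main

    import (
        "fmt"
        "io"
        "net/http"
        "time"
    )

    // Success is a 2xx/3xx status; anything else, or a transport error
    // such as "connection refused", is a failure.
    func probe(url string) (ok bool, detail string) {
        client := &http.Client{Timeout: 1 * time.Second}
        resp, err := client.Get(url)
        if err != nil {
            return false, err.Error() // e.g. "dial tcp ...: connect: connection refused"
        }
        defer resp.Body.Close()
        body, _ := io.ReadAll(io.LimitReader(resp.Body, 1024))
        if resp.StatusCode >= 200 && resp.StatusCode < 400 {
            return true, ""
        }
        return false, fmt.Sprintf("HTTP probe failed with statuscode: %d; start-of-body=%s", resp.StatusCode, body)
    }

    func main() {
        if ok, detail := probe("http://10.217.0.23:8080/"); !ok {
            fmt.Println("Probe failed:", detail)
        }
    }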
Dec 10 12:17:59 crc kubenswrapper[4689]: I1210 12:17:59.802260 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jw2qz\" (UID: \"dab0bc5b-ec14-46fa-9304-ab1b3b556eeb\") " pod="openshift-image-registry/image-registry-697d97f7c8-jw2qz"
Dec 10 12:17:59 crc kubenswrapper[4689]: E1210 12:17:59.803679 4689 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-10 12:18:00.303667706 +0000 UTC m=+148.091748844 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jw2qz" (UID: "dab0bc5b-ec14-46fa-9304-ab1b3b556eeb") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 10 12:17:59 crc kubenswrapper[4689]: I1210 12:17:59.819543 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-sfzm5" podStartSLOduration=118.819526008 podStartE2EDuration="1m58.819526008s" podCreationTimestamp="2025-12-10 12:16:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 12:17:59.75436093 +0000 UTC m=+147.542442068" watchObservedRunningTime="2025-12-10 12:17:59.819526008 +0000 UTC m=+147.607607146"
Dec 10 12:17:59 crc kubenswrapper[4689]: I1210 12:17:59.895366 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-5694c8668f-s8lmw" podStartSLOduration=118.895352679 podStartE2EDuration="1m58.895352679s" podCreationTimestamp="2025-12-10 12:16:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 12:17:59.893354209 +0000 UTC m=+147.681435357" watchObservedRunningTime="2025-12-10 12:17:59.895352679 +0000 UTC m=+147.683433817"
Dec 10 12:17:59 crc kubenswrapper[4689]: I1210 12:17:59.895719 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-558db77b4-7jsv6" podStartSLOduration=119.895715038 podStartE2EDuration="1m59.895715038s" podCreationTimestamp="2025-12-10 12:16:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 12:17:59.82123225 +0000 UTC m=+147.609313388" watchObservedRunningTime="2025-12-10 12:17:59.895715038 +0000 UTC m=+147.683796176"
Dec 10 12:17:59 crc kubenswrapper[4689]: I1210 12:17:59.904954 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 10 12:17:59 crc kubenswrapper[4689]: E1210 12:17:59.905301 4689 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-10 12:18:00.405272374 +0000 UTC m=+148.193353532 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 10 12:17:59 crc kubenswrapper[4689]: I1210 12:17:59.905459 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jw2qz\" (UID: \"dab0bc5b-ec14-46fa-9304-ab1b3b556eeb\") " pod="openshift-image-registry/image-registry-697d97f7c8-jw2qz"
Dec 10 12:17:59 crc kubenswrapper[4689]: E1210 12:17:59.905816 4689 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-10 12:18:00.405808897 +0000 UTC m=+148.193890035 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jw2qz" (UID: "dab0bc5b-ec14-46fa-9304-ab1b3b556eeb") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 10 12:17:59 crc kubenswrapper[4689]: I1210 12:17:59.930435 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-zjvxn" podStartSLOduration=5.930420304 podStartE2EDuration="5.930420304s" podCreationTimestamp="2025-12-10 12:17:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 12:17:59.928667861 +0000 UTC m=+147.716748999" watchObservedRunningTime="2025-12-10 12:17:59.930420304 +0000 UTC m=+147.718501442"
Dec 10 12:18:00 crc kubenswrapper[4689]: I1210 12:18:00.010053 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-rn6ml" podStartSLOduration=120.010038759 podStartE2EDuration="2m0.010038759s" podCreationTimestamp="2025-12-10 12:16:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 12:17:59.955708698 +0000 UTC m=+147.743789836" watchObservedRunningTime="2025-12-10 12:18:00.010038759 +0000 UTC m=+147.798119897"
Dec 10 12:18:00 crc kubenswrapper[4689]: I1210 12:18:00.011054 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-5444994796-xmwb5" podStartSLOduration=119.011050815 podStartE2EDuration="1m59.011050815s" podCreationTimestamp="2025-12-10 12:16:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 12:18:00.008470221 +0000 UTC m=+147.796551359" watchObservedRunningTime="2025-12-10 12:18:00.011050815 +0000 UTC m=+147.799131953"
Dec 10 12:18:00 crc kubenswrapper[4689]: I1210 12:18:00.016807 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 10 12:18:00 crc kubenswrapper[4689]: E1210 12:18:00.032181 4689 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-10 12:18:00.532150635 +0000 UTC m=+148.320231763 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 10 12:18:00 crc kubenswrapper[4689]: I1210 12:18:00.032344 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jw2qz\" (UID: \"dab0bc5b-ec14-46fa-9304-ab1b3b556eeb\") " pod="openshift-image-registry/image-registry-697d97f7c8-jw2qz"
Dec 10 12:18:00 crc kubenswrapper[4689]: E1210 12:18:00.032639 4689 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-10 12:18:00.532631317 +0000 UTC m=+148.320712455 (durationBeforeRetry 500ms).
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jw2qz" (UID: "dab0bc5b-ec14-46fa-9304-ab1b3b556eeb") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 12:18:00 crc kubenswrapper[4689]: I1210 12:18:00.042849 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-cpfdd" podStartSLOduration=120.042834218 podStartE2EDuration="2m0.042834218s" podCreationTimestamp="2025-12-10 12:16:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 12:18:00.042269865 +0000 UTC m=+147.830351003" watchObservedRunningTime="2025-12-10 12:18:00.042834218 +0000 UTC m=+147.830915356" Dec 10 12:18:00 crc kubenswrapper[4689]: I1210 12:18:00.073531 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-f9d7485db-9s8ps" podStartSLOduration=119.073516295 podStartE2EDuration="1m59.073516295s" podCreationTimestamp="2025-12-10 12:16:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 12:18:00.071919946 +0000 UTC m=+147.860001084" watchObservedRunningTime="2025-12-10 12:18:00.073516295 +0000 UTC m=+147.861597433" Dec 10 12:18:00 crc kubenswrapper[4689]: I1210 12:18:00.131904 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-ms4wp" podStartSLOduration=119.131888746 podStartE2EDuration="1m59.131888746s" podCreationTimestamp="2025-12-10 12:16:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 12:18:00.130768758 +0000 UTC m=+147.918849896" watchObservedRunningTime="2025-12-10 12:18:00.131888746 +0000 UTC m=+147.919969884" Dec 10 12:18:00 crc kubenswrapper[4689]: I1210 12:18:00.134860 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 10 12:18:00 crc kubenswrapper[4689]: E1210 12:18:00.135123 4689 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-10 12:18:00.635109055 +0000 UTC m=+148.423190193 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 12:18:00 crc kubenswrapper[4689]: I1210 12:18:00.198503 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-58897d9998-ggzn5" podStartSLOduration=119.198481639 podStartE2EDuration="1m59.198481639s" podCreationTimestamp="2025-12-10 12:16:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 12:18:00.187441737 +0000 UTC m=+147.975522875" watchObservedRunningTime="2025-12-10 12:18:00.198481639 +0000 UTC m=+147.986562777" Dec 10 12:18:00 crc kubenswrapper[4689]: I1210 12:18:00.239206 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jw2qz\" (UID: \"dab0bc5b-ec14-46fa-9304-ab1b3b556eeb\") " pod="openshift-image-registry/image-registry-697d97f7c8-jw2qz" Dec 10 12:18:00 crc kubenswrapper[4689]: E1210 12:18:00.239760 4689 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-10 12:18:00.739748197 +0000 UTC m=+148.527829325 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jw2qz" (UID: "dab0bc5b-ec14-46fa-9304-ab1b3b556eeb") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 12:18:00 crc kubenswrapper[4689]: I1210 12:18:00.258624 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-7954f5f757-kpp45" podStartSLOduration=119.258606302 podStartE2EDuration="1m59.258606302s" podCreationTimestamp="2025-12-10 12:16:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 12:18:00.255802003 +0000 UTC m=+148.043883141" watchObservedRunningTime="2025-12-10 12:18:00.258606302 +0000 UTC m=+148.046687430" Dec 10 12:18:00 crc kubenswrapper[4689]: I1210 12:18:00.342425 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 10 12:18:00 crc kubenswrapper[4689]: E1210 12:18:00.342727 4689 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-10 12:18:00.842712538 +0000 UTC m=+148.630793676 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 12:18:00 crc kubenswrapper[4689]: I1210 12:18:00.370597 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-b45778765-shthb" podStartSLOduration=119.370582775 podStartE2EDuration="1m59.370582775s" podCreationTimestamp="2025-12-10 12:16:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 12:18:00.327197845 +0000 UTC m=+148.115278983" watchObservedRunningTime="2025-12-10 12:18:00.370582775 +0000 UTC m=+148.158663913" Dec 10 12:18:00 crc kubenswrapper[4689]: I1210 12:18:00.371258 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-mx2mm" podStartSLOduration=119.371252822 podStartE2EDuration="1m59.371252822s" podCreationTimestamp="2025-12-10 12:16:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 12:18:00.369279973 +0000 UTC m=+148.157361111" watchObservedRunningTime="2025-12-10 12:18:00.371252822 +0000 UTC m=+148.159333960" Dec 10 12:18:00 crc kubenswrapper[4689]: I1210 12:18:00.443158 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jw2qz\" (UID: \"dab0bc5b-ec14-46fa-9304-ab1b3b556eeb\") " pod="openshift-image-registry/image-registry-697d97f7c8-jw2qz" Dec 10 12:18:00 crc kubenswrapper[4689]: E1210 12:18:00.443480 4689 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-10 12:18:00.943468424 +0000 UTC m=+148.731549562 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jw2qz" (UID: "dab0bc5b-ec14-46fa-9304-ab1b3b556eeb") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 12:18:00 crc kubenswrapper[4689]: I1210 12:18:00.545610 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 10 12:18:00 crc kubenswrapper[4689]: E1210 12:18:00.546187 4689 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-10 12:18:01.046170369 +0000 UTC m=+148.834251507 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 12:18:00 crc kubenswrapper[4689]: I1210 12:18:00.644839 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-zjvxn" event={"ID":"cca4e9b1-a2a3-4d9b-a73c-e6d7d4552e97","Type":"ContainerStarted","Data":"e3f33f350b74c9c1777a29158ffaa3e09f701f1282090d254890adeb86098f8d"} Dec 10 12:18:00 crc kubenswrapper[4689]: I1210 12:18:00.646515 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jw2qz\" (UID: \"dab0bc5b-ec14-46fa-9304-ab1b3b556eeb\") " pod="openshift-image-registry/image-registry-697d97f7c8-jw2qz" Dec 10 12:18:00 crc kubenswrapper[4689]: E1210 12:18:00.646858 4689 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-10 12:18:01.146846603 +0000 UTC m=+148.934927741 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jw2qz" (UID: "dab0bc5b-ec14-46fa-9304-ab1b3b556eeb") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 12:18:00 crc kubenswrapper[4689]: I1210 12:18:00.648468 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-7pv69" event={"ID":"762dce0f-3636-41ab-8d78-1b69e0fa714a","Type":"ContainerStarted","Data":"846ca008079aba931d644fcdc875a176adc7f8b648a1ce07d59920784b886c50"} Dec 10 12:18:00 crc kubenswrapper[4689]: I1210 12:18:00.648496 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-7pv69" event={"ID":"762dce0f-3636-41ab-8d78-1b69e0fa714a","Type":"ContainerStarted","Data":"1a158397b279719af9cc51c15948e1ef1adb22f59d9aa012e75393fee119caf8"} Dec 10 12:18:00 crc kubenswrapper[4689]: I1210 12:18:00.657034 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-d7pkc" event={"ID":"abad9838-aab6-4f7c-a042-83924f9e0809","Type":"ContainerStarted","Data":"29aefc798686dea632e0e1044f5228de4306b6c500d60d21fc134a2e4fcc7945"} Dec 10 12:18:00 crc kubenswrapper[4689]: I1210 12:18:00.676713 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-kbbjg" event={"ID":"3a545142-a29d-4b5c-8c5f-33bf1fc1857f","Type":"ContainerStarted","Data":"20d189b074956603f8b2fc24d2522350b5656f5ecbf46dd98e3247fd5c7e7039"} Dec 10 12:18:00 crc kubenswrapper[4689]: I1210 12:18:00.676750 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-kbbjg" event={"ID":"3a545142-a29d-4b5c-8c5f-33bf1fc1857f","Type":"ContainerStarted","Data":"79ab33a9ca0f2f9082e898054639042ce006609fecea1b89939d300a629cfcd7"} Dec 10 12:18:00 crc kubenswrapper[4689]: I1210 12:18:00.719630 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-g8snw" event={"ID":"7dbd6f55-bc16-49a6-b385-1ba0b50003cd","Type":"ContainerStarted","Data":"c617f84b7a5d2d609bac82f87694bd0147580794697041ba97f3d0202cadb710"} Dec 10 12:18:00 crc kubenswrapper[4689]: I1210 12:18:00.767092 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 10 12:18:00 crc kubenswrapper[4689]: E1210 12:18:00.768138 4689 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-10 12:18:01.268108104 +0000 UTC m=+149.056189242 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 12:18:00 crc kubenswrapper[4689]: I1210 12:18:00.772073 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jw2qz\" (UID: \"dab0bc5b-ec14-46fa-9304-ab1b3b556eeb\") " pod="openshift-image-registry/image-registry-697d97f7c8-jw2qz" Dec 10 12:18:00 crc kubenswrapper[4689]: I1210 12:18:00.777890 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29422815-gckmk" event={"ID":"5dca2df7-7c4c-40bb-8302-d6c089fd5486","Type":"ContainerStarted","Data":"d554ce9f44128013a4ba643a6b440f84c6a3b3b7ede2530107890b9be885a80b"} Dec 10 12:18:00 crc kubenswrapper[4689]: E1210 12:18:00.788122 4689 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-10 12:18:01.288105529 +0000 UTC m=+149.076186667 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jw2qz" (UID: "dab0bc5b-ec14-46fa-9304-ab1b3b556eeb") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 12:18:00 crc kubenswrapper[4689]: I1210 12:18:00.790994 4689 patch_prober.go:28] interesting pod/router-default-5444994796-xmwb5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 10 12:18:00 crc kubenswrapper[4689]: [-]has-synced failed: reason withheld Dec 10 12:18:00 crc kubenswrapper[4689]: [+]process-running ok Dec 10 12:18:00 crc kubenswrapper[4689]: healthz check failed Dec 10 12:18:00 crc kubenswrapper[4689]: I1210 12:18:00.791059 4689 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-xmwb5" podUID="265037d4-b03f-4d1b-a643-5b5eb4e59738" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 10 12:18:00 crc kubenswrapper[4689]: I1210 12:18:00.795760 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-7pv69" podStartSLOduration=119.795609013 podStartE2EDuration="1m59.795609013s" podCreationTimestamp="2025-12-10 12:16:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 12:18:00.704213078 +0000 UTC m=+148.492294216" watchObservedRunningTime="2025-12-10 12:18:00.795609013 +0000 UTC m=+148.583690151" Dec 10 12:18:00 crc kubenswrapper[4689]: I1210 
12:18:00.796643 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-9c57cc56f-kbbjg" podStartSLOduration=119.796630679 podStartE2EDuration="1m59.796630679s" podCreationTimestamp="2025-12-10 12:16:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 12:18:00.780726386 +0000 UTC m=+148.568807524" watchObservedRunningTime="2025-12-10 12:18:00.796630679 +0000 UTC m=+148.584711817" Dec 10 12:18:00 crc kubenswrapper[4689]: I1210 12:18:00.811128 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-rn6lb" event={"ID":"38480cb6-15a9-450a-9efd-71a7d346ef7c","Type":"ContainerStarted","Data":"a2ee22f20bb35870348bf10c0ae78218ab7a4955a032eae810cbbce5a22e8654"} Dec 10 12:18:00 crc kubenswrapper[4689]: I1210 12:18:00.882857 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 10 12:18:00 crc kubenswrapper[4689]: E1210 12:18:00.883822 4689 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-10 12:18:01.38380473 +0000 UTC m=+149.171885868 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 12:18:00 crc kubenswrapper[4689]: I1210 12:18:00.886959 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29422815-gckmk" podStartSLOduration=119.867950259 podStartE2EDuration="1m59.867950259s" podCreationTimestamp="2025-12-10 12:16:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 12:18:00.86600529 +0000 UTC m=+148.654086418" watchObservedRunningTime="2025-12-10 12:18:00.867950259 +0000 UTC m=+148.656031397" Dec 10 12:18:00 crc kubenswrapper[4689]: I1210 12:18:00.922957 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-shthb" event={"ID":"27f4e33e-8aae-44e8-918c-f25bf7eb9e37","Type":"ContainerStarted","Data":"ba76b1d5f79b83d4900dc13f72468bd8660060ea426361dd69d5ed0b93de2e79"} Dec 10 12:18:00 crc kubenswrapper[4689]: I1210 12:18:00.946174 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-69gng" event={"ID":"8ea56da0-e5ef-406e-bff9-b8652666745a","Type":"ContainerStarted","Data":"aea3c107741c5bd27490dbd1e52061ae0bc2967feaf7d8db952f817958d05ad0"} Dec 10 12:18:00 crc kubenswrapper[4689]: I1210 12:18:00.947325 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-69gng" Dec 10 12:18:00 crc kubenswrapper[4689]: I1210 12:18:00.953977 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-2jzgr" event={"ID":"52f0638b-0791-4c8a-b86e-8d6ad95a9c05","Type":"ContainerStarted","Data":"248ee04372ae77a047b007a6fef38948f4706a1a6fb695ed02961435f0c94b00"} Dec 10 12:18:00 crc kubenswrapper[4689]: I1210 12:18:00.960356 4689 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-69gng container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.25:8443/healthz\": dial tcp 10.217.0.25:8443: connect: connection refused" start-of-body= Dec 10 12:18:00 crc kubenswrapper[4689]: I1210 12:18:00.960419 4689 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-69gng" podUID="8ea56da0-e5ef-406e-bff9-b8652666745a" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.25:8443/healthz\": dial tcp 10.217.0.25:8443: connect: connection refused" Dec 10 12:18:00 crc kubenswrapper[4689]: I1210 12:18:00.974885 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-8s58v" event={"ID":"d5c163ba-fbd3-49b8-8b2f-ad0de7210e6f","Type":"ContainerStarted","Data":"2f0073f9d8e7bbe3d4e8fca707b7a3bf4bbebc15724750658e5a62933b58979e"} Dec 10 12:18:00 crc kubenswrapper[4689]: I1210 12:18:00.983775 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jw2qz\" (UID: \"dab0bc5b-ec14-46fa-9304-ab1b3b556eeb\") " pod="openshift-image-registry/image-registry-697d97f7c8-jw2qz" Dec 10 12:18:00 crc kubenswrapper[4689]: E1210 12:18:00.984136 4689 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-10 12:18:01.484124606 +0000 UTC m=+149.272205744 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jw2qz" (UID: "dab0bc5b-ec14-46fa-9304-ab1b3b556eeb") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 12:18:00 crc kubenswrapper[4689]: I1210 12:18:00.985472 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-z922x" event={"ID":"0c1b90bc-688a-4b71-bb7b-a41d80d75f7b","Type":"ContainerStarted","Data":"fee0823c7eaafb44735487ff2e9813c6d1dc400ced05ebe4e09c25b7521533e7"} Dec 10 12:18:00 crc kubenswrapper[4689]: I1210 12:18:00.985544 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-z922x" event={"ID":"0c1b90bc-688a-4b71-bb7b-a41d80d75f7b","Type":"ContainerStarted","Data":"6cdc4fb202a7b913816b45bcddc4687ab04b5302fd69ed8c08ae0486a459c4d5"} Dec 10 12:18:00 crc kubenswrapper[4689]: I1210 12:18:00.997203 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-qc8tj" event={"ID":"e9883663-c0ff-4454-ba8f-caed05881365","Type":"ContainerStarted","Data":"25a3b6356d1d4d91a9a28a84d0fb1abf6ecd30e19d2914f95f2e84ba1d597cd5"} Dec 10 12:18:00 crc kubenswrapper[4689]: I1210 12:18:00.997988 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-rn6lb" podStartSLOduration=119.997961896 podStartE2EDuration="1m59.997961896s" podCreationTimestamp="2025-12-10 12:16:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 12:18:00.906777026 +0000 UTC m=+148.694858164" watchObservedRunningTime="2025-12-10 12:18:00.997961896 +0000 UTC m=+148.786043034" Dec 10 12:18:01 crc kubenswrapper[4689]: I1210 12:18:01.043224 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-nqgsm" event={"ID":"8e1b5e59-64b6-4fe6-ad6f-fa1b521be5e6","Type":"ContainerStarted","Data":"2ba12a410cdc906dcec9f22447a725b1e60d1876c238bf25112a0cb8b0d1dc7d"} Dec 10 12:18:01 crc kubenswrapper[4689]: I1210 12:18:01.043540 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-69gng" podStartSLOduration=120.043529311 podStartE2EDuration="2m0.043529311s" podCreationTimestamp="2025-12-10 12:16:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 12:18:00.993637709 +0000 UTC m=+148.781718847" watchObservedRunningTime="2025-12-10 12:18:01.043529311 +0000 UTC m=+148.831610449" Dec 10 12:18:01 crc kubenswrapper[4689]: I1210 12:18:01.051073 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-ks26t"] Dec 10 12:18:01 crc kubenswrapper[4689]: I1210 12:18:01.051182 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-z922x" 
podStartSLOduration=120.051170539 podStartE2EDuration="2m0.051170539s" podCreationTimestamp="2025-12-10 12:16:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 12:18:01.050862752 +0000 UTC m=+148.838943890" watchObservedRunningTime="2025-12-10 12:18:01.051170539 +0000 UTC m=+148.839251677" Dec 10 12:18:01 crc kubenswrapper[4689]: I1210 12:18:01.052056 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-ks26t" Dec 10 12:18:01 crc kubenswrapper[4689]: I1210 12:18:01.055535 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Dec 10 12:18:01 crc kubenswrapper[4689]: I1210 12:18:01.056607 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-pks9r" event={"ID":"30c2b6ef-98b7-4411-a10b-a83925ed5ed0","Type":"ContainerStarted","Data":"84eb598d2cda7e2d37abbf9c9303034751ac1ca709173e001c245b757b8cc20a"} Dec 10 12:18:01 crc kubenswrapper[4689]: I1210 12:18:01.056638 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-pks9r" event={"ID":"30c2b6ef-98b7-4411-a10b-a83925ed5ed0","Type":"ContainerStarted","Data":"02fa16bfc52f0d4770200bbf433f7ef93e99f248d815b1e810b7d23e4186c397"} Dec 10 12:18:01 crc kubenswrapper[4689]: I1210 12:18:01.059010 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-ms4wp" event={"ID":"8f02b797-c51d-4614-b1d6-b2b3294342d0","Type":"ContainerStarted","Data":"8a713b0490b18289dbb46a34c1a285dedc60006d598b3dd05a7f0e624825d481"} Dec 10 12:18:01 crc kubenswrapper[4689]: I1210 12:18:01.086866 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 10 12:18:01 crc kubenswrapper[4689]: I1210 12:18:01.087112 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9bccn\" (UniqueName: \"kubernetes.io/projected/c8d409aa-8f6d-4ed5-816c-e572e371d425-kube-api-access-9bccn\") pod \"certified-operators-ks26t\" (UID: \"c8d409aa-8f6d-4ed5-816c-e572e371d425\") " pod="openshift-marketplace/certified-operators-ks26t" Dec 10 12:18:01 crc kubenswrapper[4689]: I1210 12:18:01.087222 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c8d409aa-8f6d-4ed5-816c-e572e371d425-utilities\") pod \"certified-operators-ks26t\" (UID: \"c8d409aa-8f6d-4ed5-816c-e572e371d425\") " pod="openshift-marketplace/certified-operators-ks26t" Dec 10 12:18:01 crc kubenswrapper[4689]: I1210 12:18:01.087261 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c8d409aa-8f6d-4ed5-816c-e572e371d425-catalog-content\") pod \"certified-operators-ks26t\" (UID: \"c8d409aa-8f6d-4ed5-816c-e572e371d425\") " pod="openshift-marketplace/certified-operators-ks26t" Dec 10 12:18:01 crc kubenswrapper[4689]: E1210 12:18:01.087993 4689 nestedpendingoperations.go:348] 
Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-10 12:18:01.587963238 +0000 UTC m=+149.376044376 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 12:18:01 crc kubenswrapper[4689]: I1210 12:18:01.094158 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-km6qw" event={"ID":"3d09bb78-b81a-4558-8433-192e7fc846df","Type":"ContainerStarted","Data":"c05dd7c682a2a80c9d9dc8c3546f68fb3e4205ff3835549ba1bdbac706dfdc73"} Dec 10 12:18:01 crc kubenswrapper[4689]: I1210 12:18:01.094201 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-km6qw" event={"ID":"3d09bb78-b81a-4558-8433-192e7fc846df","Type":"ContainerStarted","Data":"ec3617fcfa433e3348642ec7ac47342ed99278016e5a5a85966488b5b2cab102"} Dec 10 12:18:01 crc kubenswrapper[4689]: I1210 12:18:01.094948 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-km6qw" Dec 10 12:18:01 crc kubenswrapper[4689]: I1210 12:18:01.098366 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-9wlmb" event={"ID":"88d80bbe-a9ab-4c91-b0b2-485e106dd150","Type":"ContainerStarted","Data":"959b0b1179d522d266cd5353b9313a9bb5dd0f748cb8f893bc5802f46f7896b8"} Dec 10 12:18:01 crc kubenswrapper[4689]: I1210 12:18:01.098409 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-9wlmb" event={"ID":"88d80bbe-a9ab-4c91-b0b2-485e106dd150","Type":"ContainerStarted","Data":"819a8a68360e50fdd9602a6f72bb09c32c287cc688df27c76a6cea175d88e616"} Dec 10 12:18:01 crc kubenswrapper[4689]: I1210 12:18:01.099105 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-9wlmb" Dec 10 12:18:01 crc kubenswrapper[4689]: I1210 12:18:01.099466 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-ks26t"] Dec 10 12:18:01 crc kubenswrapper[4689]: I1210 12:18:01.113535 4689 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-9wlmb container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.29:8080/healthz\": dial tcp 10.217.0.29:8080: connect: connection refused" start-of-body= Dec 10 12:18:01 crc kubenswrapper[4689]: I1210 12:18:01.113601 4689 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-9wlmb" podUID="88d80bbe-a9ab-4c91-b0b2-485e106dd150" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.29:8080/healthz\": dial tcp 10.217.0.29:8080: connect: connection refused" Dec 10 12:18:01 crc kubenswrapper[4689]: I1210 12:18:01.121258 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-controller-84d6567774-9rk82" event={"ID":"7a4e09c2-7d4d-4078-b9a9-353f4c8063a4","Type":"ContainerStarted","Data":"4ceb144c7d9aa420999086398ea5f4d0d2e308926c4f4f76948c0cc9a56c8090"} Dec 10 12:18:01 crc kubenswrapper[4689]: I1210 12:18:01.121301 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-9rk82" event={"ID":"7a4e09c2-7d4d-4078-b9a9-353f4c8063a4","Type":"ContainerStarted","Data":"26ef6a461f3fdcc2e67b76eb49a158ea50a71797cacc3f659bf1b9a3fa97ae6c"} Dec 10 12:18:01 crc kubenswrapper[4689]: I1210 12:18:01.147191 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-7v6dg" event={"ID":"4ec880c8-61ec-4f66-976e-b656d28308a9","Type":"ContainerStarted","Data":"ebb4a0e76550d207df455d3faefbdb13ac8476f855d10a07b9961b089a8c67ac"} Dec 10 12:18:01 crc kubenswrapper[4689]: I1210 12:18:01.152787 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-777779d784-pks9r" podStartSLOduration=120.152776297 podStartE2EDuration="2m0.152776297s" podCreationTimestamp="2025-12-10 12:16:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 12:18:01.152249344 +0000 UTC m=+148.940330482" watchObservedRunningTime="2025-12-10 12:18:01.152776297 +0000 UTC m=+148.940857435" Dec 10 12:18:01 crc kubenswrapper[4689]: I1210 12:18:01.154600 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-nqgsm" podStartSLOduration=120.154596001 podStartE2EDuration="2m0.154596001s" podCreationTimestamp="2025-12-10 12:16:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 12:18:01.130800434 +0000 UTC m=+148.918881572" watchObservedRunningTime="2025-12-10 12:18:01.154596001 +0000 UTC m=+148.942677139" Dec 10 12:18:01 crc kubenswrapper[4689]: I1210 12:18:01.177474 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-fz4nm" event={"ID":"b81b90f7-72e4-4a8d-881f-d117310a4a25","Type":"ContainerStarted","Data":"d333a9efd2fc930e00b4686616abb2dfd64f61e62bfd1ffec35ed75f713599fd"} Dec 10 12:18:01 crc kubenswrapper[4689]: I1210 12:18:01.178082 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-fz4nm" Dec 10 12:18:01 crc kubenswrapper[4689]: I1210 12:18:01.189682 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9bccn\" (UniqueName: \"kubernetes.io/projected/c8d409aa-8f6d-4ed5-816c-e572e371d425-kube-api-access-9bccn\") pod \"certified-operators-ks26t\" (UID: \"c8d409aa-8f6d-4ed5-816c-e572e371d425\") " pod="openshift-marketplace/certified-operators-ks26t" Dec 10 12:18:01 crc kubenswrapper[4689]: I1210 12:18:01.189741 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jw2qz\" (UID: \"dab0bc5b-ec14-46fa-9304-ab1b3b556eeb\") " 
pod="openshift-image-registry/image-registry-697d97f7c8-jw2qz" Dec 10 12:18:01 crc kubenswrapper[4689]: I1210 12:18:01.189769 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c8d409aa-8f6d-4ed5-816c-e572e371d425-utilities\") pod \"certified-operators-ks26t\" (UID: \"c8d409aa-8f6d-4ed5-816c-e572e371d425\") " pod="openshift-marketplace/certified-operators-ks26t" Dec 10 12:18:01 crc kubenswrapper[4689]: I1210 12:18:01.189805 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c8d409aa-8f6d-4ed5-816c-e572e371d425-catalog-content\") pod \"certified-operators-ks26t\" (UID: \"c8d409aa-8f6d-4ed5-816c-e572e371d425\") " pod="openshift-marketplace/certified-operators-ks26t" Dec 10 12:18:01 crc kubenswrapper[4689]: I1210 12:18:01.190162 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c8d409aa-8f6d-4ed5-816c-e572e371d425-catalog-content\") pod \"certified-operators-ks26t\" (UID: \"c8d409aa-8f6d-4ed5-816c-e572e371d425\") " pod="openshift-marketplace/certified-operators-ks26t" Dec 10 12:18:01 crc kubenswrapper[4689]: E1210 12:18:01.190879 4689 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-10 12:18:01.690869966 +0000 UTC m=+149.478951104 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jw2qz" (UID: "dab0bc5b-ec14-46fa-9304-ab1b3b556eeb") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 12:18:01 crc kubenswrapper[4689]: I1210 12:18:01.191269 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c8d409aa-8f6d-4ed5-816c-e572e371d425-utilities\") pod \"certified-operators-ks26t\" (UID: \"c8d409aa-8f6d-4ed5-816c-e572e371d425\") " pod="openshift-marketplace/certified-operators-ks26t" Dec 10 12:18:01 crc kubenswrapper[4689]: I1210 12:18:01.199567 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-km6qw" podStartSLOduration=120.19954954 podStartE2EDuration="2m0.19954954s" podCreationTimestamp="2025-12-10 12:16:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 12:18:01.181933976 +0000 UTC m=+148.970015114" watchObservedRunningTime="2025-12-10 12:18:01.19954954 +0000 UTC m=+148.987630678" Dec 10 12:18:01 crc kubenswrapper[4689]: I1210 12:18:01.200891 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-tcsl7" event={"ID":"117123b0-4673-4a20-8e0c-7bca235e3168","Type":"ContainerStarted","Data":"5266af928c6936f6d0fc48af8afa5adc5513a2deaeee12f01b01fa4ab45043c3"} Dec 10 12:18:01 crc kubenswrapper[4689]: I1210 12:18:01.200934 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-tcsl7" event={"ID":"117123b0-4673-4a20-8e0c-7bca235e3168","Type":"ContainerStarted","Data":"40863ce8912d29b45c17ff2bdec08922ebef9e2c768bb53427f5f268d2f5ee00"} Dec 10 12:18:01 crc kubenswrapper[4689]: I1210 12:18:01.221118 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-tzqjc" event={"ID":"5c541a72-1484-41a6-b251-748306e1068d","Type":"ContainerStarted","Data":"57384c63f6034beb107857476e02e5f97e16bd8c2997ad254628874ad1185ea6"} Dec 10 12:18:01 crc kubenswrapper[4689]: I1210 12:18:01.226963 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-rkh5c"] Dec 10 12:18:01 crc kubenswrapper[4689]: I1210 12:18:01.228607 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-rkh5c" Dec 10 12:18:01 crc kubenswrapper[4689]: I1210 12:18:01.237905 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-9wlmb" podStartSLOduration=120.237890167 podStartE2EDuration="2m0.237890167s" podCreationTimestamp="2025-12-10 12:16:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 12:18:01.235413766 +0000 UTC m=+149.023494904" watchObservedRunningTime="2025-12-10 12:18:01.237890167 +0000 UTC m=+149.025971315" Dec 10 12:18:01 crc kubenswrapper[4689]: I1210 12:18:01.238704 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Dec 10 12:18:01 crc kubenswrapper[4689]: I1210 12:18:01.242216 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-qbqzn" event={"ID":"be303c23-fd28-48b4-9463-65c5167285fe","Type":"ContainerStarted","Data":"466fc1bb18973363ed64e78aa6ba47d59a4d535c32cc863afdc5336474fd79c6"} Dec 10 12:18:01 crc kubenswrapper[4689]: I1210 12:18:01.242246 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-qbqzn" event={"ID":"be303c23-fd28-48b4-9463-65c5167285fe","Type":"ContainerStarted","Data":"41cafd1d787fb6c385d10f923c676a9a35ba81f2bd00b3722864e6bbe3b2a1b9"} Dec 10 12:18:01 crc kubenswrapper[4689]: I1210 12:18:01.242835 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9bccn\" (UniqueName: \"kubernetes.io/projected/c8d409aa-8f6d-4ed5-816c-e572e371d425-kube-api-access-9bccn\") pod \"certified-operators-ks26t\" (UID: \"c8d409aa-8f6d-4ed5-816c-e572e371d425\") " pod="openshift-marketplace/certified-operators-ks26t" Dec 10 12:18:01 crc kubenswrapper[4689]: I1210 12:18:01.266223 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-rkh5c"] Dec 10 12:18:01 crc kubenswrapper[4689]: I1210 12:18:01.267927 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-tcsl7" podStartSLOduration=120.267911357 podStartE2EDuration="2m0.267911357s" podCreationTimestamp="2025-12-10 12:16:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 12:18:01.266360319 +0000 UTC m=+149.054441457" watchObservedRunningTime="2025-12-10 12:18:01.267911357 +0000 UTC 
m=+149.055992495" Dec 10 12:18:01 crc kubenswrapper[4689]: I1210 12:18:01.279202 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-gz5pm" event={"ID":"56120454-e6eb-4d76-92ee-b8083c027c12","Type":"ContainerStarted","Data":"a13592ae933527dbdfbeefa7986ee6fee090e5ee1446824582070a08652619d8"} Dec 10 12:18:01 crc kubenswrapper[4689]: I1210 12:18:01.279243 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-gz5pm" event={"ID":"56120454-e6eb-4d76-92ee-b8083c027c12","Type":"ContainerStarted","Data":"4708c1d0002ea1cb76c59d00c32707bb25a221d780037384131eda12699d2dc5"} Dec 10 12:18:01 crc kubenswrapper[4689]: I1210 12:18:01.279254 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-gz5pm" event={"ID":"56120454-e6eb-4d76-92ee-b8083c027c12","Type":"ContainerStarted","Data":"8806b89e2f88f09f78500784e06de243850fd52bbcf3320d543a51c0eeec93a2"} Dec 10 12:18:01 crc kubenswrapper[4689]: I1210 12:18:01.291737 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 10 12:18:01 crc kubenswrapper[4689]: I1210 12:18:01.291924 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d51b889b-7485-4c32-84de-3ddfd7ce23e9-utilities\") pod \"community-operators-rkh5c\" (UID: \"d51b889b-7485-4c32-84de-3ddfd7ce23e9\") " pod="openshift-marketplace/community-operators-rkh5c" Dec 10 12:18:01 crc kubenswrapper[4689]: I1210 12:18:01.292024 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d51b889b-7485-4c32-84de-3ddfd7ce23e9-catalog-content\") pod \"community-operators-rkh5c\" (UID: \"d51b889b-7485-4c32-84de-3ddfd7ce23e9\") " pod="openshift-marketplace/community-operators-rkh5c" Dec 10 12:18:01 crc kubenswrapper[4689]: I1210 12:18:01.292115 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5sjfd\" (UniqueName: \"kubernetes.io/projected/d51b889b-7485-4c32-84de-3ddfd7ce23e9-kube-api-access-5sjfd\") pod \"community-operators-rkh5c\" (UID: \"d51b889b-7485-4c32-84de-3ddfd7ce23e9\") " pod="openshift-marketplace/community-operators-rkh5c" Dec 10 12:18:01 crc kubenswrapper[4689]: E1210 12:18:01.292223 4689 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-10 12:18:01.792208278 +0000 UTC m=+149.580289416 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 10 12:18:01 crc kubenswrapper[4689]: I1210 12:18:01.294285 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-25n7p" event={"ID":"17db82f9-9c6e-492e-8163-958debfe5437","Type":"ContainerStarted","Data":"5359e74417a7ea3375331194f371901b5fb73a56b706d3d23fde7645b55281ec"}
Dec 10 12:18:01 crc kubenswrapper[4689]: I1210 12:18:01.295174 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-25n7p"
Dec 10 12:18:01 crc kubenswrapper[4689]: I1210 12:18:01.308314 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-dpmvq" event={"ID":"1a410bee-f5f8-4ab2-9750-3afed0e312b2","Type":"ContainerStarted","Data":"b833250d06d42eb14f7694e7cc031547e359918c8504cdf9de02e9e542dccd2f"}
Dec 10 12:18:01 crc kubenswrapper[4689]: I1210 12:18:01.308355 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-dpmvq" event={"ID":"1a410bee-f5f8-4ab2-9750-3afed0e312b2","Type":"ContainerStarted","Data":"c137c01814b5c0b794129c8dfeb0020995295303b66a28496466ba3947d87191"}
Dec 10 12:18:01 crc kubenswrapper[4689]: I1210 12:18:01.308367 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-dpmvq" event={"ID":"1a410bee-f5f8-4ab2-9750-3afed0e312b2","Type":"ContainerStarted","Data":"9ceeb1c97d4813676a30c0a71d4083006801de0faa1fa6784eaf95efa5967f47"}
Dec 10 12:18:01 crc kubenswrapper[4689]: I1210 12:18:01.310458 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-25n7p"
Dec 10 12:18:01 crc kubenswrapper[4689]: I1210 12:18:01.318498 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-7777fb866f-fz4nm" podStartSLOduration=120.318484985 podStartE2EDuration="2m0.318484985s" podCreationTimestamp="2025-12-10 12:16:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 12:18:01.29637647 +0000 UTC m=+149.084457608" watchObservedRunningTime="2025-12-10 12:18:01.318484985 +0000 UTC m=+149.106566123"
Dec 10 12:18:01 crc kubenswrapper[4689]: I1210 12:18:01.318869 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-xmwb5" event={"ID":"265037d4-b03f-4d1b-a643-5b5eb4e59738","Type":"ContainerStarted","Data":"9fa9fcf3c9b8fcb56b2d6916d82b8ebe8f1cc4bad3c72a69ddb24f47b6cc5aab"}
Dec 10 12:18:01 crc kubenswrapper[4689]: I1210 12:18:01.333982 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-mdqrt" event={"ID":"d9195e8d-4ad5-406b-b2fe-1c107df51433","Type":"ContainerStarted","Data":"7c7d293f819c1507f6c95a1c653a46b842eef68062dc6e3115df38137300d3a9"}
Dec 10 12:18:01 crc kubenswrapper[4689]: I1210 12:18:01.343287 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-dpmvq" podStartSLOduration=120.343275837 podStartE2EDuration="2m0.343275837s" podCreationTimestamp="2025-12-10 12:16:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 12:18:01.342342674 +0000 UTC m=+149.130423812" watchObservedRunningTime="2025-12-10 12:18:01.343275837 +0000 UTC m=+149.131356965"
Dec 10 12:18:01 crc kubenswrapper[4689]: I1210 12:18:01.344205 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-lkfnf" event={"ID":"c7fe8224-0ecc-4289-9a5d-107f9d6d00b5","Type":"ContainerStarted","Data":"86f29341200a276be954084fbfbf83a53c00509ee7542b8918df2d00a82dd8a6"}
Dec 10 12:18:01 crc kubenswrapper[4689]: I1210 12:18:01.344527 4689 patch_prober.go:28] interesting pod/downloads-7954f5f757-kpp45 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.23:8080/\": dial tcp 10.217.0.23:8080: connect: connection refused" start-of-body=
Dec 10 12:18:01 crc kubenswrapper[4689]: I1210 12:18:01.344561 4689 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-kpp45" podUID="cd45363e-de7e-4e91-afe1-f82948764f4f" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.23:8080/\": dial tcp 10.217.0.23:8080: connect: connection refused"
Dec 10 12:18:01 crc kubenswrapper[4689]: I1210 12:18:01.373423 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-ks26t"
Dec 10 12:18:01 crc kubenswrapper[4689]: I1210 12:18:01.381920 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-25n7p" podStartSLOduration=120.381907451 podStartE2EDuration="2m0.381907451s" podCreationTimestamp="2025-12-10 12:16:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 12:18:01.381116541 +0000 UTC m=+149.169197669" watchObservedRunningTime="2025-12-10 12:18:01.381907451 +0000 UTC m=+149.169988579"
Dec 10 12:18:01 crc kubenswrapper[4689]: I1210 12:18:01.401992 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-gz5pm" podStartSLOduration=120.401964405 podStartE2EDuration="2m0.401964405s" podCreationTimestamp="2025-12-10 12:16:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 12:18:01.401172365 +0000 UTC m=+149.189253503" watchObservedRunningTime="2025-12-10 12:18:01.401964405 +0000 UTC m=+149.190045543"
Dec 10 12:18:01 crc kubenswrapper[4689]: I1210 12:18:01.402686 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d51b889b-7485-4c32-84de-3ddfd7ce23e9-catalog-content\") pod \"community-operators-rkh5c\" (UID: \"d51b889b-7485-4c32-84de-3ddfd7ce23e9\") " pod="openshift-marketplace/community-operators-rkh5c"
Dec 10 12:18:01 crc kubenswrapper[4689]: I1210 12:18:01.402758 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jw2qz\" (UID: \"dab0bc5b-ec14-46fa-9304-ab1b3b556eeb\") " pod="openshift-image-registry/image-registry-697d97f7c8-jw2qz"
Dec 10 12:18:01 crc kubenswrapper[4689]: I1210 12:18:01.402791 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5sjfd\" (UniqueName: \"kubernetes.io/projected/d51b889b-7485-4c32-84de-3ddfd7ce23e9-kube-api-access-5sjfd\") pod \"community-operators-rkh5c\" (UID: \"d51b889b-7485-4c32-84de-3ddfd7ce23e9\") " pod="openshift-marketplace/community-operators-rkh5c"
Dec 10 12:18:01 crc kubenswrapper[4689]: I1210 12:18:01.402831 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d51b889b-7485-4c32-84de-3ddfd7ce23e9-utilities\") pod \"community-operators-rkh5c\" (UID: \"d51b889b-7485-4c32-84de-3ddfd7ce23e9\") " pod="openshift-marketplace/community-operators-rkh5c"
Dec 10 12:18:01 crc kubenswrapper[4689]: I1210 12:18:01.403306 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d51b889b-7485-4c32-84de-3ddfd7ce23e9-utilities\") pod \"community-operators-rkh5c\" (UID: \"d51b889b-7485-4c32-84de-3ddfd7ce23e9\") " pod="openshift-marketplace/community-operators-rkh5c"
Dec 10 12:18:01 crc kubenswrapper[4689]: I1210 12:18:01.403448 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d51b889b-7485-4c32-84de-3ddfd7ce23e9-catalog-content\") pod \"community-operators-rkh5c\" (UID: \"d51b889b-7485-4c32-84de-3ddfd7ce23e9\") " pod="openshift-marketplace/community-operators-rkh5c"
Dec 10 12:18:01 crc kubenswrapper[4689]: E1210 12:18:01.403560 4689 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-10 12:18:01.903548014 +0000 UTC m=+149.691629152 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jw2qz" (UID: "dab0bc5b-ec14-46fa-9304-ab1b3b556eeb") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 10 12:18:01 crc kubenswrapper[4689]: I1210 12:18:01.425056 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-85rm9"]
Dec 10 12:18:01 crc kubenswrapper[4689]: I1210 12:18:01.426472 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-85rm9"
Dec 10 12:18:01 crc kubenswrapper[4689]: I1210 12:18:01.426754 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5sjfd\" (UniqueName: \"kubernetes.io/projected/d51b889b-7485-4c32-84de-3ddfd7ce23e9-kube-api-access-5sjfd\") pod \"community-operators-rkh5c\" (UID: \"d51b889b-7485-4c32-84de-3ddfd7ce23e9\") " pod="openshift-marketplace/community-operators-rkh5c"
Dec 10 12:18:01 crc kubenswrapper[4689]: I1210 12:18:01.458437 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-85rm9"]
Dec 10 12:18:01 crc kubenswrapper[4689]: I1210 12:18:01.458908 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-mdqrt" podStartSLOduration=120.458891971 podStartE2EDuration="2m0.458891971s" podCreationTimestamp="2025-12-10 12:16:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 12:18:01.454656956 +0000 UTC m=+149.242738094" watchObservedRunningTime="2025-12-10 12:18:01.458891971 +0000 UTC m=+149.246973109"
Dec 10 12:18:01 crc kubenswrapper[4689]: I1210 12:18:01.503847 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 10 12:18:01 crc kubenswrapper[4689]: E1210 12:18:01.505542 4689 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-10 12:18:02.005523021 +0000 UTC m=+149.793604169 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 10 12:18:01 crc kubenswrapper[4689]: I1210 12:18:01.571487 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-rkh5c"
Dec 10 12:18:01 crc kubenswrapper[4689]: I1210 12:18:01.607938 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2d87c861-5bb2-4f79-8959-b2cebc28156d-catalog-content\") pod \"certified-operators-85rm9\" (UID: \"2d87c861-5bb2-4f79-8959-b2cebc28156d\") " pod="openshift-marketplace/certified-operators-85rm9"
Dec 10 12:18:01 crc kubenswrapper[4689]: I1210 12:18:01.608064 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tknxs\" (UniqueName: \"kubernetes.io/projected/2d87c861-5bb2-4f79-8959-b2cebc28156d-kube-api-access-tknxs\") pod \"certified-operators-85rm9\" (UID: \"2d87c861-5bb2-4f79-8959-b2cebc28156d\") " pod="openshift-marketplace/certified-operators-85rm9"
Dec 10 12:18:01 crc kubenswrapper[4689]: I1210 12:18:01.608135 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jw2qz\" (UID: \"dab0bc5b-ec14-46fa-9304-ab1b3b556eeb\") " pod="openshift-image-registry/image-registry-697d97f7c8-jw2qz"
Dec 10 12:18:01 crc kubenswrapper[4689]: I1210 12:18:01.608163 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2d87c861-5bb2-4f79-8959-b2cebc28156d-utilities\") pod \"certified-operators-85rm9\" (UID: \"2d87c861-5bb2-4f79-8959-b2cebc28156d\") " pod="openshift-marketplace/certified-operators-85rm9"
Dec 10 12:18:01 crc kubenswrapper[4689]: E1210 12:18:01.608469 4689 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-10 12:18:02.10845447 +0000 UTC m=+149.896535608 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jw2qz" (UID: "dab0bc5b-ec14-46fa-9304-ab1b3b556eeb") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 10 12:18:01 crc kubenswrapper[4689]: I1210 12:18:01.643023 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-p7d6k"]
Dec 10 12:18:01 crc kubenswrapper[4689]: I1210 12:18:01.648275 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-p7d6k"
Dec 10 12:18:01 crc kubenswrapper[4689]: I1210 12:18:01.686047 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-p7d6k"]
Dec 10 12:18:01 crc kubenswrapper[4689]: I1210 12:18:01.715059 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 10 12:18:01 crc kubenswrapper[4689]: I1210 12:18:01.715177 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e6304a1d-54e7-4f3b-878d-456b88936891-utilities\") pod \"community-operators-p7d6k\" (UID: \"e6304a1d-54e7-4f3b-878d-456b88936891\") " pod="openshift-marketplace/community-operators-p7d6k"
Dec 10 12:18:01 crc kubenswrapper[4689]: I1210 12:18:01.715205 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t2xk6\" (UniqueName: \"kubernetes.io/projected/e6304a1d-54e7-4f3b-878d-456b88936891-kube-api-access-t2xk6\") pod \"community-operators-p7d6k\" (UID: \"e6304a1d-54e7-4f3b-878d-456b88936891\") " pod="openshift-marketplace/community-operators-p7d6k"
Dec 10 12:18:01 crc kubenswrapper[4689]: I1210 12:18:01.715229 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2d87c861-5bb2-4f79-8959-b2cebc28156d-utilities\") pod \"certified-operators-85rm9\" (UID: \"2d87c861-5bb2-4f79-8959-b2cebc28156d\") " pod="openshift-marketplace/certified-operators-85rm9"
Dec 10 12:18:01 crc kubenswrapper[4689]: I1210 12:18:01.715267 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2d87c861-5bb2-4f79-8959-b2cebc28156d-catalog-content\") pod \"certified-operators-85rm9\" (UID: \"2d87c861-5bb2-4f79-8959-b2cebc28156d\") " pod="openshift-marketplace/certified-operators-85rm9"
Dec 10 12:18:01 crc kubenswrapper[4689]: I1210 12:18:01.715307 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tknxs\" (UniqueName: \"kubernetes.io/projected/2d87c861-5bb2-4f79-8959-b2cebc28156d-kube-api-access-tknxs\") pod \"certified-operators-85rm9\" (UID: \"2d87c861-5bb2-4f79-8959-b2cebc28156d\") " pod="openshift-marketplace/certified-operators-85rm9"
Dec 10 12:18:01 crc kubenswrapper[4689]: I1210 12:18:01.715327 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e6304a1d-54e7-4f3b-878d-456b88936891-catalog-content\") pod \"community-operators-p7d6k\" (UID: \"e6304a1d-54e7-4f3b-878d-456b88936891\") " pod="openshift-marketplace/community-operators-p7d6k"
Dec 10 12:18:01 crc kubenswrapper[4689]: E1210 12:18:01.715448 4689 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-10 12:18:02.215433101 +0000 UTC m=+150.003514239 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 10 12:18:01 crc kubenswrapper[4689]: I1210 12:18:01.723542 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2d87c861-5bb2-4f79-8959-b2cebc28156d-utilities\") pod \"certified-operators-85rm9\" (UID: \"2d87c861-5bb2-4f79-8959-b2cebc28156d\") " pod="openshift-marketplace/certified-operators-85rm9"
Dec 10 12:18:01 crc kubenswrapper[4689]: I1210 12:18:01.724505 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2d87c861-5bb2-4f79-8959-b2cebc28156d-catalog-content\") pod \"certified-operators-85rm9\" (UID: \"2d87c861-5bb2-4f79-8959-b2cebc28156d\") " pod="openshift-marketplace/certified-operators-85rm9"
Dec 10 12:18:01 crc kubenswrapper[4689]: I1210 12:18:01.772272 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tknxs\" (UniqueName: \"kubernetes.io/projected/2d87c861-5bb2-4f79-8959-b2cebc28156d-kube-api-access-tknxs\") pod \"certified-operators-85rm9\" (UID: \"2d87c861-5bb2-4f79-8959-b2cebc28156d\") " pod="openshift-marketplace/certified-operators-85rm9"
Dec 10 12:18:01 crc kubenswrapper[4689]: I1210 12:18:01.772483 4689 patch_prober.go:28] interesting pod/router-default-5444994796-xmwb5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Dec 10 12:18:01 crc kubenswrapper[4689]: [-]has-synced failed: reason withheld
Dec 10 12:18:01 crc kubenswrapper[4689]: [+]process-running ok
Dec 10 12:18:01 crc kubenswrapper[4689]: healthz check failed
Dec 10 12:18:01 crc kubenswrapper[4689]: I1210 12:18:01.772521 4689 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-xmwb5" podUID="265037d4-b03f-4d1b-a643-5b5eb4e59738" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Dec 10 12:18:01 crc kubenswrapper[4689]: I1210 12:18:01.816037 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e6304a1d-54e7-4f3b-878d-456b88936891-catalog-content\") pod \"community-operators-p7d6k\" (UID: \"e6304a1d-54e7-4f3b-878d-456b88936891\") " pod="openshift-marketplace/community-operators-p7d6k"
Dec 10 12:18:01 crc kubenswrapper[4689]: I1210 12:18:01.816367 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jw2qz\" (UID: \"dab0bc5b-ec14-46fa-9304-ab1b3b556eeb\") " pod="openshift-image-registry/image-registry-697d97f7c8-jw2qz"
Dec 10 12:18:01 crc kubenswrapper[4689]: I1210 12:18:01.816393 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e6304a1d-54e7-4f3b-878d-456b88936891-utilities\") pod \"community-operators-p7d6k\" (UID: \"e6304a1d-54e7-4f3b-878d-456b88936891\") " pod="openshift-marketplace/community-operators-p7d6k"
Dec 10 12:18:01 crc kubenswrapper[4689]: I1210 12:18:01.816411 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t2xk6\" (UniqueName: \"kubernetes.io/projected/e6304a1d-54e7-4f3b-878d-456b88936891-kube-api-access-t2xk6\") pod \"community-operators-p7d6k\" (UID: \"e6304a1d-54e7-4f3b-878d-456b88936891\") " pod="openshift-marketplace/community-operators-p7d6k"
Dec 10 12:18:01 crc kubenswrapper[4689]: E1210 12:18:01.816877 4689 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-10 12:18:02.316866043 +0000 UTC m=+150.104947171 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jw2qz" (UID: "dab0bc5b-ec14-46fa-9304-ab1b3b556eeb") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 10 12:18:01 crc kubenswrapper[4689]: I1210 12:18:01.817048 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e6304a1d-54e7-4f3b-878d-456b88936891-catalog-content\") pod \"community-operators-p7d6k\" (UID: \"e6304a1d-54e7-4f3b-878d-456b88936891\") " pod="openshift-marketplace/community-operators-p7d6k"
Dec 10 12:18:01 crc kubenswrapper[4689]: I1210 12:18:01.817135 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e6304a1d-54e7-4f3b-878d-456b88936891-utilities\") pod \"community-operators-p7d6k\" (UID: \"e6304a1d-54e7-4f3b-878d-456b88936891\") " pod="openshift-marketplace/community-operators-p7d6k"
Dec 10 12:18:01 crc kubenswrapper[4689]: I1210 12:18:01.863746 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t2xk6\" (UniqueName: \"kubernetes.io/projected/e6304a1d-54e7-4f3b-878d-456b88936891-kube-api-access-t2xk6\") pod \"community-operators-p7d6k\" (UID: \"e6304a1d-54e7-4f3b-878d-456b88936891\") " pod="openshift-marketplace/community-operators-p7d6k"
Dec 10 12:18:01 crc kubenswrapper[4689]: I1210 12:18:01.922677 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 10 12:18:01 crc kubenswrapper[4689]: E1210 12:18:01.923142 4689 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-10 12:18:02.423123825 +0000 UTC m=+150.211204963 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 10 12:18:01 crc kubenswrapper[4689]: I1210 12:18:01.985055 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-mdqrt"
Dec 10 12:18:01 crc kubenswrapper[4689]: I1210 12:18:01.985097 4689 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-mdqrt"
Dec 10 12:18:01 crc kubenswrapper[4689]: I1210 12:18:01.987264 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-p7d6k"
Dec 10 12:18:02 crc kubenswrapper[4689]: I1210 12:18:02.037856 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jw2qz\" (UID: \"dab0bc5b-ec14-46fa-9304-ab1b3b556eeb\") " pod="openshift-image-registry/image-registry-697d97f7c8-jw2qz"
Dec 10 12:18:02 crc kubenswrapper[4689]: E1210 12:18:02.038626 4689 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-10 12:18:02.538594135 +0000 UTC m=+150.326675273 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jw2qz" (UID: "dab0bc5b-ec14-46fa-9304-ab1b3b556eeb") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 10 12:18:02 crc kubenswrapper[4689]: I1210 12:18:02.054742 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-ks26t"]
Dec 10 12:18:02 crc kubenswrapper[4689]: I1210 12:18:02.071582 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-85rm9"
Dec 10 12:18:02 crc kubenswrapper[4689]: I1210 12:18:02.100147 4689 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-km6qw container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.41:5443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Dec 10 12:18:02 crc kubenswrapper[4689]: I1210 12:18:02.100221 4689 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-km6qw" podUID="3d09bb78-b81a-4558-8433-192e7fc846df" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.41:5443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Dec 10 12:18:02 crc kubenswrapper[4689]: I1210 12:18:02.140807 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 10 12:18:02 crc kubenswrapper[4689]: E1210 12:18:02.141171 4689 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-10 12:18:02.641157335 +0000 UTC m=+150.429238473 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 10 12:18:02 crc kubenswrapper[4689]: I1210 12:18:02.217422 4689 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-mdqrt"
Dec 10 12:18:02 crc kubenswrapper[4689]: I1210 12:18:02.243616 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jw2qz\" (UID: \"dab0bc5b-ec14-46fa-9304-ab1b3b556eeb\") " pod="openshift-image-registry/image-registry-697d97f7c8-jw2qz"
Dec 10 12:18:02 crc kubenswrapper[4689]: E1210 12:18:02.244157 4689 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-10 12:18:02.744145266 +0000 UTC m=+150.532226404 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jw2qz" (UID: "dab0bc5b-ec14-46fa-9304-ab1b3b556eeb") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 10 12:18:02 crc kubenswrapper[4689]: I1210 12:18:02.347666 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 10 12:18:02 crc kubenswrapper[4689]: E1210 12:18:02.347984 4689 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-10 12:18:02.847954099 +0000 UTC m=+150.636035237 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 10 12:18:02 crc kubenswrapper[4689]: I1210 12:18:02.361243 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-8s58v" event={"ID":"d5c163ba-fbd3-49b8-8b2f-ad0de7210e6f","Type":"ContainerStarted","Data":"81836398f1e35371707ed838e9c8dbc19733a5838ac339a9e81870ff455bdcaa"}
Dec 10 12:18:02 crc kubenswrapper[4689]: I1210 12:18:02.369009 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-7v6dg" event={"ID":"4ec880c8-61ec-4f66-976e-b656d28308a9","Type":"ContainerStarted","Data":"9f3db1bfb53ea59a94fa613dd68afd78d3396479d74a10ac1f5c91b0c4ea2c6b"}
Dec 10 12:18:02 crc kubenswrapper[4689]: I1210 12:18:02.419467 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-tzqjc" event={"ID":"5c541a72-1484-41a6-b251-748306e1068d","Type":"ContainerStarted","Data":"a8d2270c183f8dc57ce8849a5765b5cb27a1e339fe0b157dc263caa871068af1"}
Dec 10 12:18:02 crc kubenswrapper[4689]: I1210 12:18:02.420249 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-tzqjc"
Dec 10 12:18:02 crc kubenswrapper[4689]: I1210 12:18:02.437715 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-7v6dg" podStartSLOduration=121.437695943 podStartE2EDuration="2m1.437695943s" podCreationTimestamp="2025-12-10 12:16:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 12:18:02.413321601 +0000 UTC m=+150.201402739" watchObservedRunningTime="2025-12-10 12:18:02.437695943 +0000 UTC m=+150.225777081"
Dec 10 12:18:02 crc kubenswrapper[4689]: I1210 12:18:02.438352 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-rkh5c"]
Dec 10 12:18:02 crc kubenswrapper[4689]: I1210 12:18:02.450350 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jw2qz\" (UID: \"dab0bc5b-ec14-46fa-9304-ab1b3b556eeb\") " pod="openshift-image-registry/image-registry-697d97f7c8-jw2qz"
Dec 10 12:18:02 crc kubenswrapper[4689]: E1210 12:18:02.451289 4689 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-10 12:18:02.951274227 +0000 UTC m=+150.739355365 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jw2qz" (UID: "dab0bc5b-ec14-46fa-9304-ab1b3b556eeb") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 10 12:18:02 crc kubenswrapper[4689]: I1210 12:18:02.479227 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-qc8tj" event={"ID":"e9883663-c0ff-4454-ba8f-caed05881365","Type":"ContainerStarted","Data":"1160c3b1d2d792f3dcb90ef3d792c1ecdfb6fff4de7ce208e2c9cdbc6b973a0a"}
Dec 10 12:18:02 crc kubenswrapper[4689]: I1210 12:18:02.517807 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-tzqjc" podStartSLOduration=8.517788939 podStartE2EDuration="8.517788939s" podCreationTimestamp="2025-12-10 12:17:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 12:18:02.464260308 +0000 UTC m=+150.252341446" watchObservedRunningTime="2025-12-10 12:18:02.517788939 +0000 UTC m=+150.305870077"
Dec 10 12:18:02 crc kubenswrapper[4689]: I1210 12:18:02.518218 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-qbqzn" event={"ID":"be303c23-fd28-48b4-9463-65c5167285fe","Type":"ContainerStarted","Data":"b58c54319570e216aeabc732b48dc11a395922c49123ad76311084f357208fb9"}
Dec 10 12:18:02 crc kubenswrapper[4689]: I1210 12:18:02.518835 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-qbqzn"
Dec 10 12:18:02 crc kubenswrapper[4689]: I1210 12:18:02.551719 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 10 12:18:02 crc kubenswrapper[4689]: I1210 12:18:02.552052 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-lkfnf" event={"ID":"c7fe8224-0ecc-4289-9a5d-107f9d6d00b5","Type":"ContainerStarted","Data":"7a3264ae8614e885a9fcaab319db1cc2c6c13c0dd8bfe720e9f066de5099afdf"}
Dec 10 12:18:02 crc kubenswrapper[4689]: E1210 12:18:02.552668 4689 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-10 12:18:03.052651859 +0000 UTC m=+150.840732997 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 10 12:18:02 crc kubenswrapper[4689]: I1210 12:18:02.590575 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-744455d44c-qc8tj" podStartSLOduration=121.590556915 podStartE2EDuration="2m1.590556915s" podCreationTimestamp="2025-12-10 12:16:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 12:18:02.519134372 +0000 UTC m=+150.307215510" watchObservedRunningTime="2025-12-10 12:18:02.590556915 +0000 UTC m=+150.378638053"
Dec 10 12:18:02 crc kubenswrapper[4689]: I1210 12:18:02.591052 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-qbqzn" podStartSLOduration=121.591047876 podStartE2EDuration="2m1.591047876s" podCreationTimestamp="2025-12-10 12:16:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 12:18:02.589390306 +0000 UTC m=+150.377471434" watchObservedRunningTime="2025-12-10 12:18:02.591047876 +0000 UTC m=+150.379129004"
Dec 10 12:18:02 crc kubenswrapper[4689]: I1210 12:18:02.591236 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ks26t" event={"ID":"c8d409aa-8f6d-4ed5-816c-e572e371d425","Type":"ContainerStarted","Data":"f5e09495901baaa3f79dc6dadbb3de39ba949a1d04659b9e4b8f36b6e25185bf"}
Dec 10 12:18:02 crc kubenswrapper[4689]: I1210 12:18:02.632097 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-9rk82" event={"ID":"7a4e09c2-7d4d-4078-b9a9-353f4c8063a4","Type":"ContainerStarted","Data":"c29da3452a5f5707c58b9e5041ce0d6d3cd3bcfd0aa183630b5e1b6a84c25110"}
Dec 10 12:18:02 crc kubenswrapper[4689]: I1210 12:18:02.654507 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jw2qz\" (UID: \"dab0bc5b-ec14-46fa-9304-ab1b3b556eeb\") " pod="openshift-image-registry/image-registry-697d97f7c8-jw2qz"
Dec 10 12:18:02 crc kubenswrapper[4689]: E1210 12:18:02.655732 4689 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-10 12:18:03.155717802 +0000 UTC m=+150.943798940 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jw2qz" (UID: "dab0bc5b-ec14-46fa-9304-ab1b3b556eeb") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 10 12:18:02 crc kubenswrapper[4689]: I1210 12:18:02.659513 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-g8snw" event={"ID":"7dbd6f55-bc16-49a6-b385-1ba0b50003cd","Type":"ContainerStarted","Data":"d49896d0022e1814c9184b52f78cdfa7137af7453d9ad54e76734409ae980470"}
Dec 10 12:18:02 crc kubenswrapper[4689]: I1210 12:18:02.659894 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-lkfnf" podStartSLOduration=121.659872334 podStartE2EDuration="2m1.659872334s" podCreationTimestamp="2025-12-10 12:16:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 12:18:02.648280429 +0000 UTC m=+150.436361567" watchObservedRunningTime="2025-12-10 12:18:02.659872334 +0000 UTC m=+150.447953472"
Dec 10 12:18:02 crc kubenswrapper[4689]: I1210 12:18:02.676668 4689 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-9wlmb container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.29:8080/healthz\": dial tcp 10.217.0.29:8080: connect: connection refused" start-of-body=
Dec 10 12:18:02 crc kubenswrapper[4689]: I1210 12:18:02.676719 4689 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-9wlmb" podUID="88d80bbe-a9ab-4c91-b0b2-485e106dd150" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.29:8080/healthz\": dial tcp 10.217.0.29:8080: connect: connection refused"
Dec 10 12:18:02 crc kubenswrapper[4689]: I1210 12:18:02.691876 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-mdqrt"
Dec 10 12:18:02 crc kubenswrapper[4689]: I1210 12:18:02.692027 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-km6qw"
Dec 10 12:18:02 crc kubenswrapper[4689]: I1210 12:18:02.718393 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-69gng"
Dec 10 12:18:02 crc kubenswrapper[4689]: I1210 12:18:02.759170 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 10 12:18:02 crc kubenswrapper[4689]: E1210 12:18:02.760101 4689 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-10 12:18:03.260080998 +0000 UTC m=+151.048162136 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 10 12:18:02 crc kubenswrapper[4689]: I1210 12:18:02.761532 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-85rm9"]
Dec 10 12:18:02 crc kubenswrapper[4689]: I1210 12:18:02.832127 4689 patch_prober.go:28] interesting pod/router-default-5444994796-xmwb5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Dec 10 12:18:02 crc kubenswrapper[4689]: [-]has-synced failed: reason withheld
Dec 10 12:18:02 crc kubenswrapper[4689]: [+]process-running ok
Dec 10 12:18:02 crc kubenswrapper[4689]: healthz check failed
Dec 10 12:18:02 crc kubenswrapper[4689]: I1210 12:18:02.832171 4689 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-xmwb5" podUID="265037d4-b03f-4d1b-a643-5b5eb4e59738" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Dec 10 12:18:02 crc kubenswrapper[4689]: I1210 12:18:02.861527 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jw2qz\" (UID: \"dab0bc5b-ec14-46fa-9304-ab1b3b556eeb\") " pod="openshift-image-registry/image-registry-697d97f7c8-jw2qz"
Dec 10 12:18:02 crc kubenswrapper[4689]: E1210 12:18:02.862045 4689 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-10 12:18:03.362034223 +0000 UTC m=+151.150115361 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jw2qz" (UID: "dab0bc5b-ec14-46fa-9304-ab1b3b556eeb") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 10 12:18:02 crc kubenswrapper[4689]: I1210 12:18:02.893179 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-9rk82" podStartSLOduration=121.893154111 podStartE2EDuration="2m1.893154111s" podCreationTimestamp="2025-12-10 12:16:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 12:18:02.883068013 +0000 UTC m=+150.671149151" watchObservedRunningTime="2025-12-10 12:18:02.893154111 +0000 UTC m=+150.681235249"
Dec 10 12:18:02 crc kubenswrapper[4689]: I1210 12:18:02.986035 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 10 12:18:02 crc kubenswrapper[4689]: E1210 12:18:02.987296 4689 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-10 12:18:03.487275043 +0000 UTC m=+151.275356181 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 10 12:18:03 crc kubenswrapper[4689]: I1210 12:18:03.074011 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-6whbg"]
Dec 10 12:18:03 crc kubenswrapper[4689]: I1210 12:18:03.080297 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6whbg"
Dec 10 12:18:03 crc kubenswrapper[4689]: I1210 12:18:03.094689 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb"
Dec 10 12:18:03 crc kubenswrapper[4689]: I1210 12:18:03.095776 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/23db100b-85ac-48e2-834b-741c9d94cf8f-utilities\") pod \"redhat-marketplace-6whbg\" (UID: \"23db100b-85ac-48e2-834b-741c9d94cf8f\") " pod="openshift-marketplace/redhat-marketplace-6whbg"
Dec 10 12:18:03 crc kubenswrapper[4689]: I1210 12:18:03.095887 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tdscs\" (UniqueName: \"kubernetes.io/projected/23db100b-85ac-48e2-834b-741c9d94cf8f-kube-api-access-tdscs\") pod \"redhat-marketplace-6whbg\" (UID: \"23db100b-85ac-48e2-834b-741c9d94cf8f\") " pod="openshift-marketplace/redhat-marketplace-6whbg"
Dec 10 12:18:03 crc kubenswrapper[4689]: I1210 12:18:03.095919 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/23db100b-85ac-48e2-834b-741c9d94cf8f-catalog-content\") pod \"redhat-marketplace-6whbg\" (UID: \"23db100b-85ac-48e2-834b-741c9d94cf8f\") " pod="openshift-marketplace/redhat-marketplace-6whbg"
Dec 10 12:18:03 crc kubenswrapper[4689]: I1210 12:18:03.095961 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jw2qz\" (UID: \"dab0bc5b-ec14-46fa-9304-ab1b3b556eeb\") " pod="openshift-image-registry/image-registry-697d97f7c8-jw2qz"
Dec 10 12:18:03 crc kubenswrapper[4689]: E1210 12:18:03.096229 4689 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-10 12:18:03.596216502 +0000 UTC m=+151.384297640 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jw2qz" (UID: "dab0bc5b-ec14-46fa-9304-ab1b3b556eeb") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 10 12:18:03 crc kubenswrapper[4689]: I1210 12:18:03.105044 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-6whbg"]
Dec 10 12:18:03 crc kubenswrapper[4689]: I1210 12:18:03.146910 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-76f77b778f-g8snw" podStartSLOduration=123.146894262 podStartE2EDuration="2m3.146894262s" podCreationTimestamp="2025-12-10 12:16:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 12:18:03.146288458 +0000 UTC m=+150.934369596" watchObservedRunningTime="2025-12-10 12:18:03.146894262 +0000 UTC m=+150.934975400"
Dec 10 12:18:03 crc kubenswrapper[4689]: I1210 12:18:03.198657 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 10 12:18:03 crc kubenswrapper[4689]: I1210 12:18:03.198863 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/23db100b-85ac-48e2-834b-741c9d94cf8f-utilities\") pod \"redhat-marketplace-6whbg\" (UID: \"23db100b-85ac-48e2-834b-741c9d94cf8f\") " pod="openshift-marketplace/redhat-marketplace-6whbg"
Dec 10 12:18:03 crc kubenswrapper[4689]: I1210 12:18:03.198941 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tdscs\" (UniqueName: \"kubernetes.io/projected/23db100b-85ac-48e2-834b-741c9d94cf8f-kube-api-access-tdscs\") pod \"redhat-marketplace-6whbg\" (UID: \"23db100b-85ac-48e2-834b-741c9d94cf8f\") " pod="openshift-marketplace/redhat-marketplace-6whbg"
Dec 10 12:18:03 crc kubenswrapper[4689]: I1210 12:18:03.198988 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/23db100b-85ac-48e2-834b-741c9d94cf8f-catalog-content\") pod \"redhat-marketplace-6whbg\" (UID: \"23db100b-85ac-48e2-834b-741c9d94cf8f\") " pod="openshift-marketplace/redhat-marketplace-6whbg"
Dec 10 12:18:03 crc kubenswrapper[4689]: I1210 12:18:03.199373 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/23db100b-85ac-48e2-834b-741c9d94cf8f-catalog-content\") pod \"redhat-marketplace-6whbg\" (UID: \"23db100b-85ac-48e2-834b-741c9d94cf8f\") " pod="openshift-marketplace/redhat-marketplace-6whbg"
Dec 10 12:18:03 crc kubenswrapper[4689]: E1210 12:18:03.199768 4689 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-10 12:18:03.699753697 +0000 UTC m=+151.487834835 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 10 12:18:03 crc kubenswrapper[4689]: I1210 12:18:03.200004 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/23db100b-85ac-48e2-834b-741c9d94cf8f-utilities\") pod \"redhat-marketplace-6whbg\" (UID: \"23db100b-85ac-48e2-834b-741c9d94cf8f\") " pod="openshift-marketplace/redhat-marketplace-6whbg"
Dec 10 12:18:03 crc kubenswrapper[4689]: I1210 12:18:03.264753 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tdscs\" (UniqueName: \"kubernetes.io/projected/23db100b-85ac-48e2-834b-741c9d94cf8f-kube-api-access-tdscs\") pod \"redhat-marketplace-6whbg\" (UID: \"23db100b-85ac-48e2-834b-741c9d94cf8f\") " pod="openshift-marketplace/redhat-marketplace-6whbg"
Dec 10 12:18:03 crc kubenswrapper[4689]: I1210 12:18:03.301408 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jw2qz\" (UID: \"dab0bc5b-ec14-46fa-9304-ab1b3b556eeb\") " pod="openshift-image-registry/image-registry-697d97f7c8-jw2qz"
Dec 10 12:18:03 crc kubenswrapper[4689]: E1210 12:18:03.301893 4689 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-10 12:18:03.801872227 +0000 UTC m=+151.589953435 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jw2qz" (UID: "dab0bc5b-ec14-46fa-9304-ab1b3b556eeb") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 10 12:18:03 crc kubenswrapper[4689]: I1210 12:18:03.321515 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-p7d6k"]
Dec 10 12:18:03 crc kubenswrapper[4689]: I1210 12:18:03.406920 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 10 12:18:03 crc kubenswrapper[4689]: E1210 12:18:03.407284 4689 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-10 12:18:03.907268937 +0000 UTC m=+151.695350075 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 10 12:18:03 crc kubenswrapper[4689]: I1210 12:18:03.439066 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-727gf"]
Dec 10 12:18:03 crc kubenswrapper[4689]: I1210 12:18:03.439670 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6whbg"
Dec 10 12:18:03 crc kubenswrapper[4689]: I1210 12:18:03.440100 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-727gf"
Dec 10 12:18:03 crc kubenswrapper[4689]: I1210 12:18:03.458405 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-727gf"]
Dec 10 12:18:03 crc kubenswrapper[4689]: I1210 12:18:03.508988 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fd6edc95-9b5d-4438-b427-fd07f62090b7-catalog-content\") pod \"redhat-marketplace-727gf\" (UID: \"fd6edc95-9b5d-4438-b427-fd07f62090b7\") " pod="openshift-marketplace/redhat-marketplace-727gf"
Dec 10 12:18:03 crc kubenswrapper[4689]: I1210 12:18:03.509033 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c8scv\" (UniqueName: \"kubernetes.io/projected/fd6edc95-9b5d-4438-b427-fd07f62090b7-kube-api-access-c8scv\") pod \"redhat-marketplace-727gf\" (UID: \"fd6edc95-9b5d-4438-b427-fd07f62090b7\") " pod="openshift-marketplace/redhat-marketplace-727gf"
Dec 10 12:18:03 crc kubenswrapper[4689]: I1210 12:18:03.509065 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fd6edc95-9b5d-4438-b427-fd07f62090b7-utilities\") pod \"redhat-marketplace-727gf\" (UID: \"fd6edc95-9b5d-4438-b427-fd07f62090b7\") " pod="openshift-marketplace/redhat-marketplace-727gf"
Dec 10 12:18:03 crc kubenswrapper[4689]: I1210 12:18:03.509147 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jw2qz\" (UID: \"dab0bc5b-ec14-46fa-9304-ab1b3b556eeb\") " pod="openshift-image-registry/image-registry-697d97f7c8-jw2qz"
Dec 10 12:18:03 crc kubenswrapper[4689]: E1210 12:18:03.509408 4689 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-10 12:18:04.009396197 +0000 UTC m=+151.797477335 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jw2qz" (UID: "dab0bc5b-ec14-46fa-9304-ab1b3b556eeb") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 10 12:18:03 crc kubenswrapper[4689]: I1210 12:18:03.610319 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 10 12:18:03 crc kubenswrapper[4689]: E1210 12:18:03.610454 4689 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-10 12:18:04.11042108 +0000 UTC m=+151.898502208 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 10 12:18:03 crc kubenswrapper[4689]: I1210 12:18:03.610741 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fd6edc95-9b5d-4438-b427-fd07f62090b7-catalog-content\") pod \"redhat-marketplace-727gf\" (UID: \"fd6edc95-9b5d-4438-b427-fd07f62090b7\") " pod="openshift-marketplace/redhat-marketplace-727gf"
Dec 10 12:18:03 crc kubenswrapper[4689]: I1210 12:18:03.610796 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c8scv\" (UniqueName: \"kubernetes.io/projected/fd6edc95-9b5d-4438-b427-fd07f62090b7-kube-api-access-c8scv\") pod \"redhat-marketplace-727gf\" (UID: \"fd6edc95-9b5d-4438-b427-fd07f62090b7\") " pod="openshift-marketplace/redhat-marketplace-727gf"
Dec 10 12:18:03 crc kubenswrapper[4689]: I1210 12:18:03.610843 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fd6edc95-9b5d-4438-b427-fd07f62090b7-utilities\") pod \"redhat-marketplace-727gf\" (UID: \"fd6edc95-9b5d-4438-b427-fd07f62090b7\") " pod="openshift-marketplace/redhat-marketplace-727gf"
Dec 10 12:18:03 crc kubenswrapper[4689]: I1210 12:18:03.610955 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jw2qz\" (UID: \"dab0bc5b-ec14-46fa-9304-ab1b3b556eeb\") " pod="openshift-image-registry/image-registry-697d97f7c8-jw2qz"
Dec 10 12:18:03 crc kubenswrapper[4689]: E1210 12:18:03.611368 4689 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-10 12:18:04.111361083 +0000 UTC m=+151.899442221 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jw2qz" (UID: "dab0bc5b-ec14-46fa-9304-ab1b3b556eeb") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 10 12:18:03 crc kubenswrapper[4689]: I1210 12:18:03.612361 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fd6edc95-9b5d-4438-b427-fd07f62090b7-utilities\") pod \"redhat-marketplace-727gf\" (UID: \"fd6edc95-9b5d-4438-b427-fd07f62090b7\") " pod="openshift-marketplace/redhat-marketplace-727gf"
Dec 10 12:18:03 crc kubenswrapper[4689]: I1210 12:18:03.612557 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fd6edc95-9b5d-4438-b427-fd07f62090b7-catalog-content\") pod \"redhat-marketplace-727gf\" (UID: \"fd6edc95-9b5d-4438-b427-fd07f62090b7\") " pod="openshift-marketplace/redhat-marketplace-727gf"
Dec 10 12:18:03 crc kubenswrapper[4689]: I1210 12:18:03.642904 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-fz4nm"
Dec 10 12:18:03 crc kubenswrapper[4689]: I1210 12:18:03.643833 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c8scv\" (UniqueName: \"kubernetes.io/projected/fd6edc95-9b5d-4438-b427-fd07f62090b7-kube-api-access-c8scv\") pod \"redhat-marketplace-727gf\" (UID: \"fd6edc95-9b5d-4438-b427-fd07f62090b7\") " pod="openshift-marketplace/redhat-marketplace-727gf"
Dec 10 12:18:03 crc kubenswrapper[4689]: I1210 12:18:03.695486 4689 generic.go:334] "Generic (PLEG): container finished" podID="2d87c861-5bb2-4f79-8959-b2cebc28156d" containerID="46d2e3166a55adf8a416e4d84daf987c0a52189f2858d28a329073a726b583a4" exitCode=0
Dec 10 12:18:03 crc kubenswrapper[4689]: I1210 12:18:03.695792 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-85rm9" event={"ID":"2d87c861-5bb2-4f79-8959-b2cebc28156d","Type":"ContainerDied","Data":"46d2e3166a55adf8a416e4d84daf987c0a52189f2858d28a329073a726b583a4"}
Dec 10 12:18:03 crc kubenswrapper[4689]: I1210 12:18:03.695817 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-85rm9" event={"ID":"2d87c861-5bb2-4f79-8959-b2cebc28156d","Type":"ContainerStarted","Data":"2686e129fd9e5dcc29247adfd1ed0f8778af60e5a30f76cf45f97ca92fb32b25"}
Dec 10 12:18:03 crc kubenswrapper[4689]: I1210 12:18:03.706085 4689 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Dec 10 12:18:03 crc kubenswrapper[4689]: I1210 12:18:03.711742 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 10 12:18:03 crc kubenswrapper[4689]: E1210 12:18:03.712594 4689 nestedpendingoperations.go:348] Operation for
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-10 12:18:04.212580801 +0000 UTC m=+152.000661939 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 12:18:03 crc kubenswrapper[4689]: I1210 12:18:03.720521 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-8s58v" event={"ID":"d5c163ba-fbd3-49b8-8b2f-ad0de7210e6f","Type":"ContainerStarted","Data":"f427dfb947344c6f3cea6b1ef4a8c019c349d649a8828e0c8420587b6cd6629b"} Dec 10 12:18:03 crc kubenswrapper[4689]: I1210 12:18:03.722780 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-d7pkc" event={"ID":"abad9838-aab6-4f7c-a042-83924f9e0809","Type":"ContainerStarted","Data":"634333e217c38a91d9c9cb8c28b23db31aa7e5c6bc3db76735973cb2c3ea8c7f"} Dec 10 12:18:03 crc kubenswrapper[4689]: I1210 12:18:03.726586 4689 generic.go:334] "Generic (PLEG): container finished" podID="d51b889b-7485-4c32-84de-3ddfd7ce23e9" containerID="7434631ed32ff2547826b0b814e6ecf84c9617541576bbd66c3d9f6f5f5f9a80" exitCode=0 Dec 10 12:18:03 crc kubenswrapper[4689]: I1210 12:18:03.726633 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rkh5c" event={"ID":"d51b889b-7485-4c32-84de-3ddfd7ce23e9","Type":"ContainerDied","Data":"7434631ed32ff2547826b0b814e6ecf84c9617541576bbd66c3d9f6f5f5f9a80"} Dec 10 12:18:03 crc kubenswrapper[4689]: I1210 12:18:03.726649 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rkh5c" event={"ID":"d51b889b-7485-4c32-84de-3ddfd7ce23e9","Type":"ContainerStarted","Data":"8608b9c25eb95a7262721ca9581607fdc9cf66d8fc9da0d98031f73b537f5f10"} Dec 10 12:18:03 crc kubenswrapper[4689]: I1210 12:18:03.729824 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-p7d6k" event={"ID":"e6304a1d-54e7-4f3b-878d-456b88936891","Type":"ContainerStarted","Data":"d26f31aa876ca991b82606f969cbc83aad05725b62f94c07406f7648cdc77aa9"} Dec 10 12:18:03 crc kubenswrapper[4689]: I1210 12:18:03.732406 4689 generic.go:334] "Generic (PLEG): container finished" podID="c8d409aa-8f6d-4ed5-816c-e572e371d425" containerID="b0919d0932abaa3581310de974beb14f08c4f90cfd4dce78ac7beb7d5ee1ecfb" exitCode=0 Dec 10 12:18:03 crc kubenswrapper[4689]: I1210 12:18:03.733123 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ks26t" event={"ID":"c8d409aa-8f6d-4ed5-816c-e572e371d425","Type":"ContainerDied","Data":"b0919d0932abaa3581310de974beb14f08c4f90cfd4dce78ac7beb7d5ee1ecfb"} Dec 10 12:18:03 crc kubenswrapper[4689]: I1210 12:18:03.747147 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-9wlmb" Dec 10 12:18:03 crc kubenswrapper[4689]: I1210 12:18:03.748383 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-multus/multus-admission-controller-857f4d67dd-8s58v" podStartSLOduration=122.748368664 podStartE2EDuration="2m2.748368664s" podCreationTimestamp="2025-12-10 12:16:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 12:18:03.74414992 +0000 UTC m=+151.532231058" watchObservedRunningTime="2025-12-10 12:18:03.748368664 +0000 UTC m=+151.536449802" Dec 10 12:18:03 crc kubenswrapper[4689]: I1210 12:18:03.762375 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-727gf" Dec 10 12:18:03 crc kubenswrapper[4689]: I1210 12:18:03.770159 4689 patch_prober.go:28] interesting pod/router-default-5444994796-xmwb5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 10 12:18:03 crc kubenswrapper[4689]: [-]has-synced failed: reason withheld Dec 10 12:18:03 crc kubenswrapper[4689]: [+]process-running ok Dec 10 12:18:03 crc kubenswrapper[4689]: healthz check failed Dec 10 12:18:03 crc kubenswrapper[4689]: I1210 12:18:03.770193 4689 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-xmwb5" podUID="265037d4-b03f-4d1b-a643-5b5eb4e59738" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 10 12:18:03 crc kubenswrapper[4689]: I1210 12:18:03.785798 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-6whbg"] Dec 10 12:18:03 crc kubenswrapper[4689]: I1210 12:18:03.813201 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jw2qz\" (UID: \"dab0bc5b-ec14-46fa-9304-ab1b3b556eeb\") " pod="openshift-image-registry/image-registry-697d97f7c8-jw2qz" Dec 10 12:18:03 crc kubenswrapper[4689]: E1210 12:18:03.815470 4689 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-10 12:18:04.31545405 +0000 UTC m=+152.103535188 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jw2qz" (UID: "dab0bc5b-ec14-46fa-9304-ab1b3b556eeb") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 12:18:03 crc kubenswrapper[4689]: I1210 12:18:03.916211 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 10 12:18:03 crc kubenswrapper[4689]: E1210 12:18:03.916347 4689 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-10 12:18:04.416327369 +0000 UTC m=+152.204408507 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 12:18:03 crc kubenswrapper[4689]: I1210 12:18:03.919121 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jw2qz\" (UID: \"dab0bc5b-ec14-46fa-9304-ab1b3b556eeb\") " pod="openshift-image-registry/image-registry-697d97f7c8-jw2qz" Dec 10 12:18:03 crc kubenswrapper[4689]: E1210 12:18:03.919723 4689 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-10 12:18:04.419706062 +0000 UTC m=+152.207787200 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jw2qz" (UID: "dab0bc5b-ec14-46fa-9304-ab1b3b556eeb") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 12:18:03 crc kubenswrapper[4689]: I1210 12:18:03.993574 4689 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock" Dec 10 12:18:04 crc kubenswrapper[4689]: I1210 12:18:04.027691 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 10 12:18:04 crc kubenswrapper[4689]: E1210 12:18:04.028053 4689 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-10 12:18:04.528039566 +0000 UTC m=+152.316120704 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 12:18:04 crc kubenswrapper[4689]: I1210 12:18:04.031658 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-727gf"] Dec 10 12:18:04 crc kubenswrapper[4689]: I1210 12:18:04.050287 4689 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2025-12-10T12:18:03.993601515Z","Handler":null,"Name":""} Dec 10 12:18:04 crc kubenswrapper[4689]: I1210 12:18:04.102897 4689 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0 Dec 10 12:18:04 crc kubenswrapper[4689]: I1210 12:18:04.102937 4689 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock Dec 10 12:18:04 crc kubenswrapper[4689]: I1210 12:18:04.129188 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jw2qz\" (UID: \"dab0bc5b-ec14-46fa-9304-ab1b3b556eeb\") " pod="openshift-image-registry/image-registry-697d97f7c8-jw2qz" Dec 10 12:18:04 crc kubenswrapper[4689]: I1210 12:18:04.134504 4689 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
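The entries above capture the root cause of the repeated mount/unmount failures for pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8: every retry fails with "driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers" until 12:18:03.993, when the kubelet plugin watcher picks up the registration socket under /var/lib/kubelet/plugins_registry/, validates the driver (csi_plugin.go:100/113), and registers it; the very next MountDevice and SetUp attempts succeed, with staging skipped because the driver does not advertise STAGE_UNSTAGE_VOLUME. As an illustration only — a minimal sketch, not the actual node-driver-registrar source — the Go program below mimics the registration handshake visible here, assuming the k8s.io/kubelet pluginregistration v1 API and google.golang.org/grpc; the socket path, driver name, CSI endpoint, and version string are taken from the surrounding entries, everything else is assumed plumbing.

```go
// Minimal sketch of the kubelet plugin-registration handshake seen in the log.
// Assumptions (not taken from the log): the gRPC plumbing mirrors what the
// kubernetes-csi node-driver-registrar sidecar does, using
// k8s.io/kubelet/pkg/apis/pluginregistration/v1 and google.golang.org/grpc.
package main

import (
	"context"
	"log"
	"net"
	"os"

	"google.golang.org/grpc"
	registerapi "k8s.io/kubelet/pkg/apis/pluginregistration/v1"
)

// server answers the two RPCs the kubelet plugin watcher makes after it sees
// a *.sock file appear under /var/lib/kubelet/plugins_registry/.
type server struct {
	name, endpoint string
	versions       []string
}

// GetInfo tells the kubelet which CSI driver lives behind which endpoint;
// these values show up verbatim in the csi_plugin.go entries above.
func (s *server) GetInfo(ctx context.Context, _ *registerapi.InfoRequest) (*registerapi.PluginInfo, error) {
	return &registerapi.PluginInfo{
		Type:              registerapi.CSIPlugin,
		Name:              s.name,     // "kubevirt.io.hostpath-provisioner"
		Endpoint:          s.endpoint, // "/var/lib/kubelet/plugins/csi-hostpath/csi.sock"
		SupportedVersions: s.versions, // ["1.0.0"]
	}, nil
}

// NotifyRegistrationStatus is the kubelet's acknowledgement; on rejection a
// real registrar exits so its pod restarts and re-attempts registration.
func (s *server) NotifyRegistrationStatus(ctx context.Context, st *registerapi.RegistrationStatus) (*registerapi.RegistrationStatusResponse, error) {
	if !st.PluginRegistered {
		log.Fatalf("kubelet rejected registration: %s", st.Error)
	}
	return &registerapi.RegistrationStatusResponse{}, nil
}

func main() {
	// Socket path taken from the plugin_watcher.go entry above.
	sock := "/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock"
	_ = os.Remove(sock) // clear a stale socket from a previous run
	lis, err := net.Listen("unix", sock)
	if err != nil {
		log.Fatal(err)
	}
	s := grpc.NewServer()
	registerapi.RegisterRegistrationServer(s, &server{
		name:     "kubevirt.io.hostpath-provisioner",
		endpoint: "/var/lib/kubelet/plugins/csi-hostpath/csi.sock",
		versions: []string{"1.0.0"},
	})
	log.Fatal(s.Serve(lis))
}
```

In this cluster that service is provided by the registrar container of hostpath-provisioner/csi-hostpathplugin-d7pkc; the kubelet side responds by calling GetInfo, validating the reported name and versions, and invoking NotifyRegistrationStatus, after which the pending volume operations stop hitting the "not found in the list of registered CSI drivers" error on their next 500ms retry, as the succeeding MountDevice/SetUp entries below show.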
Dec 10 12:18:04 crc kubenswrapper[4689]: I1210 12:18:04.134563 4689 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jw2qz\" (UID: \"dab0bc5b-ec14-46fa-9304-ab1b3b556eeb\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount\"" pod="openshift-image-registry/image-registry-697d97f7c8-jw2qz" Dec 10 12:18:04 crc kubenswrapper[4689]: I1210 12:18:04.158434 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jw2qz\" (UID: \"dab0bc5b-ec14-46fa-9304-ab1b3b556eeb\") " pod="openshift-image-registry/image-registry-697d97f7c8-jw2qz" Dec 10 12:18:04 crc kubenswrapper[4689]: I1210 12:18:04.211453 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-9hpt8"] Dec 10 12:18:04 crc kubenswrapper[4689]: I1210 12:18:04.212635 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-9hpt8" Dec 10 12:18:04 crc kubenswrapper[4689]: I1210 12:18:04.214246 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Dec 10 12:18:04 crc kubenswrapper[4689]: I1210 12:18:04.222622 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-9hpt8"] Dec 10 12:18:04 crc kubenswrapper[4689]: I1210 12:18:04.231573 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 10 12:18:04 crc kubenswrapper[4689]: I1210 12:18:04.231890 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2ebd1829-eb7a-4128-ab9d-79a1f75fa39e-catalog-content\") pod \"redhat-operators-9hpt8\" (UID: \"2ebd1829-eb7a-4128-ab9d-79a1f75fa39e\") " pod="openshift-marketplace/redhat-operators-9hpt8" Dec 10 12:18:04 crc kubenswrapper[4689]: I1210 12:18:04.231995 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2ebd1829-eb7a-4128-ab9d-79a1f75fa39e-utilities\") pod \"redhat-operators-9hpt8\" (UID: \"2ebd1829-eb7a-4128-ab9d-79a1f75fa39e\") " pod="openshift-marketplace/redhat-operators-9hpt8" Dec 10 12:18:04 crc kubenswrapper[4689]: I1210 12:18:04.232068 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zlgk4\" (UniqueName: \"kubernetes.io/projected/2ebd1829-eb7a-4128-ab9d-79a1f75fa39e-kube-api-access-zlgk4\") pod \"redhat-operators-9hpt8\" (UID: \"2ebd1829-eb7a-4128-ab9d-79a1f75fa39e\") " pod="openshift-marketplace/redhat-operators-9hpt8" Dec 10 12:18:04 crc kubenswrapper[4689]: I1210 12:18:04.250324 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Dec 10 12:18:04 crc kubenswrapper[4689]: I1210 12:18:04.317080 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-jw2qz" Dec 10 12:18:04 crc kubenswrapper[4689]: I1210 12:18:04.333855 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2ebd1829-eb7a-4128-ab9d-79a1f75fa39e-catalog-content\") pod \"redhat-operators-9hpt8\" (UID: \"2ebd1829-eb7a-4128-ab9d-79a1f75fa39e\") " pod="openshift-marketplace/redhat-operators-9hpt8" Dec 10 12:18:04 crc kubenswrapper[4689]: I1210 12:18:04.333945 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2ebd1829-eb7a-4128-ab9d-79a1f75fa39e-utilities\") pod \"redhat-operators-9hpt8\" (UID: \"2ebd1829-eb7a-4128-ab9d-79a1f75fa39e\") " pod="openshift-marketplace/redhat-operators-9hpt8" Dec 10 12:18:04 crc kubenswrapper[4689]: I1210 12:18:04.334015 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zlgk4\" (UniqueName: \"kubernetes.io/projected/2ebd1829-eb7a-4128-ab9d-79a1f75fa39e-kube-api-access-zlgk4\") pod \"redhat-operators-9hpt8\" (UID: \"2ebd1829-eb7a-4128-ab9d-79a1f75fa39e\") " pod="openshift-marketplace/redhat-operators-9hpt8" Dec 10 12:18:04 crc kubenswrapper[4689]: I1210 12:18:04.334314 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2ebd1829-eb7a-4128-ab9d-79a1f75fa39e-catalog-content\") pod \"redhat-operators-9hpt8\" (UID: \"2ebd1829-eb7a-4128-ab9d-79a1f75fa39e\") " pod="openshift-marketplace/redhat-operators-9hpt8" Dec 10 12:18:04 crc kubenswrapper[4689]: I1210 12:18:04.334378 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2ebd1829-eb7a-4128-ab9d-79a1f75fa39e-utilities\") pod \"redhat-operators-9hpt8\" (UID: \"2ebd1829-eb7a-4128-ab9d-79a1f75fa39e\") " pod="openshift-marketplace/redhat-operators-9hpt8" Dec 10 12:18:04 crc kubenswrapper[4689]: I1210 12:18:04.354537 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zlgk4\" (UniqueName: \"kubernetes.io/projected/2ebd1829-eb7a-4128-ab9d-79a1f75fa39e-kube-api-access-zlgk4\") pod \"redhat-operators-9hpt8\" (UID: \"2ebd1829-eb7a-4128-ab9d-79a1f75fa39e\") " pod="openshift-marketplace/redhat-operators-9hpt8" Dec 10 12:18:04 crc kubenswrapper[4689]: I1210 12:18:04.511910 4689 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f668bae-612b-4b75-9490-919e737c6a3b" path="/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes" Dec 10 12:18:04 crc kubenswrapper[4689]: I1210 12:18:04.512785 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-jw2qz"] Dec 10 12:18:04 crc kubenswrapper[4689]: I1210 12:18:04.603204 4689 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-9hpt8" Dec 10 12:18:04 crc kubenswrapper[4689]: I1210 12:18:04.623081 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-28k8w"] Dec 10 12:18:04 crc kubenswrapper[4689]: I1210 12:18:04.625872 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-28k8w" Dec 10 12:18:04 crc kubenswrapper[4689]: I1210 12:18:04.641956 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-28k8w"] Dec 10 12:18:04 crc kubenswrapper[4689]: I1210 12:18:04.740581 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k6b2s\" (UniqueName: \"kubernetes.io/projected/f0bf5778-7d94-4e68-8e6f-308482f98351-kube-api-access-k6b2s\") pod \"redhat-operators-28k8w\" (UID: \"f0bf5778-7d94-4e68-8e6f-308482f98351\") " pod="openshift-marketplace/redhat-operators-28k8w" Dec 10 12:18:04 crc kubenswrapper[4689]: I1210 12:18:04.740901 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f0bf5778-7d94-4e68-8e6f-308482f98351-catalog-content\") pod \"redhat-operators-28k8w\" (UID: \"f0bf5778-7d94-4e68-8e6f-308482f98351\") " pod="openshift-marketplace/redhat-operators-28k8w" Dec 10 12:18:04 crc kubenswrapper[4689]: I1210 12:18:04.740932 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f0bf5778-7d94-4e68-8e6f-308482f98351-utilities\") pod \"redhat-operators-28k8w\" (UID: \"f0bf5778-7d94-4e68-8e6f-308482f98351\") " pod="openshift-marketplace/redhat-operators-28k8w" Dec 10 12:18:04 crc kubenswrapper[4689]: I1210 12:18:04.742225 4689 generic.go:334] "Generic (PLEG): container finished" podID="23db100b-85ac-48e2-834b-741c9d94cf8f" containerID="ebc9f213009dd45ec7e001774119fe6377e608b7b3e57e4c41451c4f74ecf141" exitCode=0 Dec 10 12:18:04 crc kubenswrapper[4689]: I1210 12:18:04.742294 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6whbg" event={"ID":"23db100b-85ac-48e2-834b-741c9d94cf8f","Type":"ContainerDied","Data":"ebc9f213009dd45ec7e001774119fe6377e608b7b3e57e4c41451c4f74ecf141"} Dec 10 12:18:04 crc kubenswrapper[4689]: I1210 12:18:04.742322 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6whbg" event={"ID":"23db100b-85ac-48e2-834b-741c9d94cf8f","Type":"ContainerStarted","Data":"76e5943bb26a60c123c65fd4d3aeb7b6a224343282216711b5c7ae184abd9906"} Dec 10 12:18:04 crc kubenswrapper[4689]: I1210 12:18:04.745659 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-d7pkc" event={"ID":"abad9838-aab6-4f7c-a042-83924f9e0809","Type":"ContainerStarted","Data":"94243278c6013c8afaf77a1aba8efd4fe5473a54760ae57d6c68e5994725b0a7"} Dec 10 12:18:04 crc kubenswrapper[4689]: I1210 12:18:04.745719 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-d7pkc" event={"ID":"abad9838-aab6-4f7c-a042-83924f9e0809","Type":"ContainerStarted","Data":"ae43307732f16899d1a9a244315c541dc88a4ee38cd1e1b37dfd6f77cac84dc7"} Dec 10 12:18:04 crc kubenswrapper[4689]: I1210 12:18:04.753544 4689 generic.go:334] "Generic (PLEG): container finished" podID="e6304a1d-54e7-4f3b-878d-456b88936891" 
containerID="c7110b069837ad3480e89c64c7d9356e712a33d081c234ec5a5b0405961281d9" exitCode=0 Dec 10 12:18:04 crc kubenswrapper[4689]: I1210 12:18:04.753608 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-p7d6k" event={"ID":"e6304a1d-54e7-4f3b-878d-456b88936891","Type":"ContainerDied","Data":"c7110b069837ad3480e89c64c7d9356e712a33d081c234ec5a5b0405961281d9"} Dec 10 12:18:04 crc kubenswrapper[4689]: I1210 12:18:04.757404 4689 generic.go:334] "Generic (PLEG): container finished" podID="fd6edc95-9b5d-4438-b427-fd07f62090b7" containerID="00a72fb576e629a0d1dab2421b907c3e15d5728dfd433d6fe2f2f4e3299b9955" exitCode=0 Dec 10 12:18:04 crc kubenswrapper[4689]: I1210 12:18:04.757624 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-727gf" event={"ID":"fd6edc95-9b5d-4438-b427-fd07f62090b7","Type":"ContainerDied","Data":"00a72fb576e629a0d1dab2421b907c3e15d5728dfd433d6fe2f2f4e3299b9955"} Dec 10 12:18:04 crc kubenswrapper[4689]: I1210 12:18:04.757688 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-727gf" event={"ID":"fd6edc95-9b5d-4438-b427-fd07f62090b7","Type":"ContainerStarted","Data":"5ba89b7114a5390f038484104f82936b61dea0e0aa93fd26a81ad16ca06fc7aa"} Dec 10 12:18:04 crc kubenswrapper[4689]: I1210 12:18:04.763165 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-jw2qz" event={"ID":"dab0bc5b-ec14-46fa-9304-ab1b3b556eeb","Type":"ContainerStarted","Data":"4ddc971fc29a4fcac417581f06e1c93c14abd78908b22ecb1d9eae3ff9d0478b"} Dec 10 12:18:04 crc kubenswrapper[4689]: I1210 12:18:04.763206 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-jw2qz" event={"ID":"dab0bc5b-ec14-46fa-9304-ab1b3b556eeb","Type":"ContainerStarted","Data":"713cc4566af8a5d40c449ebfea8b7f99a6a9b661c21b5f9f4f2d120e8ad45e33"} Dec 10 12:18:04 crc kubenswrapper[4689]: I1210 12:18:04.763802 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-697d97f7c8-jw2qz" Dec 10 12:18:04 crc kubenswrapper[4689]: I1210 12:18:04.772072 4689 patch_prober.go:28] interesting pod/router-default-5444994796-xmwb5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 10 12:18:04 crc kubenswrapper[4689]: [-]has-synced failed: reason withheld Dec 10 12:18:04 crc kubenswrapper[4689]: [+]process-running ok Dec 10 12:18:04 crc kubenswrapper[4689]: healthz check failed Dec 10 12:18:04 crc kubenswrapper[4689]: I1210 12:18:04.772257 4689 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-xmwb5" podUID="265037d4-b03f-4d1b-a643-5b5eb4e59738" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 10 12:18:04 crc kubenswrapper[4689]: I1210 12:18:04.779325 4689 generic.go:334] "Generic (PLEG): container finished" podID="5dca2df7-7c4c-40bb-8302-d6c089fd5486" containerID="d554ce9f44128013a4ba643a6b440f84c6a3b3b7ede2530107890b9be885a80b" exitCode=0 Dec 10 12:18:04 crc kubenswrapper[4689]: I1210 12:18:04.780253 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29422815-gckmk" 
event={"ID":"5dca2df7-7c4c-40bb-8302-d6c089fd5486","Type":"ContainerDied","Data":"d554ce9f44128013a4ba643a6b440f84c6a3b3b7ede2530107890b9be885a80b"} Dec 10 12:18:04 crc kubenswrapper[4689]: I1210 12:18:04.811869 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-697d97f7c8-jw2qz" podStartSLOduration=123.811849096 podStartE2EDuration="2m3.811849096s" podCreationTimestamp="2025-12-10 12:16:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 12:18:04.790712754 +0000 UTC m=+152.578793892" watchObservedRunningTime="2025-12-10 12:18:04.811849096 +0000 UTC m=+152.599930234" Dec 10 12:18:04 crc kubenswrapper[4689]: I1210 12:18:04.824652 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-9hpt8"] Dec 10 12:18:04 crc kubenswrapper[4689]: W1210 12:18:04.837993 4689 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2ebd1829_eb7a_4128_ab9d_79a1f75fa39e.slice/crio-8e634e111acae47c25577d040359103870e2f4d9c00bbe06c93c2732170af46d WatchSource:0}: Error finding container 8e634e111acae47c25577d040359103870e2f4d9c00bbe06c93c2732170af46d: Status 404 returned error can't find the container with id 8e634e111acae47c25577d040359103870e2f4d9c00bbe06c93c2732170af46d Dec 10 12:18:04 crc kubenswrapper[4689]: I1210 12:18:04.841871 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-d7pkc" podStartSLOduration=10.841853356 podStartE2EDuration="10.841853356s" podCreationTimestamp="2025-12-10 12:17:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 12:18:04.83631486 +0000 UTC m=+152.624395998" watchObservedRunningTime="2025-12-10 12:18:04.841853356 +0000 UTC m=+152.629934494" Dec 10 12:18:04 crc kubenswrapper[4689]: I1210 12:18:04.842418 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k6b2s\" (UniqueName: \"kubernetes.io/projected/f0bf5778-7d94-4e68-8e6f-308482f98351-kube-api-access-k6b2s\") pod \"redhat-operators-28k8w\" (UID: \"f0bf5778-7d94-4e68-8e6f-308482f98351\") " pod="openshift-marketplace/redhat-operators-28k8w" Dec 10 12:18:04 crc kubenswrapper[4689]: I1210 12:18:04.842820 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f0bf5778-7d94-4e68-8e6f-308482f98351-catalog-content\") pod \"redhat-operators-28k8w\" (UID: \"f0bf5778-7d94-4e68-8e6f-308482f98351\") " pod="openshift-marketplace/redhat-operators-28k8w" Dec 10 12:18:04 crc kubenswrapper[4689]: I1210 12:18:04.842886 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f0bf5778-7d94-4e68-8e6f-308482f98351-utilities\") pod \"redhat-operators-28k8w\" (UID: \"f0bf5778-7d94-4e68-8e6f-308482f98351\") " pod="openshift-marketplace/redhat-operators-28k8w" Dec 10 12:18:04 crc kubenswrapper[4689]: I1210 12:18:04.862459 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f0bf5778-7d94-4e68-8e6f-308482f98351-utilities\") pod \"redhat-operators-28k8w\" (UID: \"f0bf5778-7d94-4e68-8e6f-308482f98351\") " 
pod="openshift-marketplace/redhat-operators-28k8w" Dec 10 12:18:04 crc kubenswrapper[4689]: I1210 12:18:04.869113 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f0bf5778-7d94-4e68-8e6f-308482f98351-catalog-content\") pod \"redhat-operators-28k8w\" (UID: \"f0bf5778-7d94-4e68-8e6f-308482f98351\") " pod="openshift-marketplace/redhat-operators-28k8w" Dec 10 12:18:04 crc kubenswrapper[4689]: I1210 12:18:04.874551 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k6b2s\" (UniqueName: \"kubernetes.io/projected/f0bf5778-7d94-4e68-8e6f-308482f98351-kube-api-access-k6b2s\") pod \"redhat-operators-28k8w\" (UID: \"f0bf5778-7d94-4e68-8e6f-308482f98351\") " pod="openshift-marketplace/redhat-operators-28k8w" Dec 10 12:18:04 crc kubenswrapper[4689]: I1210 12:18:04.962731 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-28k8w" Dec 10 12:18:05 crc kubenswrapper[4689]: I1210 12:18:05.178285 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-28k8w"] Dec 10 12:18:05 crc kubenswrapper[4689]: W1210 12:18:05.188342 4689 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf0bf5778_7d94_4e68_8e6f_308482f98351.slice/crio-fd1efacf0e69ab8ca75a3bdbfd33e7f0f44d97c5891445efb5ab99d0cde8ac7c WatchSource:0}: Error finding container fd1efacf0e69ab8ca75a3bdbfd33e7f0f44d97c5891445efb5ab99d0cde8ac7c: Status 404 returned error can't find the container with id fd1efacf0e69ab8ca75a3bdbfd33e7f0f44d97c5891445efb5ab99d0cde8ac7c Dec 10 12:18:05 crc kubenswrapper[4689]: I1210 12:18:05.377446 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 10 12:18:05 crc kubenswrapper[4689]: I1210 12:18:05.377551 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 10 12:18:05 crc kubenswrapper[4689]: I1210 12:18:05.378523 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 10 12:18:05 crc kubenswrapper[4689]: I1210 12:18:05.383110 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 10 12:18:05 crc kubenswrapper[4689]: I1210 12:18:05.444441 4689 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 10 12:18:05 crc kubenswrapper[4689]: I1210 12:18:05.479693 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 10 12:18:05 crc kubenswrapper[4689]: I1210 12:18:05.479817 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 10 12:18:05 crc kubenswrapper[4689]: I1210 12:18:05.482665 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 10 12:18:05 crc kubenswrapper[4689]: I1210 12:18:05.483034 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 10 12:18:05 crc kubenswrapper[4689]: I1210 12:18:05.621396 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 10 12:18:05 crc kubenswrapper[4689]: I1210 12:18:05.643361 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-76f77b778f-g8snw" Dec 10 12:18:05 crc kubenswrapper[4689]: I1210 12:18:05.644126 4689 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-76f77b778f-g8snw" Dec 10 12:18:05 crc kubenswrapper[4689]: I1210 12:18:05.651021 4689 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-76f77b778f-g8snw" Dec 10 12:18:05 crc kubenswrapper[4689]: W1210 12:18:05.707263 4689 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5fe485a1_e14f_4c09_b5b9_f252bc42b7e8.slice/crio-a7a7ad9dde42cc91f4c8d37c75d9859da936700c2f0ec87a53ca966117920d20 WatchSource:0}: Error finding container a7a7ad9dde42cc91f4c8d37c75d9859da936700c2f0ec87a53ca966117920d20: Status 404 returned error can't find the container with id a7a7ad9dde42cc91f4c8d37c75d9859da936700c2f0ec87a53ca966117920d20 Dec 10 12:18:05 crc kubenswrapper[4689]: I1210 12:18:05.762706 4689 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 10 12:18:05 crc kubenswrapper[4689]: I1210 12:18:05.795849 4689 patch_prober.go:28] interesting pod/router-default-5444994796-xmwb5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 10 12:18:05 crc kubenswrapper[4689]: [-]has-synced failed: reason withheld Dec 10 12:18:05 crc kubenswrapper[4689]: [+]process-running ok Dec 10 12:18:05 crc kubenswrapper[4689]: healthz check failed Dec 10 12:18:05 crc kubenswrapper[4689]: I1210 12:18:05.796420 4689 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-xmwb5" podUID="265037d4-b03f-4d1b-a643-5b5eb4e59738" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 10 12:18:05 crc kubenswrapper[4689]: I1210 12:18:05.836559 4689 generic.go:334] "Generic (PLEG): container finished" podID="2ebd1829-eb7a-4128-ab9d-79a1f75fa39e" containerID="12d0bde844504c3fe38d57afd29b65a25f56fed6cc2e2a01594b03339a1713dc" exitCode=0 Dec 10 12:18:05 crc kubenswrapper[4689]: I1210 12:18:05.836615 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9hpt8" event={"ID":"2ebd1829-eb7a-4128-ab9d-79a1f75fa39e","Type":"ContainerDied","Data":"12d0bde844504c3fe38d57afd29b65a25f56fed6cc2e2a01594b03339a1713dc"} Dec 10 12:18:05 crc kubenswrapper[4689]: I1210 12:18:05.836640 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9hpt8" event={"ID":"2ebd1829-eb7a-4128-ab9d-79a1f75fa39e","Type":"ContainerStarted","Data":"8e634e111acae47c25577d040359103870e2f4d9c00bbe06c93c2732170af46d"} Dec 10 12:18:05 crc kubenswrapper[4689]: I1210 12:18:05.838694 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"a7a7ad9dde42cc91f4c8d37c75d9859da936700c2f0ec87a53ca966117920d20"} Dec 10 12:18:05 crc kubenswrapper[4689]: I1210 12:18:05.848897 4689 generic.go:334] "Generic (PLEG): container finished" podID="f0bf5778-7d94-4e68-8e6f-308482f98351" containerID="4a7ec41f0d24401ab33f58f2bee07a5fa44879b82bb6209ca294b37a085c7748" exitCode=0 Dec 10 12:18:05 crc kubenswrapper[4689]: I1210 12:18:05.849744 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-28k8w" event={"ID":"f0bf5778-7d94-4e68-8e6f-308482f98351","Type":"ContainerDied","Data":"4a7ec41f0d24401ab33f58f2bee07a5fa44879b82bb6209ca294b37a085c7748"} Dec 10 12:18:05 crc kubenswrapper[4689]: I1210 12:18:05.849773 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-28k8w" event={"ID":"f0bf5778-7d94-4e68-8e6f-308482f98351","Type":"ContainerStarted","Data":"fd1efacf0e69ab8ca75a3bdbfd33e7f0f44d97c5891445efb5ab99d0cde8ac7c"} Dec 10 12:18:05 crc kubenswrapper[4689]: I1210 12:18:05.861822 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-76f77b778f-g8snw" Dec 10 12:18:06 crc kubenswrapper[4689]: W1210 12:18:06.157656 4689 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9d751cbb_f2e2_430d_9754_c882a5e924a5.slice/crio-5bd9898dbf860b5651b7e38407ec31c39849ee589d2d2f9ae9a453bb07c3b368 
WatchSource:0}: Error finding container 5bd9898dbf860b5651b7e38407ec31c39849ee589d2d2f9ae9a453bb07c3b368: Status 404 returned error can't find the container with id 5bd9898dbf860b5651b7e38407ec31c39849ee589d2d2f9ae9a453bb07c3b368 Dec 10 12:18:06 crc kubenswrapper[4689]: I1210 12:18:06.233864 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Dec 10 12:18:06 crc kubenswrapper[4689]: I1210 12:18:06.234471 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Dec 10 12:18:06 crc kubenswrapper[4689]: I1210 12:18:06.236577 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Dec 10 12:18:06 crc kubenswrapper[4689]: I1210 12:18:06.236839 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-kjl2n" Dec 10 12:18:06 crc kubenswrapper[4689]: I1210 12:18:06.293257 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Dec 10 12:18:06 crc kubenswrapper[4689]: I1210 12:18:06.302616 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/23c0dfdb-b6d3-4a3e-b34f-56056add247a-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"23c0dfdb-b6d3-4a3e-b34f-56056add247a\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Dec 10 12:18:06 crc kubenswrapper[4689]: I1210 12:18:06.302678 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/23c0dfdb-b6d3-4a3e-b34f-56056add247a-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"23c0dfdb-b6d3-4a3e-b34f-56056add247a\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Dec 10 12:18:06 crc kubenswrapper[4689]: I1210 12:18:06.320373 4689 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29422815-gckmk" Dec 10 12:18:06 crc kubenswrapper[4689]: I1210 12:18:06.405064 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5dca2df7-7c4c-40bb-8302-d6c089fd5486-config-volume\") pod \"5dca2df7-7c4c-40bb-8302-d6c089fd5486\" (UID: \"5dca2df7-7c4c-40bb-8302-d6c089fd5486\") " Dec 10 12:18:06 crc kubenswrapper[4689]: I1210 12:18:06.405170 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-czsbh\" (UniqueName: \"kubernetes.io/projected/5dca2df7-7c4c-40bb-8302-d6c089fd5486-kube-api-access-czsbh\") pod \"5dca2df7-7c4c-40bb-8302-d6c089fd5486\" (UID: \"5dca2df7-7c4c-40bb-8302-d6c089fd5486\") " Dec 10 12:18:06 crc kubenswrapper[4689]: I1210 12:18:06.405211 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/5dca2df7-7c4c-40bb-8302-d6c089fd5486-secret-volume\") pod \"5dca2df7-7c4c-40bb-8302-d6c089fd5486\" (UID: \"5dca2df7-7c4c-40bb-8302-d6c089fd5486\") " Dec 10 12:18:06 crc kubenswrapper[4689]: I1210 12:18:06.405553 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/23c0dfdb-b6d3-4a3e-b34f-56056add247a-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"23c0dfdb-b6d3-4a3e-b34f-56056add247a\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Dec 10 12:18:06 crc kubenswrapper[4689]: I1210 12:18:06.405602 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/23c0dfdb-b6d3-4a3e-b34f-56056add247a-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"23c0dfdb-b6d3-4a3e-b34f-56056add247a\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Dec 10 12:18:06 crc kubenswrapper[4689]: I1210 12:18:06.405712 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/23c0dfdb-b6d3-4a3e-b34f-56056add247a-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"23c0dfdb-b6d3-4a3e-b34f-56056add247a\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Dec 10 12:18:06 crc kubenswrapper[4689]: I1210 12:18:06.408092 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5dca2df7-7c4c-40bb-8302-d6c089fd5486-config-volume" (OuterVolumeSpecName: "config-volume") pod "5dca2df7-7c4c-40bb-8302-d6c089fd5486" (UID: "5dca2df7-7c4c-40bb-8302-d6c089fd5486"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 12:18:06 crc kubenswrapper[4689]: I1210 12:18:06.417157 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5dca2df7-7c4c-40bb-8302-d6c089fd5486-kube-api-access-czsbh" (OuterVolumeSpecName: "kube-api-access-czsbh") pod "5dca2df7-7c4c-40bb-8302-d6c089fd5486" (UID: "5dca2df7-7c4c-40bb-8302-d6c089fd5486"). InnerVolumeSpecName "kube-api-access-czsbh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 12:18:06 crc kubenswrapper[4689]: I1210 12:18:06.425960 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5dca2df7-7c4c-40bb-8302-d6c089fd5486-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "5dca2df7-7c4c-40bb-8302-d6c089fd5486" (UID: "5dca2df7-7c4c-40bb-8302-d6c089fd5486"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 12:18:06 crc kubenswrapper[4689]: I1210 12:18:06.440831 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/23c0dfdb-b6d3-4a3e-b34f-56056add247a-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"23c0dfdb-b6d3-4a3e-b34f-56056add247a\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Dec 10 12:18:06 crc kubenswrapper[4689]: I1210 12:18:06.506463 4689 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-czsbh\" (UniqueName: \"kubernetes.io/projected/5dca2df7-7c4c-40bb-8302-d6c089fd5486-kube-api-access-czsbh\") on node \"crc\" DevicePath \"\"" Dec 10 12:18:06 crc kubenswrapper[4689]: I1210 12:18:06.506498 4689 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/5dca2df7-7c4c-40bb-8302-d6c089fd5486-secret-volume\") on node \"crc\" DevicePath \"\"" Dec 10 12:18:06 crc kubenswrapper[4689]: I1210 12:18:06.506508 4689 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5dca2df7-7c4c-40bb-8302-d6c089fd5486-config-volume\") on node \"crc\" DevicePath \"\"" Dec 10 12:18:06 crc kubenswrapper[4689]: W1210 12:18:06.579797 4689 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3b6479f0_333b_4a96_9adf_2099afdc2447.slice/crio-13a235bd63071e84df7bca5495bafcf4e2061e303d2fa0f1df4be285852db724 WatchSource:0}: Error finding container 13a235bd63071e84df7bca5495bafcf4e2061e303d2fa0f1df4be285852db724: Status 404 returned error can't find the container with id 13a235bd63071e84df7bca5495bafcf4e2061e303d2fa0f1df4be285852db724 Dec 10 12:18:06 crc kubenswrapper[4689]: I1210 12:18:06.606876 4689 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Dec 10 12:18:06 crc kubenswrapper[4689]: I1210 12:18:06.649320 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-d5s24"
Dec 10 12:18:06 crc kubenswrapper[4689]: I1210 12:18:06.770236 4689 patch_prober.go:28] interesting pod/router-default-5444994796-xmwb5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Dec 10 12:18:06 crc kubenswrapper[4689]: [-]has-synced failed: reason withheld
Dec 10 12:18:06 crc kubenswrapper[4689]: [+]process-running ok
Dec 10 12:18:06 crc kubenswrapper[4689]: healthz check failed
Dec 10 12:18:06 crc kubenswrapper[4689]: I1210 12:18:06.770285 4689 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-xmwb5" podUID="265037d4-b03f-4d1b-a643-5b5eb4e59738" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Dec 10 12:18:06 crc kubenswrapper[4689]: I1210 12:18:06.873851 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"2d0fa000d9e9b9875ff0e0e6511a49bfa27e079b7f55dea434f221d4381d22c3"}
Dec 10 12:18:06 crc kubenswrapper[4689]: I1210 12:18:06.880044 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29422815-gckmk" event={"ID":"5dca2df7-7c4c-40bb-8302-d6c089fd5486","Type":"ContainerDied","Data":"3cf05882f5625c66a53e8b93167ef8087f6183b91b00710a64b4105ffe095a53"}
Dec 10 12:18:06 crc kubenswrapper[4689]: I1210 12:18:06.880072 4689 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3cf05882f5625c66a53e8b93167ef8087f6183b91b00710a64b4105ffe095a53"
Dec 10 12:18:06 crc kubenswrapper[4689]: I1210 12:18:06.880117 4689 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29422815-gckmk"
Dec 10 12:18:06 crc kubenswrapper[4689]: I1210 12:18:06.907323 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"7686b5c4268b94414ac18348e00207e6c541732481104b9b4f0f0e290341fd35"}
Dec 10 12:18:06 crc kubenswrapper[4689]: I1210 12:18:06.907371 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"5bd9898dbf860b5651b7e38407ec31c39849ee589d2d2f9ae9a453bb07c3b368"}
Dec 10 12:18:06 crc kubenswrapper[4689]: I1210 12:18:06.916448 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"13a235bd63071e84df7bca5495bafcf4e2061e303d2fa0f1df4be285852db724"}
Dec 10 12:18:07 crc kubenswrapper[4689]: I1210 12:18:07.061053 4689 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-f9d7485db-9s8ps"
Dec 10 12:18:07 crc kubenswrapper[4689]: I1210 12:18:07.061273 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-7954f5f757-kpp45"
Dec 10 12:18:07 crc kubenswrapper[4689]: I1210 12:18:07.061901 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-f9d7485db-9s8ps"
Dec 10 12:18:07 crc kubenswrapper[4689]: I1210 12:18:07.066486 4689 patch_prober.go:28] interesting pod/console-f9d7485db-9s8ps container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.14:8443/health\": dial tcp 10.217.0.14:8443: connect: connection refused" start-of-body=
Dec 10 12:18:07 crc kubenswrapper[4689]: I1210 12:18:07.066541 4689 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-9s8ps" podUID="ee767cde-d698-4c01-b221-33c158999e60" containerName="console" probeResult="failure" output="Get \"https://10.217.0.14:8443/health\": dial tcp 10.217.0.14:8443: connect: connection refused"
Dec 10 12:18:07 crc kubenswrapper[4689]: I1210 12:18:07.169443 4689 patch_prober.go:28] interesting pod/machine-config-daemon-db6zk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 10 12:18:07 crc kubenswrapper[4689]: I1210 12:18:07.169497 4689 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-db6zk" podUID="a41ebdcd-910f-4669-992d-296e1a92162b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 10 12:18:07 crc kubenswrapper[4689]: I1210 12:18:07.228297 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"]
Dec 10 12:18:07 crc kubenswrapper[4689]: W1210 12:18:07.237034 4689 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod23c0dfdb_b6d3_4a3e_b34f_56056add247a.slice/crio-66be16ec1f652fd394da63c64ad626e742b920633f9aea79bf6b6b0d6a359763 WatchSource:0}: Error finding container 66be16ec1f652fd394da63c64ad626e742b920633f9aea79bf6b6b0d6a359763: Status 404 returned error can't find the container with id 66be16ec1f652fd394da63c64ad626e742b920633f9aea79bf6b6b0d6a359763
Dec 10 12:18:07 crc kubenswrapper[4689]: I1210 12:18:07.765761 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-xmwb5"
Dec 10 12:18:07 crc kubenswrapper[4689]: I1210 12:18:07.770334 4689 patch_prober.go:28] interesting pod/router-default-5444994796-xmwb5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Dec 10 12:18:07 crc kubenswrapper[4689]: [-]has-synced failed: reason withheld
Dec 10 12:18:07 crc kubenswrapper[4689]: [+]process-running ok
Dec 10 12:18:07 crc kubenswrapper[4689]: healthz check failed
Dec 10 12:18:07 crc kubenswrapper[4689]: I1210 12:18:07.770395 4689 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-xmwb5" podUID="265037d4-b03f-4d1b-a643-5b5eb4e59738" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Dec 10 12:18:07 crc kubenswrapper[4689]: I1210 12:18:07.974164 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"23c0dfdb-b6d3-4a3e-b34f-56056add247a","Type":"ContainerStarted","Data":"dd0f2e18db97826c413d9347386528cfe96024a6c30caa8dbb8f49e9753933bf"}
Dec 10 12:18:07 crc kubenswrapper[4689]: I1210 12:18:07.974208 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"23c0dfdb-b6d3-4a3e-b34f-56056add247a","Type":"ContainerStarted","Data":"66be16ec1f652fd394da63c64ad626e742b920633f9aea79bf6b6b0d6a359763"}
Dec 10 12:18:07 crc kubenswrapper[4689]: I1210 12:18:07.986182 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"de263327be0c082b04a1c2ae8deec510c4b0b453aaa63809e7b836a37caedeca"}
Dec 10 12:18:07 crc kubenswrapper[4689]: I1210 12:18:07.986871 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-xd92c"
Dec 10 12:18:07 crc kubenswrapper[4689]: I1210 12:18:07.990316 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/revision-pruner-9-crc" podStartSLOduration=1.990307005 podStartE2EDuration="1.990307005s" podCreationTimestamp="2025-12-10 12:18:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 12:18:07.988961222 +0000 UTC m=+155.777042350" watchObservedRunningTime="2025-12-10 12:18:07.990307005 +0000 UTC m=+155.778388133"
Dec 10 12:18:08 crc kubenswrapper[4689]: I1210 12:18:08.768796 4689 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5444994796-xmwb5"
Dec 10 12:18:08 crc kubenswrapper[4689]: I1210 12:18:08.770920 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-5444994796-xmwb5"
Dec 10 12:18:09 crc kubenswrapper[4689]: I1210 12:18:09.592320 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-tzqjc"
Dec 10 12:18:09 crc kubenswrapper[4689]: I1210 12:18:09.997937 4689 generic.go:334] "Generic (PLEG): container finished" podID="23c0dfdb-b6d3-4a3e-b34f-56056add247a" containerID="dd0f2e18db97826c413d9347386528cfe96024a6c30caa8dbb8f49e9753933bf" exitCode=0
Dec 10 12:18:09 crc kubenswrapper[4689]: I1210 12:18:09.998031 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"23c0dfdb-b6d3-4a3e-b34f-56056add247a","Type":"ContainerDied","Data":"dd0f2e18db97826c413d9347386528cfe96024a6c30caa8dbb8f49e9753933bf"}
Dec 10 12:18:10 crc kubenswrapper[4689]: I1210 12:18:10.541820 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"]
Dec 10 12:18:10 crc kubenswrapper[4689]: E1210 12:18:10.542314 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5dca2df7-7c4c-40bb-8302-d6c089fd5486" containerName="collect-profiles"
Dec 10 12:18:10 crc kubenswrapper[4689]: I1210 12:18:10.542326 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="5dca2df7-7c4c-40bb-8302-d6c089fd5486" containerName="collect-profiles"
Dec 10 12:18:10 crc kubenswrapper[4689]: I1210 12:18:10.542443 4689 memory_manager.go:354] "RemoveStaleState removing state" podUID="5dca2df7-7c4c-40bb-8302-d6c089fd5486" containerName="collect-profiles"
Dec 10 12:18:10 crc kubenswrapper[4689]: I1210 12:18:10.542824 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc"
Dec 10 12:18:10 crc kubenswrapper[4689]: I1210 12:18:10.544394 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"]
Dec 10 12:18:10 crc kubenswrapper[4689]: I1210 12:18:10.546047 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt"
Dec 10 12:18:10 crc kubenswrapper[4689]: I1210 12:18:10.547487 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n"
Dec 10 12:18:10 crc kubenswrapper[4689]: I1210 12:18:10.592989 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/466f9227-78bc-4750-b4ae-266a7c285a8a-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"466f9227-78bc-4750-b4ae-266a7c285a8a\") " pod="openshift-kube-apiserver/revision-pruner-8-crc"
Dec 10 12:18:10 crc kubenswrapper[4689]: I1210 12:18:10.593101 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/466f9227-78bc-4750-b4ae-266a7c285a8a-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"466f9227-78bc-4750-b4ae-266a7c285a8a\") " pod="openshift-kube-apiserver/revision-pruner-8-crc"
Dec 10 12:18:10 crc kubenswrapper[4689]: I1210 12:18:10.694390 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/466f9227-78bc-4750-b4ae-266a7c285a8a-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"466f9227-78bc-4750-b4ae-266a7c285a8a\") " pod="openshift-kube-apiserver/revision-pruner-8-crc"
Dec 10 12:18:10 crc kubenswrapper[4689]: I1210 12:18:10.694481 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/466f9227-78bc-4750-b4ae-266a7c285a8a-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"466f9227-78bc-4750-b4ae-266a7c285a8a\") " pod="openshift-kube-apiserver/revision-pruner-8-crc"
Dec 10 12:18:10 crc kubenswrapper[4689]: I1210 12:18:10.694580 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/466f9227-78bc-4750-b4ae-266a7c285a8a-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"466f9227-78bc-4750-b4ae-266a7c285a8a\") " pod="openshift-kube-apiserver/revision-pruner-8-crc"
Dec 10 12:18:10 crc kubenswrapper[4689]: I1210 12:18:10.733652 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/466f9227-78bc-4750-b4ae-266a7c285a8a-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"466f9227-78bc-4750-b4ae-266a7c285a8a\") " pod="openshift-kube-apiserver/revision-pruner-8-crc"
Dec 10 12:18:10 crc kubenswrapper[4689]: I1210 12:18:10.870593 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc"
Dec 10 12:18:11 crc kubenswrapper[4689]: I1210 12:18:11.261115 4689 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Dec 10 12:18:11 crc kubenswrapper[4689]: I1210 12:18:11.300545 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/23c0dfdb-b6d3-4a3e-b34f-56056add247a-kube-api-access\") pod \"23c0dfdb-b6d3-4a3e-b34f-56056add247a\" (UID: \"23c0dfdb-b6d3-4a3e-b34f-56056add247a\") "
Dec 10 12:18:11 crc kubenswrapper[4689]: I1210 12:18:11.300609 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/23c0dfdb-b6d3-4a3e-b34f-56056add247a-kubelet-dir\") pod \"23c0dfdb-b6d3-4a3e-b34f-56056add247a\" (UID: \"23c0dfdb-b6d3-4a3e-b34f-56056add247a\") "
Dec 10 12:18:11 crc kubenswrapper[4689]: I1210 12:18:11.300671 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/23c0dfdb-b6d3-4a3e-b34f-56056add247a-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "23c0dfdb-b6d3-4a3e-b34f-56056add247a" (UID: "23c0dfdb-b6d3-4a3e-b34f-56056add247a"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 10 12:18:11 crc kubenswrapper[4689]: I1210 12:18:11.300943 4689 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/23c0dfdb-b6d3-4a3e-b34f-56056add247a-kubelet-dir\") on node \"crc\" DevicePath \"\""
Dec 10 12:18:11 crc kubenswrapper[4689]: I1210 12:18:11.307043 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/23c0dfdb-b6d3-4a3e-b34f-56056add247a-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "23c0dfdb-b6d3-4a3e-b34f-56056add247a" (UID: "23c0dfdb-b6d3-4a3e-b34f-56056add247a"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 10 12:18:11 crc kubenswrapper[4689]: I1210 12:18:11.402602 4689 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/23c0dfdb-b6d3-4a3e-b34f-56056add247a-kube-api-access\") on node \"crc\" DevicePath \"\""
Dec 10 12:18:11 crc kubenswrapper[4689]: I1210 12:18:11.436053 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"]
Dec 10 12:18:12 crc kubenswrapper[4689]: I1210 12:18:12.016513 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"23c0dfdb-b6d3-4a3e-b34f-56056add247a","Type":"ContainerDied","Data":"66be16ec1f652fd394da63c64ad626e742b920633f9aea79bf6b6b0d6a359763"}
Dec 10 12:18:12 crc kubenswrapper[4689]: I1210 12:18:12.016877 4689 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="66be16ec1f652fd394da63c64ad626e742b920633f9aea79bf6b6b0d6a359763"
Dec 10 12:18:12 crc kubenswrapper[4689]: I1210 12:18:12.016937 4689 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Dec 10 12:18:12 crc kubenswrapper[4689]: I1210 12:18:12.041603 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"466f9227-78bc-4750-b4ae-266a7c285a8a","Type":"ContainerStarted","Data":"a18160490ddfb912b7563227fb951792169f77c66cf6debe3414fd778849c05a"}
Dec 10 12:18:13 crc kubenswrapper[4689]: I1210 12:18:13.048885 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"466f9227-78bc-4750-b4ae-266a7c285a8a","Type":"ContainerStarted","Data":"fa8c646555f3c5c308b2f693d02798e0c346f81febc759d319c8b7be987de759"}
Dec 10 12:18:13 crc kubenswrapper[4689]: I1210 12:18:13.062587 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-8-crc" podStartSLOduration=3.062569296 podStartE2EDuration="3.062569296s" podCreationTimestamp="2025-12-10 12:18:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 12:18:13.060534576 +0000 UTC m=+160.848615714" watchObservedRunningTime="2025-12-10 12:18:13.062569296 +0000 UTC m=+160.850650434"
Dec 10 12:18:14 crc kubenswrapper[4689]: I1210 12:18:14.071480 4689 generic.go:334] "Generic (PLEG): container finished" podID="466f9227-78bc-4750-b4ae-266a7c285a8a" containerID="fa8c646555f3c5c308b2f693d02798e0c346f81febc759d319c8b7be987de759" exitCode=0
Dec 10 12:18:14 crc kubenswrapper[4689]: I1210 12:18:14.071518 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"466f9227-78bc-4750-b4ae-266a7c285a8a","Type":"ContainerDied","Data":"fa8c646555f3c5c308b2f693d02798e0c346f81febc759d319c8b7be987de759"}
Dec 10 12:18:17 crc kubenswrapper[4689]: I1210 12:18:17.148766 4689 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-f9d7485db-9s8ps"
Dec 10 12:18:17 crc kubenswrapper[4689]: I1210 12:18:17.153264 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-f9d7485db-9s8ps"
Dec 10 12:18:17 crc kubenswrapper[4689]: I1210 12:18:17.465878 4689 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc"
Dec 10 12:18:17 crc kubenswrapper[4689]: I1210 12:18:17.590574 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/466f9227-78bc-4750-b4ae-266a7c285a8a-kube-api-access\") pod \"466f9227-78bc-4750-b4ae-266a7c285a8a\" (UID: \"466f9227-78bc-4750-b4ae-266a7c285a8a\") "
Dec 10 12:18:17 crc kubenswrapper[4689]: I1210 12:18:17.590629 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/466f9227-78bc-4750-b4ae-266a7c285a8a-kubelet-dir\") pod \"466f9227-78bc-4750-b4ae-266a7c285a8a\" (UID: \"466f9227-78bc-4750-b4ae-266a7c285a8a\") "
Dec 10 12:18:17 crc kubenswrapper[4689]: I1210 12:18:17.590960 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/466f9227-78bc-4750-b4ae-266a7c285a8a-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "466f9227-78bc-4750-b4ae-266a7c285a8a" (UID: "466f9227-78bc-4750-b4ae-266a7c285a8a"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 10 12:18:17 crc kubenswrapper[4689]: I1210 12:18:17.597711 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/466f9227-78bc-4750-b4ae-266a7c285a8a-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "466f9227-78bc-4750-b4ae-266a7c285a8a" (UID: "466f9227-78bc-4750-b4ae-266a7c285a8a"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 10 12:18:17 crc kubenswrapper[4689]: I1210 12:18:17.692630 4689 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/466f9227-78bc-4750-b4ae-266a7c285a8a-kube-api-access\") on node \"crc\" DevicePath \"\""
Dec 10 12:18:17 crc kubenswrapper[4689]: I1210 12:18:17.692672 4689 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/466f9227-78bc-4750-b4ae-266a7c285a8a-kubelet-dir\") on node \"crc\" DevicePath \"\""
Dec 10 12:18:18 crc kubenswrapper[4689]: I1210 12:18:18.095328 4689 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc"
Dec 10 12:18:18 crc kubenswrapper[4689]: I1210 12:18:18.095330 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"466f9227-78bc-4750-b4ae-266a7c285a8a","Type":"ContainerDied","Data":"a18160490ddfb912b7563227fb951792169f77c66cf6debe3414fd778849c05a"}
Dec 10 12:18:18 crc kubenswrapper[4689]: I1210 12:18:18.095524 4689 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a18160490ddfb912b7563227fb951792169f77c66cf6debe3414fd778849c05a"
Dec 10 12:18:23 crc kubenswrapper[4689]: I1210 12:18:23.802146 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/3a7c472c-c2dd-4a9d-aefb-48ac5cc196f8-metrics-certs\") pod \"network-metrics-daemon-2h8hs\" (UID: \"3a7c472c-c2dd-4a9d-aefb-48ac5cc196f8\") " pod="openshift-multus/network-metrics-daemon-2h8hs"
Dec 10 12:18:23 crc kubenswrapper[4689]: I1210 12:18:23.808500 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/3a7c472c-c2dd-4a9d-aefb-48ac5cc196f8-metrics-certs\") pod \"network-metrics-daemon-2h8hs\" (UID: \"3a7c472c-c2dd-4a9d-aefb-48ac5cc196f8\") " pod="openshift-multus/network-metrics-daemon-2h8hs"
Dec 10 12:18:24 crc kubenswrapper[4689]: I1210 12:18:24.024318 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2h8hs"
Dec 10 12:18:24 crc kubenswrapper[4689]: I1210 12:18:24.327029 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-697d97f7c8-jw2qz"
Dec 10 12:18:36 crc kubenswrapper[4689]: E1210 12:18:36.495606 4689 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18"
Dec 10 12:18:36 crc kubenswrapper[4689]: E1210 12:18:36.496374 4689 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tdscs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-6whbg_openshift-marketplace(23db100b-85ac-48e2-834b-741c9d94cf8f): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError"
Dec 10 12:18:36 crc kubenswrapper[4689]: E1210 12:18:36.498347 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-6whbg" podUID="23db100b-85ac-48e2-834b-741c9d94cf8f"
Dec 10 12:18:37 crc kubenswrapper[4689]: I1210 12:18:37.166342 4689 patch_prober.go:28] interesting pod/machine-config-daemon-db6zk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 10 12:18:37 crc kubenswrapper[4689]: I1210 12:18:37.166644 4689 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-db6zk" podUID="a41ebdcd-910f-4669-992d-296e1a92162b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 10 12:18:37 crc kubenswrapper[4689]: I1210 12:18:37.791053 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-qbqzn"
Dec 10 12:18:42 crc kubenswrapper[4689]: I1210 12:18:42.538177 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"]
Dec 10 12:18:42 crc kubenswrapper[4689]: E1210 12:18:42.538633 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="23c0dfdb-b6d3-4a3e-b34f-56056add247a" containerName="pruner"
Dec 10 12:18:42 crc kubenswrapper[4689]: I1210 12:18:42.538647 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="23c0dfdb-b6d3-4a3e-b34f-56056add247a" containerName="pruner"
Dec 10 12:18:42 crc kubenswrapper[4689]: E1210 12:18:42.538675 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="466f9227-78bc-4750-b4ae-266a7c285a8a" containerName="pruner"
Dec 10 12:18:42 crc kubenswrapper[4689]: I1210 12:18:42.538684 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="466f9227-78bc-4750-b4ae-266a7c285a8a" containerName="pruner"
Dec 10 12:18:42 crc kubenswrapper[4689]: I1210 12:18:42.538805 4689 memory_manager.go:354] "RemoveStaleState removing state" podUID="466f9227-78bc-4750-b4ae-266a7c285a8a" containerName="pruner"
Dec 10 12:18:42 crc kubenswrapper[4689]: I1210 12:18:42.538825 4689 memory_manager.go:354] "RemoveStaleState removing state" podUID="23c0dfdb-b6d3-4a3e-b34f-56056add247a" containerName="pruner"
Dec 10 12:18:42 crc kubenswrapper[4689]: I1210 12:18:42.539230 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc"
Dec 10 12:18:42 crc kubenswrapper[4689]: I1210 12:18:42.543057 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n"
Dec 10 12:18:42 crc kubenswrapper[4689]: I1210 12:18:42.543850 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt"
Dec 10 12:18:42 crc kubenswrapper[4689]: I1210 12:18:42.561370 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"]
Dec 10 12:18:42 crc kubenswrapper[4689]: I1210 12:18:42.670857 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/67253606-6335-4120-a1ac-557e53d8c470-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"67253606-6335-4120-a1ac-557e53d8c470\") " pod="openshift-kube-apiserver/revision-pruner-9-crc"
Dec 10 12:18:42 crc kubenswrapper[4689]: I1210 12:18:42.670938 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/67253606-6335-4120-a1ac-557e53d8c470-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"67253606-6335-4120-a1ac-557e53d8c470\") " pod="openshift-kube-apiserver/revision-pruner-9-crc"
Dec 10 12:18:42 crc kubenswrapper[4689]: I1210 12:18:42.772190 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/67253606-6335-4120-a1ac-557e53d8c470-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"67253606-6335-4120-a1ac-557e53d8c470\") " pod="openshift-kube-apiserver/revision-pruner-9-crc"
Dec 10 12:18:42 crc kubenswrapper[4689]: I1210 12:18:42.772299 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/67253606-6335-4120-a1ac-557e53d8c470-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"67253606-6335-4120-a1ac-557e53d8c470\") " pod="openshift-kube-apiserver/revision-pruner-9-crc"
Dec 10 12:18:42 crc kubenswrapper[4689]: I1210 12:18:42.772467 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/67253606-6335-4120-a1ac-557e53d8c470-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"67253606-6335-4120-a1ac-557e53d8c470\") " pod="openshift-kube-apiserver/revision-pruner-9-crc"
Dec 10 12:18:42 crc kubenswrapper[4689]: I1210 12:18:42.790021 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/67253606-6335-4120-a1ac-557e53d8c470-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"67253606-6335-4120-a1ac-557e53d8c470\") " pod="openshift-kube-apiserver/revision-pruner-9-crc"
Dec 10 12:18:42 crc kubenswrapper[4689]: I1210 12:18:42.880413 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc"
Dec 10 12:18:43 crc kubenswrapper[4689]: E1210 12:18:43.493748 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-6whbg" podUID="23db100b-85ac-48e2-834b-741c9d94cf8f"
Dec 10 12:18:44 crc kubenswrapper[4689]: E1210 12:18:44.116886 4689 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18"
Dec 10 12:18:44 crc kubenswrapper[4689]: E1210 12:18:44.117039 4689 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9bccn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-ks26t_openshift-marketplace(c8d409aa-8f6d-4ed5-816c-e572e371d425): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError"
Dec 10 12:18:44 crc kubenswrapper[4689]: E1210 12:18:44.118268 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-ks26t" podUID="c8d409aa-8f6d-4ed5-816c-e572e371d425"
Dec 10 12:18:44 crc kubenswrapper[4689]: E1210 12:18:44.189352 4689 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18"
Dec 10 12:18:44 crc kubenswrapper[4689]: E1210 12:18:44.189513 4689 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tknxs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-85rm9_openshift-marketplace(2d87c861-5bb2-4f79-8959-b2cebc28156d): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError"
Dec 10 12:18:44 crc kubenswrapper[4689]: E1210 12:18:44.190703 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-85rm9" podUID="2d87c861-5bb2-4f79-8959-b2cebc28156d"
Dec 10 12:18:44 crc kubenswrapper[4689]: E1210 12:18:44.323908 4689 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18"
Dec 10 12:18:44 crc kubenswrapper[4689]: E1210 12:18:44.324901 4689 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-c8scv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-727gf_openshift-marketplace(fd6edc95-9b5d-4438-b427-fd07f62090b7): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError"
Dec 10 12:18:44 crc kubenswrapper[4689]: E1210 12:18:44.327024 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-727gf" podUID="fd6edc95-9b5d-4438-b427-fd07f62090b7"
Dec 10 12:18:45 crc kubenswrapper[4689]: I1210 12:18:45.768592 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-xd92c"
Dec 10 12:18:47 crc kubenswrapper[4689]: E1210 12:18:47.636949 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-85rm9" podUID="2d87c861-5bb2-4f79-8959-b2cebc28156d"
Dec 10 12:18:47 crc kubenswrapper[4689]: E1210 12:18:47.637048 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-727gf" podUID="fd6edc95-9b5d-4438-b427-fd07f62090b7"
Dec 10 12:18:47 crc kubenswrapper[4689]: E1210 12:18:47.637066 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-ks26t" podUID="c8d409aa-8f6d-4ed5-816c-e572e371d425"
Dec 10 12:18:47 crc kubenswrapper[4689]: E1210 12:18:47.720028 4689 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18"
Dec 10 12:18:47 crc kubenswrapper[4689]: E1210 12:18:47.720183 4689 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zlgk4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-9hpt8_openshift-marketplace(2ebd1829-eb7a-4128-ab9d-79a1f75fa39e): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError"
Dec 10 12:18:47 crc kubenswrapper[4689]: E1210 12:18:47.721666 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-9hpt8" podUID="2ebd1829-eb7a-4128-ab9d-79a1f75fa39e"
Dec 10 12:18:48 crc kubenswrapper[4689]: I1210 12:18:48.333102 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-9-crc"]
Dec 10 12:18:48 crc kubenswrapper[4689]: I1210 12:18:48.334478 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc"
Dec 10 12:18:48 crc kubenswrapper[4689]: I1210 12:18:48.339123 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"]
Dec 10 12:18:48 crc kubenswrapper[4689]: I1210 12:18:48.451759 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ba69b6ca-5550-41d8-be54-d86e80a6aea6-kube-api-access\") pod \"installer-9-crc\" (UID: \"ba69b6ca-5550-41d8-be54-d86e80a6aea6\") " pod="openshift-kube-apiserver/installer-9-crc"
Dec 10 12:18:48 crc kubenswrapper[4689]: I1210 12:18:48.451874 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/ba69b6ca-5550-41d8-be54-d86e80a6aea6-var-lock\") pod \"installer-9-crc\" (UID: \"ba69b6ca-5550-41d8-be54-d86e80a6aea6\") " pod="openshift-kube-apiserver/installer-9-crc"
Dec 10 12:18:48 crc kubenswrapper[4689]: I1210 12:18:48.452036 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ba69b6ca-5550-41d8-be54-d86e80a6aea6-kubelet-dir\") pod \"installer-9-crc\" (UID: \"ba69b6ca-5550-41d8-be54-d86e80a6aea6\") " pod="openshift-kube-apiserver/installer-9-crc"
Dec 10 12:18:48 crc kubenswrapper[4689]: I1210 12:18:48.553299 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ba69b6ca-5550-41d8-be54-d86e80a6aea6-kubelet-dir\") pod \"installer-9-crc\" (UID: \"ba69b6ca-5550-41d8-be54-d86e80a6aea6\") " pod="openshift-kube-apiserver/installer-9-crc"
Dec 10 12:18:48 crc kubenswrapper[4689]: I1210 12:18:48.553378 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ba69b6ca-5550-41d8-be54-d86e80a6aea6-kube-api-access\") pod \"installer-9-crc\" (UID: \"ba69b6ca-5550-41d8-be54-d86e80a6aea6\") " pod="openshift-kube-apiserver/installer-9-crc"
Dec 10 12:18:48 crc kubenswrapper[4689]: I1210 12:18:48.553423 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/ba69b6ca-5550-41d8-be54-d86e80a6aea6-var-lock\") pod \"installer-9-crc\" (UID: \"ba69b6ca-5550-41d8-be54-d86e80a6aea6\") " pod="openshift-kube-apiserver/installer-9-crc"
Dec 10 12:18:48 crc kubenswrapper[4689]: I1210 12:18:48.553491 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/ba69b6ca-5550-41d8-be54-d86e80a6aea6-var-lock\") pod \"installer-9-crc\" (UID: \"ba69b6ca-5550-41d8-be54-d86e80a6aea6\") " pod="openshift-kube-apiserver/installer-9-crc"
Dec 10 12:18:48 crc kubenswrapper[4689]: I1210 12:18:48.553540 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ba69b6ca-5550-41d8-be54-d86e80a6aea6-kubelet-dir\") pod \"installer-9-crc\" (UID: \"ba69b6ca-5550-41d8-be54-d86e80a6aea6\") " pod="openshift-kube-apiserver/installer-9-crc"
Dec 10 12:18:48 crc kubenswrapper[4689]: I1210 12:18:48.588490 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ba69b6ca-5550-41d8-be54-d86e80a6aea6-kube-api-access\") pod \"installer-9-crc\" (UID: \"ba69b6ca-5550-41d8-be54-d86e80a6aea6\") " pod="openshift-kube-apiserver/installer-9-crc"
Dec 10 12:18:48 crc kubenswrapper[4689]: I1210 12:18:48.662552 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc"
Dec 10 12:18:48 crc kubenswrapper[4689]: E1210 12:18:48.841232 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-9hpt8" podUID="2ebd1829-eb7a-4128-ab9d-79a1f75fa39e"
Dec 10 12:18:48 crc kubenswrapper[4689]: E1210 12:18:48.900486 4689 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18"
Dec 10 12:18:48 crc kubenswrapper[4689]: E1210 12:18:48.900699 4689 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-t2xk6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-p7d6k_openshift-marketplace(e6304a1d-54e7-4f3b-878d-456b88936891): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError"
Dec 10 12:18:48 crc kubenswrapper[4689]: E1210 12:18:48.902127 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-p7d6k" podUID="e6304a1d-54e7-4f3b-878d-456b88936891"
Dec 10 12:18:48 crc kubenswrapper[4689]: E1210 12:18:48.929386 4689 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18"
Dec 10 12:18:48 crc kubenswrapper[4689]: E1210 12:18:48.929554 4689 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5sjfd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-rkh5c_openshift-marketplace(d51b889b-7485-4c32-84de-3ddfd7ce23e9): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError"
Dec 10 12:18:48 crc kubenswrapper[4689]: E1210 12:18:48.931539 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-rkh5c" podUID="d51b889b-7485-4c32-84de-3ddfd7ce23e9"
Dec 10 12:18:48 crc kubenswrapper[4689]: E1210 12:18:48.957473 4689 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18"
Dec 10 12:18:48 crc kubenswrapper[4689]: E1210 12:18:48.957639 4689 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-k6b2s,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-28k8w_openshift-marketplace(f0bf5778-7d94-4e68-8e6f-308482f98351): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError"
Dec 10 12:18:48 crc kubenswrapper[4689]: E1210 12:18:48.958780 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-28k8w" podUID="f0bf5778-7d94-4e68-8e6f-308482f98351"
Dec 10 12:18:49 crc kubenswrapper[4689]: I1210 12:18:49.247234 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"]
Dec 10 12:18:49 crc kubenswrapper[4689]: W1210 12:18:49.260792 4689 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod67253606_6335_4120_a1ac_557e53d8c470.slice/crio-b129384a6586acd755846915c2b772258b71f14517f9527c28fa7b80830811cf WatchSource:0}: Error finding container b129384a6586acd755846915c2b772258b71f14517f9527c28fa7b80830811cf: Status 404 returned error can't find the container with id b129384a6586acd755846915c2b772258b71f14517f9527c28fa7b80830811cf
Dec 10 12:18:49 crc kubenswrapper[4689]: I1210 12:18:49.289389 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-2h8hs"]
Dec 10 12:18:49 crc kubenswrapper[4689]: I1210 12:18:49.296419 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"]
Dec 10 12:18:49 crc kubenswrapper[4689]: I1210 12:18:49.303027 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"67253606-6335-4120-a1ac-557e53d8c470","Type":"ContainerStarted","Data":"b129384a6586acd755846915c2b772258b71f14517f9527c28fa7b80830811cf"}
Dec 10 12:18:49 crc kubenswrapper[4689]: E1210 12:18:49.305172 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-p7d6k" podUID="e6304a1d-54e7-4f3b-878d-456b88936891"
Dec 10 12:18:49 crc kubenswrapper[4689]: E1210 12:18:49.305329 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-rkh5c" podUID="d51b889b-7485-4c32-84de-3ddfd7ce23e9"
Dec 10 12:18:49 crc kubenswrapper[4689]: E1210 12:18:49.305392 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-28k8w" podUID="f0bf5778-7d94-4e68-8e6f-308482f98351"
Dec 10 12:18:50 crc kubenswrapper[4689]: I1210 12:18:50.312450 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-2h8hs" event={"ID":"3a7c472c-c2dd-4a9d-aefb-48ac5cc196f8","Type":"ContainerStarted","Data":"e940df579ece06ca1ebfd665d8d8b0ac1f5d3cd85c51a4615088a6e86a27b443"}
Dec 10 12:18:50 crc kubenswrapper[4689]: I1210 12:18:50.312823 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-2h8hs" event={"ID":"3a7c472c-c2dd-4a9d-aefb-48ac5cc196f8","Type":"ContainerStarted","Data":"83f5194bb6fcf426c2e12ffc214af0daf7b384e0939cedae7a39c66dddd42285"}
Dec 10 12:18:50 crc kubenswrapper[4689]: I1210 12:18:50.312846 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-2h8hs" event={"ID":"3a7c472c-c2dd-4a9d-aefb-48ac5cc196f8","Type":"ContainerStarted","Data":"677b0923fae6dd566ed8f0b02085a05f8574a4539d6a627eec63392091c02d76"}
Dec 10 12:18:50 crc kubenswrapper[4689]: I1210 12:18:50.315964 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"67253606-6335-4120-a1ac-557e53d8c470","Type":"ContainerStarted","Data":"27f078a381b9547eb657fca982a5fcfd01efa5f58531fc1335abbc5f2471e0cf"}
Dec 10 12:18:50 crc kubenswrapper[4689]: I1210 12:18:50.320579 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"ba69b6ca-5550-41d8-be54-d86e80a6aea6","Type":"ContainerStarted","Data":"9ada8ddd0edee6b57de988ac76cc9f5269e5ea15ab1b584bd6f51705d72cfe78"}
Dec 10 12:18:50 crc kubenswrapper[4689]: I1210 12:18:50.320626 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"ba69b6ca-5550-41d8-be54-d86e80a6aea6","Type":"ContainerStarted","Data":"09e79805cbc85365b53a1a06952880d62d36305923188632579c4465e23b8cf7"}
Dec 10 12:18:50 crc kubenswrapper[4689]: I1210 12:18:50.347467 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-2h8hs" podStartSLOduration=169.347438626 podStartE2EDuration="2m49.347438626s" podCreationTimestamp="2025-12-10 12:16:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 12:18:50.341681065 +0000 UTC m=+198.129762223" watchObservedRunningTime="2025-12-10 12:18:50.347438626 +0000 UTC m=+198.135519804"
Dec 10 12:18:50 crc kubenswrapper[4689]: I1210 12:18:50.379912 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-9-crc" podStartSLOduration=8.379885718 podStartE2EDuration="8.379885718s" podCreationTimestamp="2025-12-10 12:18:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 12:18:50.376027967 +0000 UTC m=+198.164109125" watchObservedRunningTime="2025-12-10 12:18:50.379885718 +0000 UTC m=+198.167966866"
Dec 10 12:18:51 crc kubenswrapper[4689]: I1210 12:18:51.327450 4689 generic.go:334] "Generic (PLEG): container finished" podID="67253606-6335-4120-a1ac-557e53d8c470" containerID="27f078a381b9547eb657fca982a5fcfd01efa5f58531fc1335abbc5f2471e0cf" exitCode=0
Dec 10 12:18:51 crc kubenswrapper[4689]: I1210 12:18:51.327489 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"67253606-6335-4120-a1ac-557e53d8c470","Type":"ContainerDied","Data":"27f078a381b9547eb657fca982a5fcfd01efa5f58531fc1335abbc5f2471e0cf"}
Dec 10 12:18:51 crc kubenswrapper[4689]: I1210 12:18:51.341513 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-9-crc" podStartSLOduration=3.341489859 podStartE2EDuration="3.341489859s" podCreationTimestamp="2025-12-10 12:18:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 12:18:50.422173778 +0000 UTC m=+198.210254916" watchObservedRunningTime="2025-12-10 12:18:51.341489859 +0000 UTC m=+199.129571037"
Dec 10 12:18:52 crc kubenswrapper[4689]: I1210 12:18:52.585436 4689 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc"
Dec 10 12:18:52 crc kubenswrapper[4689]: I1210 12:18:52.707443 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/67253606-6335-4120-a1ac-557e53d8c470-kubelet-dir\") pod \"67253606-6335-4120-a1ac-557e53d8c470\" (UID: \"67253606-6335-4120-a1ac-557e53d8c470\") "
Dec 10 12:18:52 crc kubenswrapper[4689]: I1210 12:18:52.707549 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/67253606-6335-4120-a1ac-557e53d8c470-kube-api-access\") pod \"67253606-6335-4120-a1ac-557e53d8c470\" (UID: \"67253606-6335-4120-a1ac-557e53d8c470\") "
Dec 10 12:18:52 crc kubenswrapper[4689]: I1210 12:18:52.708843 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/67253606-6335-4120-a1ac-557e53d8c470-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "67253606-6335-4120-a1ac-557e53d8c470" (UID: "67253606-6335-4120-a1ac-557e53d8c470"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 10 12:18:52 crc kubenswrapper[4689]: I1210 12:18:52.714054 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/67253606-6335-4120-a1ac-557e53d8c470-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "67253606-6335-4120-a1ac-557e53d8c470" (UID: "67253606-6335-4120-a1ac-557e53d8c470"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 10 12:18:52 crc kubenswrapper[4689]: I1210 12:18:52.809482 4689 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/67253606-6335-4120-a1ac-557e53d8c470-kubelet-dir\") on node \"crc\" DevicePath \"\""
Dec 10 12:18:52 crc kubenswrapper[4689]: I1210 12:18:52.809522 4689 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/67253606-6335-4120-a1ac-557e53d8c470-kube-api-access\") on node \"crc\" DevicePath \"\""
Dec 10 12:18:53 crc kubenswrapper[4689]: I1210 12:18:53.340343 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"67253606-6335-4120-a1ac-557e53d8c470","Type":"ContainerDied","Data":"b129384a6586acd755846915c2b772258b71f14517f9527c28fa7b80830811cf"}
Dec 10 12:18:53 crc kubenswrapper[4689]: I1210 12:18:53.340388 4689 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b129384a6586acd755846915c2b772258b71f14517f9527c28fa7b80830811cf"
Dec 10 12:18:53 crc kubenswrapper[4689]: I1210 12:18:53.340449 4689 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc"
Dec 10 12:18:58 crc kubenswrapper[4689]: I1210 12:18:58.368624 4689 generic.go:334] "Generic (PLEG): container finished" podID="23db100b-85ac-48e2-834b-741c9d94cf8f" containerID="6b271ff39a1b946fd98129c09345cee53c3d1126e3ba3c50af35922721535a36" exitCode=0
Dec 10 12:18:58 crc kubenswrapper[4689]: I1210 12:18:58.368718 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6whbg" event={"ID":"23db100b-85ac-48e2-834b-741c9d94cf8f","Type":"ContainerDied","Data":"6b271ff39a1b946fd98129c09345cee53c3d1126e3ba3c50af35922721535a36"}
Dec 10 12:18:59 crc kubenswrapper[4689]: I1210 12:18:59.376051 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6whbg" event={"ID":"23db100b-85ac-48e2-834b-741c9d94cf8f","Type":"ContainerStarted","Data":"c9b8b0ffe112ba6eacb1e12926d0088d7152568c29cdbf6319b0ab22a15bd941"}
Dec 10 12:18:59 crc kubenswrapper[4689]: I1210 12:18:59.398273 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-6whbg" podStartSLOduration=2.286874407 podStartE2EDuration="56.398259023s" podCreationTimestamp="2025-12-10 12:18:03 +0000 UTC" firstStartedPulling="2025-12-10 12:18:04.744120495 +0000 UTC m=+152.532201633" lastFinishedPulling="2025-12-10 12:18:58.855505111 +0000 UTC m=+206.643586249" observedRunningTime="2025-12-10 12:18:59.396387933 +0000 UTC m=+207.184469071" watchObservedRunningTime="2025-12-10 12:18:59.398259023 +0000 UTC m=+207.186340161"
Dec 10 12:19:01 crc kubenswrapper[4689]: I1210 12:19:01.387134 4689 generic.go:334] "Generic (PLEG): container finished" podID="c8d409aa-8f6d-4ed5-816c-e572e371d425" containerID="059e8153783f027bbbe033a8a38ef043e0a427a24c4350ab8f8331086abbfe69" exitCode=0
Dec 10 12:19:01 crc kubenswrapper[4689]: I1210 12:19:01.387212 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ks26t" event={"ID":"c8d409aa-8f6d-4ed5-816c-e572e371d425","Type":"ContainerDied","Data":"059e8153783f027bbbe033a8a38ef043e0a427a24c4350ab8f8331086abbfe69"}
Dec 10 12:19:01 crc kubenswrapper[4689]: I1210 12:19:01.390039 4689 generic.go:334] "Generic (PLEG): container finished" podID="e6304a1d-54e7-4f3b-878d-456b88936891" containerID="e42dd1697bacc59c500a2df59bbeb6a6cb1ff0b409c38ec038f107e4037ed7ae" exitCode=0
Dec 10 12:19:01 crc kubenswrapper[4689]: I1210 12:19:01.390076 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-p7d6k" event={"ID":"e6304a1d-54e7-4f3b-878d-456b88936891","Type":"ContainerDied","Data":"e42dd1697bacc59c500a2df59bbeb6a6cb1ff0b409c38ec038f107e4037ed7ae"}
Dec 10 12:19:02 crc kubenswrapper[4689]: I1210 12:19:02.396958 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-85rm9" event={"ID":"2d87c861-5bb2-4f79-8959-b2cebc28156d","Type":"ContainerStarted","Data":"8f9c1bb135c9d1dc186e824e391fbe8175124ebab3ca118d7f460031fae6cb3b"}
Dec 10 12:19:02 crc kubenswrapper[4689]: I1210 12:19:02.399746 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-p7d6k" event={"ID":"e6304a1d-54e7-4f3b-878d-456b88936891","Type":"ContainerStarted","Data":"4f82ac4b2385e0c1b06cad2feac6378b757057e4f66d5dd59abb89051626b848"}
Dec 10 12:19:02 crc kubenswrapper[4689]: I1210 12:19:02.436445 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-p7d6k" podStartSLOduration=4.012122429 podStartE2EDuration="1m1.436425572s" podCreationTimestamp="2025-12-10 12:18:01 +0000 UTC" firstStartedPulling="2025-12-10 12:18:04.756143702 +0000 UTC m=+152.544224840" lastFinishedPulling="2025-12-10 12:19:02.180446835 +0000 UTC m=+209.968527983" observedRunningTime="2025-12-10 12:19:02.433792313 +0000 UTC m=+210.221873451" watchObservedRunningTime="2025-12-10 12:19:02.436425572 +0000 UTC m=+210.224506710"
Dec 10 12:19:03 crc kubenswrapper[4689]: I1210 12:19:03.407664 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ks26t" event={"ID":"c8d409aa-8f6d-4ed5-816c-e572e371d425","Type":"ContainerStarted","Data":"a946b96c40a3419ede8d391181b1b722ccbf17d45a9c8928c218593dd99b6e6a"}
Dec 10 12:19:03 crc kubenswrapper[4689]: I1210 12:19:03.410373 4689 generic.go:334] "Generic (PLEG): container finished" podID="2d87c861-5bb2-4f79-8959-b2cebc28156d" containerID="8f9c1bb135c9d1dc186e824e391fbe8175124ebab3ca118d7f460031fae6cb3b" exitCode=0
Dec 10 12:19:03 crc kubenswrapper[4689]: I1210 12:19:03.410413 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-85rm9" event={"ID":"2d87c861-5bb2-4f79-8959-b2cebc28156d","Type":"ContainerDied","Data":"8f9c1bb135c9d1dc186e824e391fbe8175124ebab3ca118d7f460031fae6cb3b"}
Dec 10 12:19:03 crc kubenswrapper[4689]: I1210 12:19:03.428431 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-ks26t" podStartSLOduration=3.639749569 podStartE2EDuration="1m2.428408061s" podCreationTimestamp="2025-12-10 12:18:01 +0000 UTC" firstStartedPulling="2025-12-10 12:18:03.73809199 +0000 UTC m=+151.526173118" lastFinishedPulling="2025-12-10 12:19:02.526750472 +0000 UTC m=+210.314831610" observedRunningTime="2025-12-10 12:19:03.423160083 +0000 UTC m=+211.211241211" watchObservedRunningTime="2025-12-10 12:19:03.428408061 +0000 UTC m=+211.216489209"
Dec 10 12:19:03 crc kubenswrapper[4689]: I1210 12:19:03.444400 4689 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-6whbg"
Dec 10 12:19:03 crc kubenswrapper[4689]: I1210 12:19:03.444439 4689 kubelet.go:2542]
"SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-6whbg" Dec 10 12:19:03 crc kubenswrapper[4689]: I1210 12:19:03.893205 4689 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-6whbg" Dec 10 12:19:04 crc kubenswrapper[4689]: I1210 12:19:04.451081 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-6whbg" Dec 10 12:19:07 crc kubenswrapper[4689]: I1210 12:19:07.166206 4689 patch_prober.go:28] interesting pod/machine-config-daemon-db6zk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 10 12:19:07 crc kubenswrapper[4689]: I1210 12:19:07.166566 4689 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-db6zk" podUID="a41ebdcd-910f-4669-992d-296e1a92162b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 10 12:19:07 crc kubenswrapper[4689]: I1210 12:19:07.166647 4689 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-db6zk" Dec 10 12:19:07 crc kubenswrapper[4689]: I1210 12:19:07.167455 4689 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"5ffc5e2e1cb81aa530273fc5c44b4eb25b760dfffa203e312280e7a1c0150505"} pod="openshift-machine-config-operator/machine-config-daemon-db6zk" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 10 12:19:07 crc kubenswrapper[4689]: I1210 12:19:07.167558 4689 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-db6zk" podUID="a41ebdcd-910f-4669-992d-296e1a92162b" containerName="machine-config-daemon" containerID="cri-o://5ffc5e2e1cb81aa530273fc5c44b4eb25b760dfffa203e312280e7a1c0150505" gracePeriod=600 Dec 10 12:19:11 crc kubenswrapper[4689]: I1210 12:19:11.374703 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-ks26t" Dec 10 12:19:11 crc kubenswrapper[4689]: I1210 12:19:11.374988 4689 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-ks26t" Dec 10 12:19:11 crc kubenswrapper[4689]: I1210 12:19:11.425918 4689 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-ks26t" Dec 10 12:19:11 crc kubenswrapper[4689]: I1210 12:19:11.988090 4689 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-p7d6k" Dec 10 12:19:11 crc kubenswrapper[4689]: I1210 12:19:11.989038 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-p7d6k" Dec 10 12:19:12 crc kubenswrapper[4689]: I1210 12:19:12.030709 4689 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-p7d6k" Dec 10 12:19:12 crc kubenswrapper[4689]: I1210 12:19:12.574279 4689 generic.go:334] "Generic (PLEG): container finished" 
podID="a41ebdcd-910f-4669-992d-296e1a92162b" containerID="5ffc5e2e1cb81aa530273fc5c44b4eb25b760dfffa203e312280e7a1c0150505" exitCode=0 Dec 10 12:19:12 crc kubenswrapper[4689]: I1210 12:19:12.574368 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-db6zk" event={"ID":"a41ebdcd-910f-4669-992d-296e1a92162b","Type":"ContainerDied","Data":"5ffc5e2e1cb81aa530273fc5c44b4eb25b760dfffa203e312280e7a1c0150505"} Dec 10 12:19:12 crc kubenswrapper[4689]: I1210 12:19:12.647986 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-ks26t" Dec 10 12:19:13 crc kubenswrapper[4689]: I1210 12:19:13.620723 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-p7d6k" Dec 10 12:19:13 crc kubenswrapper[4689]: I1210 12:19:13.669622 4689 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-p7d6k"] Dec 10 12:19:15 crc kubenswrapper[4689]: I1210 12:19:15.594683 4689 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-p7d6k" podUID="e6304a1d-54e7-4f3b-878d-456b88936891" containerName="registry-server" containerID="cri-o://4f82ac4b2385e0c1b06cad2feac6378b757057e4f66d5dd59abb89051626b848" gracePeriod=2 Dec 10 12:19:16 crc kubenswrapper[4689]: I1210 12:19:16.529320 4689 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-7jsv6"] Dec 10 12:19:17 crc kubenswrapper[4689]: I1210 12:19:17.610140 4689 generic.go:334] "Generic (PLEG): container finished" podID="e6304a1d-54e7-4f3b-878d-456b88936891" containerID="4f82ac4b2385e0c1b06cad2feac6378b757057e4f66d5dd59abb89051626b848" exitCode=0 Dec 10 12:19:17 crc kubenswrapper[4689]: I1210 12:19:17.610209 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-p7d6k" event={"ID":"e6304a1d-54e7-4f3b-878d-456b88936891","Type":"ContainerDied","Data":"4f82ac4b2385e0c1b06cad2feac6378b757057e4f66d5dd59abb89051626b848"} Dec 10 12:19:21 crc kubenswrapper[4689]: E1210 12:19:21.989211 4689 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 4f82ac4b2385e0c1b06cad2feac6378b757057e4f66d5dd59abb89051626b848 is running failed: container process not found" containerID="4f82ac4b2385e0c1b06cad2feac6378b757057e4f66d5dd59abb89051626b848" cmd=["grpc_health_probe","-addr=:50051"] Dec 10 12:19:21 crc kubenswrapper[4689]: E1210 12:19:21.990240 4689 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 4f82ac4b2385e0c1b06cad2feac6378b757057e4f66d5dd59abb89051626b848 is running failed: container process not found" containerID="4f82ac4b2385e0c1b06cad2feac6378b757057e4f66d5dd59abb89051626b848" cmd=["grpc_health_probe","-addr=:50051"] Dec 10 12:19:21 crc kubenswrapper[4689]: E1210 12:19:21.991029 4689 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 4f82ac4b2385e0c1b06cad2feac6378b757057e4f66d5dd59abb89051626b848 is running failed: container process not found" containerID="4f82ac4b2385e0c1b06cad2feac6378b757057e4f66d5dd59abb89051626b848" cmd=["grpc_health_probe","-addr=:50051"] Dec 10 12:19:21 crc kubenswrapper[4689]: E1210 
12:19:21.991107 4689 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 4f82ac4b2385e0c1b06cad2feac6378b757057e4f66d5dd59abb89051626b848 is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/community-operators-p7d6k" podUID="e6304a1d-54e7-4f3b-878d-456b88936891" containerName="registry-server" Dec 10 12:19:24 crc kubenswrapper[4689]: I1210 12:19:24.305663 4689 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-p7d6k" Dec 10 12:19:24 crc kubenswrapper[4689]: I1210 12:19:24.349902 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e6304a1d-54e7-4f3b-878d-456b88936891-utilities\") pod \"e6304a1d-54e7-4f3b-878d-456b88936891\" (UID: \"e6304a1d-54e7-4f3b-878d-456b88936891\") " Dec 10 12:19:24 crc kubenswrapper[4689]: I1210 12:19:24.349963 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e6304a1d-54e7-4f3b-878d-456b88936891-catalog-content\") pod \"e6304a1d-54e7-4f3b-878d-456b88936891\" (UID: \"e6304a1d-54e7-4f3b-878d-456b88936891\") " Dec 10 12:19:24 crc kubenswrapper[4689]: I1210 12:19:24.350028 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t2xk6\" (UniqueName: \"kubernetes.io/projected/e6304a1d-54e7-4f3b-878d-456b88936891-kube-api-access-t2xk6\") pod \"e6304a1d-54e7-4f3b-878d-456b88936891\" (UID: \"e6304a1d-54e7-4f3b-878d-456b88936891\") " Dec 10 12:19:24 crc kubenswrapper[4689]: I1210 12:19:24.357105 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e6304a1d-54e7-4f3b-878d-456b88936891-kube-api-access-t2xk6" (OuterVolumeSpecName: "kube-api-access-t2xk6") pod "e6304a1d-54e7-4f3b-878d-456b88936891" (UID: "e6304a1d-54e7-4f3b-878d-456b88936891"). InnerVolumeSpecName "kube-api-access-t2xk6". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 12:19:24 crc kubenswrapper[4689]: I1210 12:19:24.360275 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e6304a1d-54e7-4f3b-878d-456b88936891-utilities" (OuterVolumeSpecName: "utilities") pod "e6304a1d-54e7-4f3b-878d-456b88936891" (UID: "e6304a1d-54e7-4f3b-878d-456b88936891"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 10 12:19:24 crc kubenswrapper[4689]: I1210 12:19:24.433204 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e6304a1d-54e7-4f3b-878d-456b88936891-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e6304a1d-54e7-4f3b-878d-456b88936891" (UID: "e6304a1d-54e7-4f3b-878d-456b88936891"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 10 12:19:24 crc kubenswrapper[4689]: I1210 12:19:24.452472 4689 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e6304a1d-54e7-4f3b-878d-456b88936891-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 10 12:19:24 crc kubenswrapper[4689]: I1210 12:19:24.452509 4689 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t2xk6\" (UniqueName: \"kubernetes.io/projected/e6304a1d-54e7-4f3b-878d-456b88936891-kube-api-access-t2xk6\") on node \"crc\" DevicePath \"\"" Dec 10 12:19:24 crc kubenswrapper[4689]: I1210 12:19:24.452525 4689 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e6304a1d-54e7-4f3b-878d-456b88936891-utilities\") on node \"crc\" DevicePath \"\"" Dec 10 12:19:24 crc kubenswrapper[4689]: I1210 12:19:24.665778 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-p7d6k" event={"ID":"e6304a1d-54e7-4f3b-878d-456b88936891","Type":"ContainerDied","Data":"d26f31aa876ca991b82606f969cbc83aad05725b62f94c07406f7648cdc77aa9"} Dec 10 12:19:24 crc kubenswrapper[4689]: I1210 12:19:24.666088 4689 scope.go:117] "RemoveContainer" containerID="4f82ac4b2385e0c1b06cad2feac6378b757057e4f66d5dd59abb89051626b848" Dec 10 12:19:24 crc kubenswrapper[4689]: I1210 12:19:24.665847 4689 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-p7d6k" Dec 10 12:19:24 crc kubenswrapper[4689]: I1210 12:19:24.671461 4689 generic.go:334] "Generic (PLEG): container finished" podID="fd6edc95-9b5d-4438-b427-fd07f62090b7" containerID="d78aea8e6fa05933eb70508746afb0a58e8832ae9c7035ce95b75c8184a32821" exitCode=0 Dec 10 12:19:24 crc kubenswrapper[4689]: I1210 12:19:24.671549 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-727gf" event={"ID":"fd6edc95-9b5d-4438-b427-fd07f62090b7","Type":"ContainerDied","Data":"d78aea8e6fa05933eb70508746afb0a58e8832ae9c7035ce95b75c8184a32821"} Dec 10 12:19:24 crc kubenswrapper[4689]: I1210 12:19:24.678766 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9hpt8" event={"ID":"2ebd1829-eb7a-4128-ab9d-79a1f75fa39e","Type":"ContainerStarted","Data":"c5d4f47d0e6bd36cf20e82d2a1314d2a24cf791894f3d04b5a6e9ae0ee9ef414"} Dec 10 12:19:24 crc kubenswrapper[4689]: I1210 12:19:24.681782 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-85rm9" event={"ID":"2d87c861-5bb2-4f79-8959-b2cebc28156d","Type":"ContainerStarted","Data":"4a29722948055d63dab5417640525cc61e022fe47c9d76cae94afadbf84565c1"} Dec 10 12:19:24 crc kubenswrapper[4689]: I1210 12:19:24.683833 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-db6zk" event={"ID":"a41ebdcd-910f-4669-992d-296e1a92162b","Type":"ContainerStarted","Data":"8aa9c40c0f7115c60e594bf12aea8548002c497e01622aaaefef974497c74a95"} Dec 10 12:19:24 crc kubenswrapper[4689]: I1210 12:19:24.685377 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-28k8w" event={"ID":"f0bf5778-7d94-4e68-8e6f-308482f98351","Type":"ContainerStarted","Data":"8dab37f45edfb8a50ef1bce5b26807b179dafcdd2679b30b0e2fcecb4342d9ea"} Dec 10 12:19:24 crc kubenswrapper[4689]: I1210 12:19:24.690238 4689 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-marketplace/community-operators-rkh5c" event={"ID":"d51b889b-7485-4c32-84de-3ddfd7ce23e9","Type":"ContainerStarted","Data":"0dcfa0584d0a814175c312f2a9bfb894ef7e593663a4490743fca644d604dd0a"} Dec 10 12:19:24 crc kubenswrapper[4689]: I1210 12:19:24.741312 4689 scope.go:117] "RemoveContainer" containerID="e42dd1697bacc59c500a2df59bbeb6a6cb1ff0b409c38ec038f107e4037ed7ae" Dec 10 12:19:24 crc kubenswrapper[4689]: I1210 12:19:24.771619 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-85rm9" podStartSLOduration=3.413483094 podStartE2EDuration="1m23.771599964s" podCreationTimestamp="2025-12-10 12:18:01 +0000 UTC" firstStartedPulling="2025-12-10 12:18:03.705813594 +0000 UTC m=+151.493894732" lastFinishedPulling="2025-12-10 12:19:24.063930464 +0000 UTC m=+231.852011602" observedRunningTime="2025-12-10 12:19:24.767467215 +0000 UTC m=+232.555548353" watchObservedRunningTime="2025-12-10 12:19:24.771599964 +0000 UTC m=+232.559681102" Dec 10 12:19:24 crc kubenswrapper[4689]: I1210 12:19:24.772290 4689 scope.go:117] "RemoveContainer" containerID="c7110b069837ad3480e89c64c7d9356e712a33d081c234ec5a5b0405961281d9" Dec 10 12:19:24 crc kubenswrapper[4689]: I1210 12:19:24.782898 4689 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-p7d6k"] Dec 10 12:19:24 crc kubenswrapper[4689]: I1210 12:19:24.785690 4689 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-p7d6k"] Dec 10 12:19:25 crc kubenswrapper[4689]: I1210 12:19:25.700005 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-727gf" event={"ID":"fd6edc95-9b5d-4438-b427-fd07f62090b7","Type":"ContainerStarted","Data":"4cf68a0dcaf92c2a219f9bbf7eeef07a8476563e9ca04275674db3a7d3e2284d"} Dec 10 12:19:25 crc kubenswrapper[4689]: I1210 12:19:25.701888 4689 generic.go:334] "Generic (PLEG): container finished" podID="2ebd1829-eb7a-4128-ab9d-79a1f75fa39e" containerID="c5d4f47d0e6bd36cf20e82d2a1314d2a24cf791894f3d04b5a6e9ae0ee9ef414" exitCode=0 Dec 10 12:19:25 crc kubenswrapper[4689]: I1210 12:19:25.701955 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9hpt8" event={"ID":"2ebd1829-eb7a-4128-ab9d-79a1f75fa39e","Type":"ContainerDied","Data":"c5d4f47d0e6bd36cf20e82d2a1314d2a24cf791894f3d04b5a6e9ae0ee9ef414"} Dec 10 12:19:25 crc kubenswrapper[4689]: I1210 12:19:25.703543 4689 generic.go:334] "Generic (PLEG): container finished" podID="f0bf5778-7d94-4e68-8e6f-308482f98351" containerID="8dab37f45edfb8a50ef1bce5b26807b179dafcdd2679b30b0e2fcecb4342d9ea" exitCode=0 Dec 10 12:19:25 crc kubenswrapper[4689]: I1210 12:19:25.703611 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-28k8w" event={"ID":"f0bf5778-7d94-4e68-8e6f-308482f98351","Type":"ContainerDied","Data":"8dab37f45edfb8a50ef1bce5b26807b179dafcdd2679b30b0e2fcecb4342d9ea"} Dec 10 12:19:25 crc kubenswrapper[4689]: I1210 12:19:25.707557 4689 generic.go:334] "Generic (PLEG): container finished" podID="d51b889b-7485-4c32-84de-3ddfd7ce23e9" containerID="0dcfa0584d0a814175c312f2a9bfb894ef7e593663a4490743fca644d604dd0a" exitCode=0 Dec 10 12:19:25 crc kubenswrapper[4689]: I1210 12:19:25.707618 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rkh5c" 
event={"ID":"d51b889b-7485-4c32-84de-3ddfd7ce23e9","Type":"ContainerDied","Data":"0dcfa0584d0a814175c312f2a9bfb894ef7e593663a4490743fca644d604dd0a"} Dec 10 12:19:25 crc kubenswrapper[4689]: I1210 12:19:25.724552 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-727gf" podStartSLOduration=2.288300822 podStartE2EDuration="1m22.724537337s" podCreationTimestamp="2025-12-10 12:18:03 +0000 UTC" firstStartedPulling="2025-12-10 12:18:04.770402803 +0000 UTC m=+152.558483941" lastFinishedPulling="2025-12-10 12:19:25.206639308 +0000 UTC m=+232.994720456" observedRunningTime="2025-12-10 12:19:25.721449857 +0000 UTC m=+233.509530995" watchObservedRunningTime="2025-12-10 12:19:25.724537337 +0000 UTC m=+233.512618475" Dec 10 12:19:26 crc kubenswrapper[4689]: I1210 12:19:26.504670 4689 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e6304a1d-54e7-4f3b-878d-456b88936891" path="/var/lib/kubelet/pods/e6304a1d-54e7-4f3b-878d-456b88936891/volumes" Dec 10 12:19:26 crc kubenswrapper[4689]: I1210 12:19:26.716832 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9hpt8" event={"ID":"2ebd1829-eb7a-4128-ab9d-79a1f75fa39e","Type":"ContainerStarted","Data":"64ed0b2ce9390d1b998e0fa118980bd26e4b95e7b75f6568e4e4b85d982863c8"} Dec 10 12:19:26 crc kubenswrapper[4689]: I1210 12:19:26.718396 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rkh5c" event={"ID":"d51b889b-7485-4c32-84de-3ddfd7ce23e9","Type":"ContainerStarted","Data":"238a193f555fb45659447c913c44b694040adb4977f7082de9a50e6f441f97fd"} Dec 10 12:19:26 crc kubenswrapper[4689]: I1210 12:19:26.736330 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-9hpt8" podStartSLOduration=2.4733838280000002 podStartE2EDuration="1m22.736316646s" podCreationTimestamp="2025-12-10 12:18:04 +0000 UTC" firstStartedPulling="2025-12-10 12:18:05.838067148 +0000 UTC m=+153.626148286" lastFinishedPulling="2025-12-10 12:19:26.100999966 +0000 UTC m=+233.889081104" observedRunningTime="2025-12-10 12:19:26.733497092 +0000 UTC m=+234.521578250" watchObservedRunningTime="2025-12-10 12:19:26.736316646 +0000 UTC m=+234.524397784" Dec 10 12:19:26 crc kubenswrapper[4689]: I1210 12:19:26.754209 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-rkh5c" podStartSLOduration=3.11402183 podStartE2EDuration="1m25.754187415s" podCreationTimestamp="2025-12-10 12:18:01 +0000 UTC" firstStartedPulling="2025-12-10 12:18:03.727705494 +0000 UTC m=+151.515786632" lastFinishedPulling="2025-12-10 12:19:26.367871039 +0000 UTC m=+234.155952217" observedRunningTime="2025-12-10 12:19:26.752196982 +0000 UTC m=+234.540278130" watchObservedRunningTime="2025-12-10 12:19:26.754187415 +0000 UTC m=+234.542268563" Dec 10 12:19:27 crc kubenswrapper[4689]: I1210 12:19:27.306747 4689 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Dec 10 12:19:27 crc kubenswrapper[4689]: E1210 12:19:27.307020 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="67253606-6335-4120-a1ac-557e53d8c470" containerName="pruner" Dec 10 12:19:27 crc kubenswrapper[4689]: I1210 12:19:27.307034 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="67253606-6335-4120-a1ac-557e53d8c470" containerName="pruner" Dec 10 12:19:27 crc kubenswrapper[4689]: 
E1210 12:19:27.307051 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e6304a1d-54e7-4f3b-878d-456b88936891" containerName="registry-server" Dec 10 12:19:27 crc kubenswrapper[4689]: I1210 12:19:27.307060 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="e6304a1d-54e7-4f3b-878d-456b88936891" containerName="registry-server" Dec 10 12:19:27 crc kubenswrapper[4689]: E1210 12:19:27.307078 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e6304a1d-54e7-4f3b-878d-456b88936891" containerName="extract-utilities" Dec 10 12:19:27 crc kubenswrapper[4689]: I1210 12:19:27.307087 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="e6304a1d-54e7-4f3b-878d-456b88936891" containerName="extract-utilities" Dec 10 12:19:27 crc kubenswrapper[4689]: E1210 12:19:27.307102 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e6304a1d-54e7-4f3b-878d-456b88936891" containerName="extract-content" Dec 10 12:19:27 crc kubenswrapper[4689]: I1210 12:19:27.307110 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="e6304a1d-54e7-4f3b-878d-456b88936891" containerName="extract-content" Dec 10 12:19:27 crc kubenswrapper[4689]: I1210 12:19:27.307224 4689 memory_manager.go:354] "RemoveStaleState removing state" podUID="e6304a1d-54e7-4f3b-878d-456b88936891" containerName="registry-server" Dec 10 12:19:27 crc kubenswrapper[4689]: I1210 12:19:27.307236 4689 memory_manager.go:354] "RemoveStaleState removing state" podUID="67253606-6335-4120-a1ac-557e53d8c470" containerName="pruner" Dec 10 12:19:27 crc kubenswrapper[4689]: I1210 12:19:27.307659 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 10 12:19:27 crc kubenswrapper[4689]: I1210 12:19:27.308074 4689 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Dec 10 12:19:27 crc kubenswrapper[4689]: I1210 12:19:27.308538 4689 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" containerID="cri-o://73fb5dc9c58a2a2027866853aecf404bcc2babdb9e5bbd3cf0efcb16f38ba351" gracePeriod=15 Dec 10 12:19:27 crc kubenswrapper[4689]: I1210 12:19:27.308575 4689 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://e167a00c3a4e40154ca58094cba41a896104fd81044158589ed4d7a06dba9e7d" gracePeriod=15 Dec 10 12:19:27 crc kubenswrapper[4689]: I1210 12:19:27.308610 4689 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://63c6b92fa349dafbaaa4e2700676f8c1c53627fc703dba25853c7457bf7eb439" gracePeriod=15 Dec 10 12:19:27 crc kubenswrapper[4689]: I1210 12:19:27.308555 4689 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" containerID="cri-o://5d3818f3ff6d23a4b5f19b82c3ce7c6a9a0275f981966b78d0f1d5977acfc5da" gracePeriod=15 Dec 10 12:19:27 crc kubenswrapper[4689]: I1210 12:19:27.308583 4689 kuberuntime_container.go:808] "Killing 
container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" containerID="cri-o://394f62bb6c4fa6d9b63f8404087ff96594e2e38240c6c7886ad71fbb7863681f" gracePeriod=15 Dec 10 12:19:27 crc kubenswrapper[4689]: I1210 12:19:27.309152 4689 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Dec 10 12:19:27 crc kubenswrapper[4689]: E1210 12:19:27.309313 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Dec 10 12:19:27 crc kubenswrapper[4689]: I1210 12:19:27.309332 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Dec 10 12:19:27 crc kubenswrapper[4689]: E1210 12:19:27.309346 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Dec 10 12:19:27 crc kubenswrapper[4689]: I1210 12:19:27.309354 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Dec 10 12:19:27 crc kubenswrapper[4689]: E1210 12:19:27.309364 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Dec 10 12:19:27 crc kubenswrapper[4689]: I1210 12:19:27.309372 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Dec 10 12:19:27 crc kubenswrapper[4689]: E1210 12:19:27.309380 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Dec 10 12:19:27 crc kubenswrapper[4689]: I1210 12:19:27.309387 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Dec 10 12:19:27 crc kubenswrapper[4689]: E1210 12:19:27.309401 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Dec 10 12:19:27 crc kubenswrapper[4689]: I1210 12:19:27.309408 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Dec 10 12:19:27 crc kubenswrapper[4689]: E1210 12:19:27.309419 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Dec 10 12:19:27 crc kubenswrapper[4689]: I1210 12:19:27.309428 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Dec 10 12:19:27 crc kubenswrapper[4689]: E1210 12:19:27.309442 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Dec 10 12:19:27 crc kubenswrapper[4689]: I1210 12:19:27.309450 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Dec 10 12:19:27 crc kubenswrapper[4689]: I1210 12:19:27.309560 4689 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Dec 10 12:19:27 crc 
kubenswrapper[4689]: I1210 12:19:27.309575 4689 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Dec 10 12:19:27 crc kubenswrapper[4689]: I1210 12:19:27.309586 4689 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Dec 10 12:19:27 crc kubenswrapper[4689]: I1210 12:19:27.309594 4689 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Dec 10 12:19:27 crc kubenswrapper[4689]: I1210 12:19:27.309603 4689 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Dec 10 12:19:27 crc kubenswrapper[4689]: I1210 12:19:27.309614 4689 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Dec 10 12:19:27 crc kubenswrapper[4689]: E1210 12:19:27.373325 4689 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/events\": dial tcp 38.102.83.163:6443: connect: connection refused" event="&Event{ObjectMeta:{redhat-operators-28k8w.187fd9e786aab6bd openshift-marketplace 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-marketplace,Name:redhat-operators-28k8w,UID:f0bf5778-7d94-4e68-8e6f-308482f98351,APIVersion:v1,ResourceVersion:28461,FieldPath:spec.containers{registry-server},},Reason:Created,Message:Created container registry-server,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-10 12:19:27.372531389 +0000 UTC m=+235.160612537,LastTimestamp:2025-12-10 12:19:27.372531389 +0000 UTC m=+235.160612537,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 10 12:19:27 crc kubenswrapper[4689]: I1210 12:19:27.396538 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 10 12:19:27 crc kubenswrapper[4689]: I1210 12:19:27.396595 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 10 12:19:27 crc kubenswrapper[4689]: I1210 12:19:27.396647 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 10 12:19:27 crc kubenswrapper[4689]: I1210 12:19:27.396727 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: 
\"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 10 12:19:27 crc kubenswrapper[4689]: I1210 12:19:27.396750 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 10 12:19:27 crc kubenswrapper[4689]: I1210 12:19:27.396767 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 10 12:19:27 crc kubenswrapper[4689]: I1210 12:19:27.396808 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 10 12:19:27 crc kubenswrapper[4689]: I1210 12:19:27.396827 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 10 12:19:27 crc kubenswrapper[4689]: I1210 12:19:27.497598 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 10 12:19:27 crc kubenswrapper[4689]: I1210 12:19:27.497660 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 10 12:19:27 crc kubenswrapper[4689]: I1210 12:19:27.497687 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 10 12:19:27 crc kubenswrapper[4689]: I1210 12:19:27.497743 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 10 12:19:27 crc kubenswrapper[4689]: I1210 12:19:27.497777 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: 
\"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 10 12:19:27 crc kubenswrapper[4689]: I1210 12:19:27.497795 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 10 12:19:27 crc kubenswrapper[4689]: I1210 12:19:27.497819 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 10 12:19:27 crc kubenswrapper[4689]: I1210 12:19:27.497846 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 10 12:19:27 crc kubenswrapper[4689]: I1210 12:19:27.497922 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 10 12:19:27 crc kubenswrapper[4689]: I1210 12:19:27.498379 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 10 12:19:27 crc kubenswrapper[4689]: I1210 12:19:27.498424 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 10 12:19:27 crc kubenswrapper[4689]: I1210 12:19:27.498475 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 10 12:19:27 crc kubenswrapper[4689]: I1210 12:19:27.498502 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 10 12:19:27 crc kubenswrapper[4689]: I1210 12:19:27.498527 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: 
\"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 10 12:19:27 crc kubenswrapper[4689]: I1210 12:19:27.498555 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 10 12:19:27 crc kubenswrapper[4689]: I1210 12:19:27.498597 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 10 12:19:27 crc kubenswrapper[4689]: I1210 12:19:27.724818 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-28k8w" event={"ID":"f0bf5778-7d94-4e68-8e6f-308482f98351","Type":"ContainerStarted","Data":"f0dcf96fdf98490b58c1555682c21f58b18746242f0394c9ec98be6fa10c2ee1"} Dec 10 12:19:27 crc kubenswrapper[4689]: I1210 12:19:27.725469 4689 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.163:6443: connect: connection refused" Dec 10 12:19:27 crc kubenswrapper[4689]: I1210 12:19:27.725661 4689 status_manager.go:851] "Failed to get status for pod" podUID="f0bf5778-7d94-4e68-8e6f-308482f98351" pod="openshift-marketplace/redhat-operators-28k8w" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-28k8w\": dial tcp 38.102.83.163:6443: connect: connection refused" Dec 10 12:19:27 crc kubenswrapper[4689]: I1210 12:19:27.726778 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Dec 10 12:19:27 crc kubenswrapper[4689]: I1210 12:19:27.727612 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Dec 10 12:19:27 crc kubenswrapper[4689]: I1210 12:19:27.728193 4689 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="5d3818f3ff6d23a4b5f19b82c3ce7c6a9a0275f981966b78d0f1d5977acfc5da" exitCode=0 Dec 10 12:19:27 crc kubenswrapper[4689]: I1210 12:19:27.728215 4689 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="63c6b92fa349dafbaaa4e2700676f8c1c53627fc703dba25853c7457bf7eb439" exitCode=0 Dec 10 12:19:27 crc kubenswrapper[4689]: I1210 12:19:27.728222 4689 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="e167a00c3a4e40154ca58094cba41a896104fd81044158589ed4d7a06dba9e7d" exitCode=0 Dec 10 12:19:27 crc kubenswrapper[4689]: I1210 12:19:27.728228 4689 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="394f62bb6c4fa6d9b63f8404087ff96594e2e38240c6c7886ad71fbb7863681f" exitCode=2 Dec 10 12:19:27 crc kubenswrapper[4689]: I1210 12:19:27.728252 4689 scope.go:117] 
"RemoveContainer" containerID="cc1b41d65298f8986fa477bbe4462be91f61b46be4420fac49fe9b1cff91d4d3" Dec 10 12:19:27 crc kubenswrapper[4689]: I1210 12:19:27.729515 4689 generic.go:334] "Generic (PLEG): container finished" podID="ba69b6ca-5550-41d8-be54-d86e80a6aea6" containerID="9ada8ddd0edee6b57de988ac76cc9f5269e5ea15ab1b584bd6f51705d72cfe78" exitCode=0 Dec 10 12:19:27 crc kubenswrapper[4689]: I1210 12:19:27.729594 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"ba69b6ca-5550-41d8-be54-d86e80a6aea6","Type":"ContainerDied","Data":"9ada8ddd0edee6b57de988ac76cc9f5269e5ea15ab1b584bd6f51705d72cfe78"} Dec 10 12:19:27 crc kubenswrapper[4689]: I1210 12:19:27.730064 4689 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.163:6443: connect: connection refused" Dec 10 12:19:27 crc kubenswrapper[4689]: I1210 12:19:27.730477 4689 status_manager.go:851] "Failed to get status for pod" podUID="ba69b6ca-5550-41d8-be54-d86e80a6aea6" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.163:6443: connect: connection refused" Dec 10 12:19:27 crc kubenswrapper[4689]: I1210 12:19:27.730796 4689 status_manager.go:851] "Failed to get status for pod" podUID="f0bf5778-7d94-4e68-8e6f-308482f98351" pod="openshift-marketplace/redhat-operators-28k8w" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-28k8w\": dial tcp 38.102.83.163:6443: connect: connection refused" Dec 10 12:19:28 crc kubenswrapper[4689]: I1210 12:19:28.739312 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Dec 10 12:19:28 crc kubenswrapper[4689]: I1210 12:19:28.977301 4689 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Dec 10 12:19:28 crc kubenswrapper[4689]: I1210 12:19:28.977993 4689 status_manager.go:851] "Failed to get status for pod" podUID="ba69b6ca-5550-41d8-be54-d86e80a6aea6" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.163:6443: connect: connection refused" Dec 10 12:19:28 crc kubenswrapper[4689]: I1210 12:19:28.978150 4689 status_manager.go:851] "Failed to get status for pod" podUID="f0bf5778-7d94-4e68-8e6f-308482f98351" pod="openshift-marketplace/redhat-operators-28k8w" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-28k8w\": dial tcp 38.102.83.163:6443: connect: connection refused" Dec 10 12:19:29 crc kubenswrapper[4689]: I1210 12:19:29.117494 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ba69b6ca-5550-41d8-be54-d86e80a6aea6-kube-api-access\") pod \"ba69b6ca-5550-41d8-be54-d86e80a6aea6\" (UID: \"ba69b6ca-5550-41d8-be54-d86e80a6aea6\") " Dec 10 12:19:29 crc kubenswrapper[4689]: I1210 12:19:29.117602 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/ba69b6ca-5550-41d8-be54-d86e80a6aea6-var-lock\") pod \"ba69b6ca-5550-41d8-be54-d86e80a6aea6\" (UID: \"ba69b6ca-5550-41d8-be54-d86e80a6aea6\") " Dec 10 12:19:29 crc kubenswrapper[4689]: I1210 12:19:29.117621 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ba69b6ca-5550-41d8-be54-d86e80a6aea6-kubelet-dir\") pod \"ba69b6ca-5550-41d8-be54-d86e80a6aea6\" (UID: \"ba69b6ca-5550-41d8-be54-d86e80a6aea6\") " Dec 10 12:19:29 crc kubenswrapper[4689]: I1210 12:19:29.117847 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ba69b6ca-5550-41d8-be54-d86e80a6aea6-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "ba69b6ca-5550-41d8-be54-d86e80a6aea6" (UID: "ba69b6ca-5550-41d8-be54-d86e80a6aea6"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 10 12:19:29 crc kubenswrapper[4689]: I1210 12:19:29.118000 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ba69b6ca-5550-41d8-be54-d86e80a6aea6-var-lock" (OuterVolumeSpecName: "var-lock") pod "ba69b6ca-5550-41d8-be54-d86e80a6aea6" (UID: "ba69b6ca-5550-41d8-be54-d86e80a6aea6"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 10 12:19:29 crc kubenswrapper[4689]: I1210 12:19:29.125247 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ba69b6ca-5550-41d8-be54-d86e80a6aea6-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "ba69b6ca-5550-41d8-be54-d86e80a6aea6" (UID: "ba69b6ca-5550-41d8-be54-d86e80a6aea6"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 12:19:29 crc kubenswrapper[4689]: I1210 12:19:29.234783 4689 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/ba69b6ca-5550-41d8-be54-d86e80a6aea6-var-lock\") on node \"crc\" DevicePath \"\"" Dec 10 12:19:29 crc kubenswrapper[4689]: I1210 12:19:29.235143 4689 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ba69b6ca-5550-41d8-be54-d86e80a6aea6-kubelet-dir\") on node \"crc\" DevicePath \"\"" Dec 10 12:19:29 crc kubenswrapper[4689]: I1210 12:19:29.235231 4689 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ba69b6ca-5550-41d8-be54-d86e80a6aea6-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 10 12:19:29 crc kubenswrapper[4689]: E1210 12:19:29.627608 4689 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.163:6443: connect: connection refused" Dec 10 12:19:29 crc kubenswrapper[4689]: E1210 12:19:29.628708 4689 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.163:6443: connect: connection refused" Dec 10 12:19:29 crc kubenswrapper[4689]: E1210 12:19:29.629196 4689 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.163:6443: connect: connection refused" Dec 10 12:19:29 crc kubenswrapper[4689]: E1210 12:19:29.629790 4689 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.163:6443: connect: connection refused" Dec 10 12:19:29 crc kubenswrapper[4689]: E1210 12:19:29.630203 4689 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.163:6443: connect: connection refused" Dec 10 12:19:29 crc kubenswrapper[4689]: I1210 12:19:29.630241 4689 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Dec 10 12:19:29 crc kubenswrapper[4689]: E1210 12:19:29.630567 4689 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.163:6443: connect: connection refused" interval="200ms" Dec 10 12:19:29 crc kubenswrapper[4689]: I1210 12:19:29.709642 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Dec 10 12:19:29 crc kubenswrapper[4689]: I1210 12:19:29.710597 4689 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 10 12:19:29 crc kubenswrapper[4689]: I1210 12:19:29.711418 4689 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.163:6443: connect: connection refused" Dec 10 12:19:29 crc kubenswrapper[4689]: I1210 12:19:29.711958 4689 status_manager.go:851] "Failed to get status for pod" podUID="ba69b6ca-5550-41d8-be54-d86e80a6aea6" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.163:6443: connect: connection refused" Dec 10 12:19:29 crc kubenswrapper[4689]: I1210 12:19:29.712641 4689 status_manager.go:851] "Failed to get status for pod" podUID="f0bf5778-7d94-4e68-8e6f-308482f98351" pod="openshift-marketplace/redhat-operators-28k8w" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-28k8w\": dial tcp 38.102.83.163:6443: connect: connection refused" Dec 10 12:19:29 crc kubenswrapper[4689]: I1210 12:19:29.742315 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Dec 10 12:19:29 crc kubenswrapper[4689]: I1210 12:19:29.742392 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Dec 10 12:19:29 crc kubenswrapper[4689]: I1210 12:19:29.742405 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 10 12:19:29 crc kubenswrapper[4689]: I1210 12:19:29.742458 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Dec 10 12:19:29 crc kubenswrapper[4689]: I1210 12:19:29.742470 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "cert-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 10 12:19:29 crc kubenswrapper[4689]: I1210 12:19:29.742613 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "audit-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 10 12:19:29 crc kubenswrapper[4689]: I1210 12:19:29.742880 4689 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") on node \"crc\" DevicePath \"\"" Dec 10 12:19:29 crc kubenswrapper[4689]: I1210 12:19:29.742901 4689 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") on node \"crc\" DevicePath \"\"" Dec 10 12:19:29 crc kubenswrapper[4689]: I1210 12:19:29.742919 4689 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") on node \"crc\" DevicePath \"\"" Dec 10 12:19:29 crc kubenswrapper[4689]: I1210 12:19:29.748790 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Dec 10 12:19:29 crc kubenswrapper[4689]: I1210 12:19:29.749811 4689 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="73fb5dc9c58a2a2027866853aecf404bcc2babdb9e5bbd3cf0efcb16f38ba351" exitCode=0 Dec 10 12:19:29 crc kubenswrapper[4689]: I1210 12:19:29.749922 4689 scope.go:117] "RemoveContainer" containerID="5d3818f3ff6d23a4b5f19b82c3ce7c6a9a0275f981966b78d0f1d5977acfc5da" Dec 10 12:19:29 crc kubenswrapper[4689]: I1210 12:19:29.750075 4689 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 10 12:19:29 crc kubenswrapper[4689]: I1210 12:19:29.752562 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"ba69b6ca-5550-41d8-be54-d86e80a6aea6","Type":"ContainerDied","Data":"09e79805cbc85365b53a1a06952880d62d36305923188632579c4465e23b8cf7"} Dec 10 12:19:29 crc kubenswrapper[4689]: I1210 12:19:29.752591 4689 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="09e79805cbc85365b53a1a06952880d62d36305923188632579c4465e23b8cf7" Dec 10 12:19:29 crc kubenswrapper[4689]: I1210 12:19:29.752669 4689 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Dec 10 12:19:29 crc kubenswrapper[4689]: I1210 12:19:29.767178 4689 scope.go:117] "RemoveContainer" containerID="63c6b92fa349dafbaaa4e2700676f8c1c53627fc703dba25853c7457bf7eb439" Dec 10 12:19:29 crc kubenswrapper[4689]: I1210 12:19:29.782212 4689 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.163:6443: connect: connection refused" Dec 10 12:19:29 crc kubenswrapper[4689]: I1210 12:19:29.782426 4689 status_manager.go:851] "Failed to get status for pod" podUID="ba69b6ca-5550-41d8-be54-d86e80a6aea6" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.163:6443: connect: connection refused" Dec 10 12:19:29 crc kubenswrapper[4689]: I1210 12:19:29.782717 4689 status_manager.go:851] "Failed to get status for pod" podUID="f0bf5778-7d94-4e68-8e6f-308482f98351" pod="openshift-marketplace/redhat-operators-28k8w" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-28k8w\": dial tcp 38.102.83.163:6443: connect: connection refused" Dec 10 12:19:29 crc kubenswrapper[4689]: I1210 12:19:29.783024 4689 status_manager.go:851] "Failed to get status for pod" podUID="f0bf5778-7d94-4e68-8e6f-308482f98351" pod="openshift-marketplace/redhat-operators-28k8w" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-28k8w\": dial tcp 38.102.83.163:6443: connect: connection refused" Dec 10 12:19:29 crc kubenswrapper[4689]: I1210 12:19:29.783304 4689 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.163:6443: connect: connection refused" Dec 10 12:19:29 crc kubenswrapper[4689]: I1210 12:19:29.783569 4689 status_manager.go:851] "Failed to get status for pod" podUID="ba69b6ca-5550-41d8-be54-d86e80a6aea6" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.163:6443: connect: connection refused" Dec 10 12:19:29 crc kubenswrapper[4689]: I1210 12:19:29.784574 4689 scope.go:117] "RemoveContainer" containerID="e167a00c3a4e40154ca58094cba41a896104fd81044158589ed4d7a06dba9e7d" Dec 10 12:19:29 crc kubenswrapper[4689]: I1210 12:19:29.797935 4689 scope.go:117] "RemoveContainer" containerID="394f62bb6c4fa6d9b63f8404087ff96594e2e38240c6c7886ad71fbb7863681f" Dec 10 12:19:29 crc kubenswrapper[4689]: I1210 12:19:29.812063 4689 scope.go:117] "RemoveContainer" containerID="73fb5dc9c58a2a2027866853aecf404bcc2babdb9e5bbd3cf0efcb16f38ba351" Dec 10 12:19:29 crc kubenswrapper[4689]: I1210 12:19:29.832356 4689 scope.go:117] "RemoveContainer" containerID="a9e2628add435f4ba7d9b92d1f4e9cf619c37e8dc954d6fce13ef72581ae775e" Dec 10 12:19:29 crc kubenswrapper[4689]: E1210 12:19:29.832617 4689 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial 
tcp 38.102.83.163:6443: connect: connection refused" interval="400ms" Dec 10 12:19:29 crc kubenswrapper[4689]: I1210 12:19:29.857794 4689 scope.go:117] "RemoveContainer" containerID="5d3818f3ff6d23a4b5f19b82c3ce7c6a9a0275f981966b78d0f1d5977acfc5da" Dec 10 12:19:29 crc kubenswrapper[4689]: E1210 12:19:29.858363 4689 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5d3818f3ff6d23a4b5f19b82c3ce7c6a9a0275f981966b78d0f1d5977acfc5da\": container with ID starting with 5d3818f3ff6d23a4b5f19b82c3ce7c6a9a0275f981966b78d0f1d5977acfc5da not found: ID does not exist" containerID="5d3818f3ff6d23a4b5f19b82c3ce7c6a9a0275f981966b78d0f1d5977acfc5da" Dec 10 12:19:29 crc kubenswrapper[4689]: I1210 12:19:29.858415 4689 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5d3818f3ff6d23a4b5f19b82c3ce7c6a9a0275f981966b78d0f1d5977acfc5da"} err="failed to get container status \"5d3818f3ff6d23a4b5f19b82c3ce7c6a9a0275f981966b78d0f1d5977acfc5da\": rpc error: code = NotFound desc = could not find container \"5d3818f3ff6d23a4b5f19b82c3ce7c6a9a0275f981966b78d0f1d5977acfc5da\": container with ID starting with 5d3818f3ff6d23a4b5f19b82c3ce7c6a9a0275f981966b78d0f1d5977acfc5da not found: ID does not exist" Dec 10 12:19:29 crc kubenswrapper[4689]: I1210 12:19:29.858447 4689 scope.go:117] "RemoveContainer" containerID="63c6b92fa349dafbaaa4e2700676f8c1c53627fc703dba25853c7457bf7eb439" Dec 10 12:19:29 crc kubenswrapper[4689]: E1210 12:19:29.858863 4689 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"63c6b92fa349dafbaaa4e2700676f8c1c53627fc703dba25853c7457bf7eb439\": container with ID starting with 63c6b92fa349dafbaaa4e2700676f8c1c53627fc703dba25853c7457bf7eb439 not found: ID does not exist" containerID="63c6b92fa349dafbaaa4e2700676f8c1c53627fc703dba25853c7457bf7eb439" Dec 10 12:19:29 crc kubenswrapper[4689]: I1210 12:19:29.858930 4689 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"63c6b92fa349dafbaaa4e2700676f8c1c53627fc703dba25853c7457bf7eb439"} err="failed to get container status \"63c6b92fa349dafbaaa4e2700676f8c1c53627fc703dba25853c7457bf7eb439\": rpc error: code = NotFound desc = could not find container \"63c6b92fa349dafbaaa4e2700676f8c1c53627fc703dba25853c7457bf7eb439\": container with ID starting with 63c6b92fa349dafbaaa4e2700676f8c1c53627fc703dba25853c7457bf7eb439 not found: ID does not exist" Dec 10 12:19:29 crc kubenswrapper[4689]: I1210 12:19:29.858960 4689 scope.go:117] "RemoveContainer" containerID="e167a00c3a4e40154ca58094cba41a896104fd81044158589ed4d7a06dba9e7d" Dec 10 12:19:29 crc kubenswrapper[4689]: E1210 12:19:29.859330 4689 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e167a00c3a4e40154ca58094cba41a896104fd81044158589ed4d7a06dba9e7d\": container with ID starting with e167a00c3a4e40154ca58094cba41a896104fd81044158589ed4d7a06dba9e7d not found: ID does not exist" containerID="e167a00c3a4e40154ca58094cba41a896104fd81044158589ed4d7a06dba9e7d" Dec 10 12:19:29 crc kubenswrapper[4689]: I1210 12:19:29.859367 4689 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e167a00c3a4e40154ca58094cba41a896104fd81044158589ed4d7a06dba9e7d"} err="failed to get container status \"e167a00c3a4e40154ca58094cba41a896104fd81044158589ed4d7a06dba9e7d\": rpc error: code = NotFound 
desc = could not find container \"e167a00c3a4e40154ca58094cba41a896104fd81044158589ed4d7a06dba9e7d\": container with ID starting with e167a00c3a4e40154ca58094cba41a896104fd81044158589ed4d7a06dba9e7d not found: ID does not exist" Dec 10 12:19:29 crc kubenswrapper[4689]: I1210 12:19:29.859387 4689 scope.go:117] "RemoveContainer" containerID="394f62bb6c4fa6d9b63f8404087ff96594e2e38240c6c7886ad71fbb7863681f" Dec 10 12:19:29 crc kubenswrapper[4689]: E1210 12:19:29.859745 4689 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"394f62bb6c4fa6d9b63f8404087ff96594e2e38240c6c7886ad71fbb7863681f\": container with ID starting with 394f62bb6c4fa6d9b63f8404087ff96594e2e38240c6c7886ad71fbb7863681f not found: ID does not exist" containerID="394f62bb6c4fa6d9b63f8404087ff96594e2e38240c6c7886ad71fbb7863681f" Dec 10 12:19:29 crc kubenswrapper[4689]: I1210 12:19:29.859788 4689 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"394f62bb6c4fa6d9b63f8404087ff96594e2e38240c6c7886ad71fbb7863681f"} err="failed to get container status \"394f62bb6c4fa6d9b63f8404087ff96594e2e38240c6c7886ad71fbb7863681f\": rpc error: code = NotFound desc = could not find container \"394f62bb6c4fa6d9b63f8404087ff96594e2e38240c6c7886ad71fbb7863681f\": container with ID starting with 394f62bb6c4fa6d9b63f8404087ff96594e2e38240c6c7886ad71fbb7863681f not found: ID does not exist" Dec 10 12:19:29 crc kubenswrapper[4689]: I1210 12:19:29.859815 4689 scope.go:117] "RemoveContainer" containerID="73fb5dc9c58a2a2027866853aecf404bcc2babdb9e5bbd3cf0efcb16f38ba351" Dec 10 12:19:29 crc kubenswrapper[4689]: E1210 12:19:29.860144 4689 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"73fb5dc9c58a2a2027866853aecf404bcc2babdb9e5bbd3cf0efcb16f38ba351\": container with ID starting with 73fb5dc9c58a2a2027866853aecf404bcc2babdb9e5bbd3cf0efcb16f38ba351 not found: ID does not exist" containerID="73fb5dc9c58a2a2027866853aecf404bcc2babdb9e5bbd3cf0efcb16f38ba351" Dec 10 12:19:29 crc kubenswrapper[4689]: I1210 12:19:29.860174 4689 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"73fb5dc9c58a2a2027866853aecf404bcc2babdb9e5bbd3cf0efcb16f38ba351"} err="failed to get container status \"73fb5dc9c58a2a2027866853aecf404bcc2babdb9e5bbd3cf0efcb16f38ba351\": rpc error: code = NotFound desc = could not find container \"73fb5dc9c58a2a2027866853aecf404bcc2babdb9e5bbd3cf0efcb16f38ba351\": container with ID starting with 73fb5dc9c58a2a2027866853aecf404bcc2babdb9e5bbd3cf0efcb16f38ba351 not found: ID does not exist" Dec 10 12:19:29 crc kubenswrapper[4689]: I1210 12:19:29.860194 4689 scope.go:117] "RemoveContainer" containerID="a9e2628add435f4ba7d9b92d1f4e9cf619c37e8dc954d6fce13ef72581ae775e" Dec 10 12:19:29 crc kubenswrapper[4689]: E1210 12:19:29.860415 4689 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a9e2628add435f4ba7d9b92d1f4e9cf619c37e8dc954d6fce13ef72581ae775e\": container with ID starting with a9e2628add435f4ba7d9b92d1f4e9cf619c37e8dc954d6fce13ef72581ae775e not found: ID does not exist" containerID="a9e2628add435f4ba7d9b92d1f4e9cf619c37e8dc954d6fce13ef72581ae775e" Dec 10 12:19:29 crc kubenswrapper[4689]: I1210 12:19:29.860441 4689 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"a9e2628add435f4ba7d9b92d1f4e9cf619c37e8dc954d6fce13ef72581ae775e"} err="failed to get container status \"a9e2628add435f4ba7d9b92d1f4e9cf619c37e8dc954d6fce13ef72581ae775e\": rpc error: code = NotFound desc = could not find container \"a9e2628add435f4ba7d9b92d1f4e9cf619c37e8dc954d6fce13ef72581ae775e\": container with ID starting with a9e2628add435f4ba7d9b92d1f4e9cf619c37e8dc954d6fce13ef72581ae775e not found: ID does not exist" Dec 10 12:19:30 crc kubenswrapper[4689]: E1210 12:19:30.233670 4689 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.163:6443: connect: connection refused" interval="800ms" Dec 10 12:19:30 crc kubenswrapper[4689]: E1210 12:19:30.477474 4689 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/events\": dial tcp 38.102.83.163:6443: connect: connection refused" event="&Event{ObjectMeta:{redhat-operators-28k8w.187fd9e786aab6bd openshift-marketplace 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-marketplace,Name:redhat-operators-28k8w,UID:f0bf5778-7d94-4e68-8e6f-308482f98351,APIVersion:v1,ResourceVersion:28461,FieldPath:spec.containers{registry-server},},Reason:Created,Message:Created container registry-server,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-10 12:19:27.372531389 +0000 UTC m=+235.160612537,LastTimestamp:2025-12-10 12:19:27.372531389 +0000 UTC m=+235.160612537,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 10 12:19:30 crc kubenswrapper[4689]: I1210 12:19:30.504844 4689 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4b27818a5e8e43d0dc095d08835c792" path="/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/volumes" Dec 10 12:19:31 crc kubenswrapper[4689]: E1210 12:19:31.035111 4689 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.163:6443: connect: connection refused" interval="1.6s" Dec 10 12:19:31 crc kubenswrapper[4689]: I1210 12:19:31.572742 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-rkh5c" Dec 10 12:19:31 crc kubenswrapper[4689]: I1210 12:19:31.572807 4689 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-rkh5c" Dec 10 12:19:31 crc kubenswrapper[4689]: I1210 12:19:31.643743 4689 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-rkh5c" Dec 10 12:19:31 crc kubenswrapper[4689]: I1210 12:19:31.644885 4689 status_manager.go:851] "Failed to get status for pod" podUID="d51b889b-7485-4c32-84de-3ddfd7ce23e9" pod="openshift-marketplace/community-operators-rkh5c" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-rkh5c\": dial tcp 38.102.83.163:6443: connect: connection refused" Dec 10 12:19:31 crc kubenswrapper[4689]: I1210 12:19:31.645462 4689 status_manager.go:851] "Failed to get status for pod" podUID="ba69b6ca-5550-41d8-be54-d86e80a6aea6" 
pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.163:6443: connect: connection refused" Dec 10 12:19:31 crc kubenswrapper[4689]: I1210 12:19:31.646254 4689 status_manager.go:851] "Failed to get status for pod" podUID="f0bf5778-7d94-4e68-8e6f-308482f98351" pod="openshift-marketplace/redhat-operators-28k8w" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-28k8w\": dial tcp 38.102.83.163:6443: connect: connection refused" Dec 10 12:19:31 crc kubenswrapper[4689]: I1210 12:19:31.817703 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-rkh5c" Dec 10 12:19:31 crc kubenswrapper[4689]: I1210 12:19:31.818484 4689 status_manager.go:851] "Failed to get status for pod" podUID="d51b889b-7485-4c32-84de-3ddfd7ce23e9" pod="openshift-marketplace/community-operators-rkh5c" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-rkh5c\": dial tcp 38.102.83.163:6443: connect: connection refused" Dec 10 12:19:31 crc kubenswrapper[4689]: I1210 12:19:31.819128 4689 status_manager.go:851] "Failed to get status for pod" podUID="ba69b6ca-5550-41d8-be54-d86e80a6aea6" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.163:6443: connect: connection refused" Dec 10 12:19:31 crc kubenswrapper[4689]: I1210 12:19:31.819736 4689 status_manager.go:851] "Failed to get status for pod" podUID="f0bf5778-7d94-4e68-8e6f-308482f98351" pod="openshift-marketplace/redhat-operators-28k8w" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-28k8w\": dial tcp 38.102.83.163:6443: connect: connection refused" Dec 10 12:19:32 crc kubenswrapper[4689]: I1210 12:19:32.072632 4689 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-85rm9" Dec 10 12:19:32 crc kubenswrapper[4689]: I1210 12:19:32.072704 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-85rm9" Dec 10 12:19:32 crc kubenswrapper[4689]: I1210 12:19:32.122340 4689 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-85rm9" Dec 10 12:19:32 crc kubenswrapper[4689]: I1210 12:19:32.123020 4689 status_manager.go:851] "Failed to get status for pod" podUID="d51b889b-7485-4c32-84de-3ddfd7ce23e9" pod="openshift-marketplace/community-operators-rkh5c" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-rkh5c\": dial tcp 38.102.83.163:6443: connect: connection refused" Dec 10 12:19:32 crc kubenswrapper[4689]: I1210 12:19:32.123661 4689 status_manager.go:851] "Failed to get status for pod" podUID="ba69b6ca-5550-41d8-be54-d86e80a6aea6" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.163:6443: connect: connection refused" Dec 10 12:19:32 crc kubenswrapper[4689]: I1210 12:19:32.124408 4689 status_manager.go:851] "Failed to get status for pod" podUID="2d87c861-5bb2-4f79-8959-b2cebc28156d" pod="openshift-marketplace/certified-operators-85rm9" 
err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-85rm9\": dial tcp 38.102.83.163:6443: connect: connection refused" Dec 10 12:19:32 crc kubenswrapper[4689]: I1210 12:19:32.125023 4689 status_manager.go:851] "Failed to get status for pod" podUID="f0bf5778-7d94-4e68-8e6f-308482f98351" pod="openshift-marketplace/redhat-operators-28k8w" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-28k8w\": dial tcp 38.102.83.163:6443: connect: connection refused" Dec 10 12:19:32 crc kubenswrapper[4689]: E1210 12:19:32.364497 4689 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.163:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 10 12:19:32 crc kubenswrapper[4689]: I1210 12:19:32.364962 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 10 12:19:32 crc kubenswrapper[4689]: I1210 12:19:32.518556 4689 status_manager.go:851] "Failed to get status for pod" podUID="2d87c861-5bb2-4f79-8959-b2cebc28156d" pod="openshift-marketplace/certified-operators-85rm9" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-85rm9\": dial tcp 38.102.83.163:6443: connect: connection refused" Dec 10 12:19:32 crc kubenswrapper[4689]: I1210 12:19:32.521592 4689 status_manager.go:851] "Failed to get status for pod" podUID="f0bf5778-7d94-4e68-8e6f-308482f98351" pod="openshift-marketplace/redhat-operators-28k8w" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-28k8w\": dial tcp 38.102.83.163:6443: connect: connection refused" Dec 10 12:19:32 crc kubenswrapper[4689]: I1210 12:19:32.522241 4689 status_manager.go:851] "Failed to get status for pod" podUID="d51b889b-7485-4c32-84de-3ddfd7ce23e9" pod="openshift-marketplace/community-operators-rkh5c" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-rkh5c\": dial tcp 38.102.83.163:6443: connect: connection refused" Dec 10 12:19:32 crc kubenswrapper[4689]: I1210 12:19:32.522794 4689 status_manager.go:851] "Failed to get status for pod" podUID="ba69b6ca-5550-41d8-be54-d86e80a6aea6" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.163:6443: connect: connection refused" Dec 10 12:19:32 crc kubenswrapper[4689]: E1210 12:19:32.636487 4689 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.163:6443: connect: connection refused" interval="3.2s" Dec 10 12:19:32 crc kubenswrapper[4689]: I1210 12:19:32.773640 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"91633ee369af80eece9c820924322ebcb6848421a3cd8c0afad11d90832792ad"} Dec 10 12:19:32 crc kubenswrapper[4689]: I1210 12:19:32.822193 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-85rm9" Dec 10 12:19:32 
crc kubenswrapper[4689]: I1210 12:19:32.822603 4689 status_manager.go:851] "Failed to get status for pod" podUID="d51b889b-7485-4c32-84de-3ddfd7ce23e9" pod="openshift-marketplace/community-operators-rkh5c" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-rkh5c\": dial tcp 38.102.83.163:6443: connect: connection refused" Dec 10 12:19:32 crc kubenswrapper[4689]: I1210 12:19:32.822997 4689 status_manager.go:851] "Failed to get status for pod" podUID="ba69b6ca-5550-41d8-be54-d86e80a6aea6" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.163:6443: connect: connection refused" Dec 10 12:19:32 crc kubenswrapper[4689]: I1210 12:19:32.823212 4689 status_manager.go:851] "Failed to get status for pod" podUID="2d87c861-5bb2-4f79-8959-b2cebc28156d" pod="openshift-marketplace/certified-operators-85rm9" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-85rm9\": dial tcp 38.102.83.163:6443: connect: connection refused" Dec 10 12:19:32 crc kubenswrapper[4689]: I1210 12:19:32.823423 4689 status_manager.go:851] "Failed to get status for pod" podUID="f0bf5778-7d94-4e68-8e6f-308482f98351" pod="openshift-marketplace/redhat-operators-28k8w" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-28k8w\": dial tcp 38.102.83.163:6443: connect: connection refused" Dec 10 12:19:33 crc kubenswrapper[4689]: I1210 12:19:33.763727 4689 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-727gf" Dec 10 12:19:33 crc kubenswrapper[4689]: I1210 12:19:33.764063 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-727gf" Dec 10 12:19:33 crc kubenswrapper[4689]: I1210 12:19:33.781531 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"dea90fc483f52e820a200ffba777e0efd8adeeaf9b8a1f5ee86dd084a8d237a7"} Dec 10 12:19:33 crc kubenswrapper[4689]: I1210 12:19:33.782464 4689 status_manager.go:851] "Failed to get status for pod" podUID="f0bf5778-7d94-4e68-8e6f-308482f98351" pod="openshift-marketplace/redhat-operators-28k8w" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-28k8w\": dial tcp 38.102.83.163:6443: connect: connection refused" Dec 10 12:19:33 crc kubenswrapper[4689]: E1210 12:19:33.782936 4689 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.163:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 10 12:19:33 crc kubenswrapper[4689]: I1210 12:19:33.782948 4689 status_manager.go:851] "Failed to get status for pod" podUID="d51b889b-7485-4c32-84de-3ddfd7ce23e9" pod="openshift-marketplace/community-operators-rkh5c" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-rkh5c\": dial tcp 38.102.83.163:6443: connect: connection refused" Dec 10 12:19:33 crc kubenswrapper[4689]: I1210 12:19:33.783325 4689 status_manager.go:851] "Failed to get status for pod" 
podUID="ba69b6ca-5550-41d8-be54-d86e80a6aea6" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.163:6443: connect: connection refused" Dec 10 12:19:33 crc kubenswrapper[4689]: I1210 12:19:33.783730 4689 status_manager.go:851] "Failed to get status for pod" podUID="2d87c861-5bb2-4f79-8959-b2cebc28156d" pod="openshift-marketplace/certified-operators-85rm9" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-85rm9\": dial tcp 38.102.83.163:6443: connect: connection refused" Dec 10 12:19:33 crc kubenswrapper[4689]: I1210 12:19:33.829057 4689 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-727gf" Dec 10 12:19:33 crc kubenswrapper[4689]: I1210 12:19:33.829733 4689 status_manager.go:851] "Failed to get status for pod" podUID="fd6edc95-9b5d-4438-b427-fd07f62090b7" pod="openshift-marketplace/redhat-marketplace-727gf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-727gf\": dial tcp 38.102.83.163:6443: connect: connection refused" Dec 10 12:19:33 crc kubenswrapper[4689]: I1210 12:19:33.830358 4689 status_manager.go:851] "Failed to get status for pod" podUID="ba69b6ca-5550-41d8-be54-d86e80a6aea6" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.163:6443: connect: connection refused" Dec 10 12:19:33 crc kubenswrapper[4689]: I1210 12:19:33.830888 4689 status_manager.go:851] "Failed to get status for pod" podUID="d51b889b-7485-4c32-84de-3ddfd7ce23e9" pod="openshift-marketplace/community-operators-rkh5c" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-rkh5c\": dial tcp 38.102.83.163:6443: connect: connection refused" Dec 10 12:19:33 crc kubenswrapper[4689]: I1210 12:19:33.831279 4689 status_manager.go:851] "Failed to get status for pod" podUID="2d87c861-5bb2-4f79-8959-b2cebc28156d" pod="openshift-marketplace/certified-operators-85rm9" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-85rm9\": dial tcp 38.102.83.163:6443: connect: connection refused" Dec 10 12:19:33 crc kubenswrapper[4689]: I1210 12:19:33.831798 4689 status_manager.go:851] "Failed to get status for pod" podUID="f0bf5778-7d94-4e68-8e6f-308482f98351" pod="openshift-marketplace/redhat-operators-28k8w" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-28k8w\": dial tcp 38.102.83.163:6443: connect: connection refused" Dec 10 12:19:33 crc kubenswrapper[4689]: I1210 12:19:33.874789 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-727gf" Dec 10 12:19:33 crc kubenswrapper[4689]: I1210 12:19:33.875608 4689 status_manager.go:851] "Failed to get status for pod" podUID="fd6edc95-9b5d-4438-b427-fd07f62090b7" pod="openshift-marketplace/redhat-marketplace-727gf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-727gf\": dial tcp 38.102.83.163:6443: connect: connection refused" Dec 10 12:19:33 crc kubenswrapper[4689]: I1210 12:19:33.876165 4689 status_manager.go:851] "Failed to get status for pod" 
podUID="d51b889b-7485-4c32-84de-3ddfd7ce23e9" pod="openshift-marketplace/community-operators-rkh5c" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-rkh5c\": dial tcp 38.102.83.163:6443: connect: connection refused" Dec 10 12:19:33 crc kubenswrapper[4689]: I1210 12:19:33.876612 4689 status_manager.go:851] "Failed to get status for pod" podUID="ba69b6ca-5550-41d8-be54-d86e80a6aea6" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.163:6443: connect: connection refused" Dec 10 12:19:33 crc kubenswrapper[4689]: I1210 12:19:33.876962 4689 status_manager.go:851] "Failed to get status for pod" podUID="2d87c861-5bb2-4f79-8959-b2cebc28156d" pod="openshift-marketplace/certified-operators-85rm9" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-85rm9\": dial tcp 38.102.83.163:6443: connect: connection refused" Dec 10 12:19:33 crc kubenswrapper[4689]: I1210 12:19:33.877322 4689 status_manager.go:851] "Failed to get status for pod" podUID="f0bf5778-7d94-4e68-8e6f-308482f98351" pod="openshift-marketplace/redhat-operators-28k8w" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-28k8w\": dial tcp 38.102.83.163:6443: connect: connection refused" Dec 10 12:19:34 crc kubenswrapper[4689]: I1210 12:19:34.604241 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-9hpt8" Dec 10 12:19:34 crc kubenswrapper[4689]: I1210 12:19:34.604338 4689 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-9hpt8" Dec 10 12:19:34 crc kubenswrapper[4689]: I1210 12:19:34.675670 4689 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-9hpt8" Dec 10 12:19:34 crc kubenswrapper[4689]: I1210 12:19:34.676524 4689 status_manager.go:851] "Failed to get status for pod" podUID="fd6edc95-9b5d-4438-b427-fd07f62090b7" pod="openshift-marketplace/redhat-marketplace-727gf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-727gf\": dial tcp 38.102.83.163:6443: connect: connection refused" Dec 10 12:19:34 crc kubenswrapper[4689]: I1210 12:19:34.677214 4689 status_manager.go:851] "Failed to get status for pod" podUID="d51b889b-7485-4c32-84de-3ddfd7ce23e9" pod="openshift-marketplace/community-operators-rkh5c" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-rkh5c\": dial tcp 38.102.83.163:6443: connect: connection refused" Dec 10 12:19:34 crc kubenswrapper[4689]: I1210 12:19:34.677848 4689 status_manager.go:851] "Failed to get status for pod" podUID="ba69b6ca-5550-41d8-be54-d86e80a6aea6" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.163:6443: connect: connection refused" Dec 10 12:19:34 crc kubenswrapper[4689]: I1210 12:19:34.678371 4689 status_manager.go:851] "Failed to get status for pod" podUID="2d87c861-5bb2-4f79-8959-b2cebc28156d" pod="openshift-marketplace/certified-operators-85rm9" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-85rm9\": dial tcp 
38.102.83.163:6443: connect: connection refused" Dec 10 12:19:34 crc kubenswrapper[4689]: I1210 12:19:34.678740 4689 status_manager.go:851] "Failed to get status for pod" podUID="f0bf5778-7d94-4e68-8e6f-308482f98351" pod="openshift-marketplace/redhat-operators-28k8w" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-28k8w\": dial tcp 38.102.83.163:6443: connect: connection refused" Dec 10 12:19:34 crc kubenswrapper[4689]: I1210 12:19:34.679211 4689 status_manager.go:851] "Failed to get status for pod" podUID="2ebd1829-eb7a-4128-ab9d-79a1f75fa39e" pod="openshift-marketplace/redhat-operators-9hpt8" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-9hpt8\": dial tcp 38.102.83.163:6443: connect: connection refused" Dec 10 12:19:34 crc kubenswrapper[4689]: E1210 12:19:34.789145 4689 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.163:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 10 12:19:34 crc kubenswrapper[4689]: I1210 12:19:34.847634 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-9hpt8" Dec 10 12:19:34 crc kubenswrapper[4689]: I1210 12:19:34.848530 4689 status_manager.go:851] "Failed to get status for pod" podUID="fd6edc95-9b5d-4438-b427-fd07f62090b7" pod="openshift-marketplace/redhat-marketplace-727gf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-727gf\": dial tcp 38.102.83.163:6443: connect: connection refused" Dec 10 12:19:34 crc kubenswrapper[4689]: I1210 12:19:34.849277 4689 status_manager.go:851] "Failed to get status for pod" podUID="d51b889b-7485-4c32-84de-3ddfd7ce23e9" pod="openshift-marketplace/community-operators-rkh5c" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-rkh5c\": dial tcp 38.102.83.163:6443: connect: connection refused" Dec 10 12:19:34 crc kubenswrapper[4689]: I1210 12:19:34.849747 4689 status_manager.go:851] "Failed to get status for pod" podUID="ba69b6ca-5550-41d8-be54-d86e80a6aea6" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.163:6443: connect: connection refused" Dec 10 12:19:34 crc kubenswrapper[4689]: I1210 12:19:34.850278 4689 status_manager.go:851] "Failed to get status for pod" podUID="2d87c861-5bb2-4f79-8959-b2cebc28156d" pod="openshift-marketplace/certified-operators-85rm9" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-85rm9\": dial tcp 38.102.83.163:6443: connect: connection refused" Dec 10 12:19:34 crc kubenswrapper[4689]: I1210 12:19:34.850741 4689 status_manager.go:851] "Failed to get status for pod" podUID="f0bf5778-7d94-4e68-8e6f-308482f98351" pod="openshift-marketplace/redhat-operators-28k8w" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-28k8w\": dial tcp 38.102.83.163:6443: connect: connection refused" Dec 10 12:19:34 crc kubenswrapper[4689]: I1210 12:19:34.851209 4689 status_manager.go:851] "Failed to get status for pod" podUID="2ebd1829-eb7a-4128-ab9d-79a1f75fa39e" pod="openshift-marketplace/redhat-operators-9hpt8" 
err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-9hpt8\": dial tcp 38.102.83.163:6443: connect: connection refused" Dec 10 12:19:34 crc kubenswrapper[4689]: I1210 12:19:34.963123 4689 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-28k8w" Dec 10 12:19:34 crc kubenswrapper[4689]: I1210 12:19:34.963190 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-28k8w" Dec 10 12:19:35 crc kubenswrapper[4689]: I1210 12:19:35.033914 4689 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-28k8w" Dec 10 12:19:35 crc kubenswrapper[4689]: I1210 12:19:35.034810 4689 status_manager.go:851] "Failed to get status for pod" podUID="fd6edc95-9b5d-4438-b427-fd07f62090b7" pod="openshift-marketplace/redhat-marketplace-727gf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-727gf\": dial tcp 38.102.83.163:6443: connect: connection refused" Dec 10 12:19:35 crc kubenswrapper[4689]: I1210 12:19:35.035480 4689 status_manager.go:851] "Failed to get status for pod" podUID="d51b889b-7485-4c32-84de-3ddfd7ce23e9" pod="openshift-marketplace/community-operators-rkh5c" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-rkh5c\": dial tcp 38.102.83.163:6443: connect: connection refused" Dec 10 12:19:35 crc kubenswrapper[4689]: I1210 12:19:35.036041 4689 status_manager.go:851] "Failed to get status for pod" podUID="ba69b6ca-5550-41d8-be54-d86e80a6aea6" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.163:6443: connect: connection refused" Dec 10 12:19:35 crc kubenswrapper[4689]: I1210 12:19:35.036551 4689 status_manager.go:851] "Failed to get status for pod" podUID="2d87c861-5bb2-4f79-8959-b2cebc28156d" pod="openshift-marketplace/certified-operators-85rm9" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-85rm9\": dial tcp 38.102.83.163:6443: connect: connection refused" Dec 10 12:19:35 crc kubenswrapper[4689]: I1210 12:19:35.037073 4689 status_manager.go:851] "Failed to get status for pod" podUID="f0bf5778-7d94-4e68-8e6f-308482f98351" pod="openshift-marketplace/redhat-operators-28k8w" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-28k8w\": dial tcp 38.102.83.163:6443: connect: connection refused" Dec 10 12:19:35 crc kubenswrapper[4689]: I1210 12:19:35.037541 4689 status_manager.go:851] "Failed to get status for pod" podUID="2ebd1829-eb7a-4128-ab9d-79a1f75fa39e" pod="openshift-marketplace/redhat-operators-9hpt8" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-9hpt8\": dial tcp 38.102.83.163:6443: connect: connection refused" Dec 10 12:19:35 crc kubenswrapper[4689]: E1210 12:19:35.837165 4689 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.163:6443: connect: connection refused" interval="6.4s" Dec 10 12:19:35 crc kubenswrapper[4689]: I1210 12:19:35.866088 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="ready" pod="openshift-marketplace/redhat-operators-28k8w" Dec 10 12:19:35 crc kubenswrapper[4689]: I1210 12:19:35.866908 4689 status_manager.go:851] "Failed to get status for pod" podUID="fd6edc95-9b5d-4438-b427-fd07f62090b7" pod="openshift-marketplace/redhat-marketplace-727gf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-727gf\": dial tcp 38.102.83.163:6443: connect: connection refused" Dec 10 12:19:35 crc kubenswrapper[4689]: I1210 12:19:35.867597 4689 status_manager.go:851] "Failed to get status for pod" podUID="d51b889b-7485-4c32-84de-3ddfd7ce23e9" pod="openshift-marketplace/community-operators-rkh5c" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-rkh5c\": dial tcp 38.102.83.163:6443: connect: connection refused" Dec 10 12:19:35 crc kubenswrapper[4689]: I1210 12:19:35.868078 4689 status_manager.go:851] "Failed to get status for pod" podUID="ba69b6ca-5550-41d8-be54-d86e80a6aea6" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.163:6443: connect: connection refused" Dec 10 12:19:35 crc kubenswrapper[4689]: I1210 12:19:35.868551 4689 status_manager.go:851] "Failed to get status for pod" podUID="2d87c861-5bb2-4f79-8959-b2cebc28156d" pod="openshift-marketplace/certified-operators-85rm9" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-85rm9\": dial tcp 38.102.83.163:6443: connect: connection refused" Dec 10 12:19:35 crc kubenswrapper[4689]: I1210 12:19:35.869134 4689 status_manager.go:851] "Failed to get status for pod" podUID="f0bf5778-7d94-4e68-8e6f-308482f98351" pod="openshift-marketplace/redhat-operators-28k8w" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-28k8w\": dial tcp 38.102.83.163:6443: connect: connection refused" Dec 10 12:19:35 crc kubenswrapper[4689]: I1210 12:19:35.869567 4689 status_manager.go:851] "Failed to get status for pod" podUID="2ebd1829-eb7a-4128-ab9d-79a1f75fa39e" pod="openshift-marketplace/redhat-operators-9hpt8" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-9hpt8\": dial tcp 38.102.83.163:6443: connect: connection refused" Dec 10 12:19:40 crc kubenswrapper[4689]: E1210 12:19:40.479533 4689 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/events\": dial tcp 38.102.83.163:6443: connect: connection refused" event="&Event{ObjectMeta:{redhat-operators-28k8w.187fd9e786aab6bd openshift-marketplace 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-marketplace,Name:redhat-operators-28k8w,UID:f0bf5778-7d94-4e68-8e6f-308482f98351,APIVersion:v1,ResourceVersion:28461,FieldPath:spec.containers{registry-server},},Reason:Created,Message:Created container registry-server,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-10 12:19:27.372531389 +0000 UTC m=+235.160612537,LastTimestamp:2025-12-10 12:19:27.372531389 +0000 UTC m=+235.160612537,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 10 12:19:40 crc kubenswrapper[4689]: I1210 12:19:40.835495 4689 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Dec 10 12:19:40 crc kubenswrapper[4689]: I1210 12:19:40.836014 4689 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="604acbb51bb890eb420955051c89aa5a8f2005ce63f1fe39c09c35dc3c8b4470" exitCode=1 Dec 10 12:19:40 crc kubenswrapper[4689]: I1210 12:19:40.836064 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"604acbb51bb890eb420955051c89aa5a8f2005ce63f1fe39c09c35dc3c8b4470"} Dec 10 12:19:40 crc kubenswrapper[4689]: I1210 12:19:40.836831 4689 scope.go:117] "RemoveContainer" containerID="604acbb51bb890eb420955051c89aa5a8f2005ce63f1fe39c09c35dc3c8b4470" Dec 10 12:19:40 crc kubenswrapper[4689]: I1210 12:19:40.837390 4689 status_manager.go:851] "Failed to get status for pod" podUID="fd6edc95-9b5d-4438-b427-fd07f62090b7" pod="openshift-marketplace/redhat-marketplace-727gf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-727gf\": dial tcp 38.102.83.163:6443: connect: connection refused" Dec 10 12:19:40 crc kubenswrapper[4689]: I1210 12:19:40.837848 4689 status_manager.go:851] "Failed to get status for pod" podUID="d51b889b-7485-4c32-84de-3ddfd7ce23e9" pod="openshift-marketplace/community-operators-rkh5c" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-rkh5c\": dial tcp 38.102.83.163:6443: connect: connection refused" Dec 10 12:19:40 crc kubenswrapper[4689]: I1210 12:19:40.838415 4689 status_manager.go:851] "Failed to get status for pod" podUID="ba69b6ca-5550-41d8-be54-d86e80a6aea6" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.163:6443: connect: connection refused" Dec 10 12:19:40 crc kubenswrapper[4689]: I1210 12:19:40.839111 4689 status_manager.go:851] "Failed to get status for pod" podUID="2d87c861-5bb2-4f79-8959-b2cebc28156d" pod="openshift-marketplace/certified-operators-85rm9" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-85rm9\": dial tcp 38.102.83.163:6443: connect: connection refused" Dec 10 12:19:40 crc kubenswrapper[4689]: I1210 12:19:40.839569 4689 status_manager.go:851] "Failed to get status for pod" podUID="f0bf5778-7d94-4e68-8e6f-308482f98351" pod="openshift-marketplace/redhat-operators-28k8w" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-28k8w\": dial tcp 38.102.83.163:6443: connect: connection refused" Dec 10 12:19:40 crc kubenswrapper[4689]: I1210 12:19:40.840074 4689 status_manager.go:851] "Failed to get status for pod" podUID="2ebd1829-eb7a-4128-ab9d-79a1f75fa39e" pod="openshift-marketplace/redhat-operators-9hpt8" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-9hpt8\": dial tcp 38.102.83.163:6443: connect: connection refused" Dec 10 12:19:40 crc kubenswrapper[4689]: I1210 12:19:40.840654 4689 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.163:6443: connect: connection refused" Dec 10 12:19:41 crc kubenswrapper[4689]: I1210 12:19:41.497350 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 10 12:19:41 crc kubenswrapper[4689]: I1210 12:19:41.498227 4689 status_manager.go:851] "Failed to get status for pod" podUID="fd6edc95-9b5d-4438-b427-fd07f62090b7" pod="openshift-marketplace/redhat-marketplace-727gf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-727gf\": dial tcp 38.102.83.163:6443: connect: connection refused" Dec 10 12:19:41 crc kubenswrapper[4689]: I1210 12:19:41.499028 4689 status_manager.go:851] "Failed to get status for pod" podUID="d51b889b-7485-4c32-84de-3ddfd7ce23e9" pod="openshift-marketplace/community-operators-rkh5c" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-rkh5c\": dial tcp 38.102.83.163:6443: connect: connection refused" Dec 10 12:19:41 crc kubenswrapper[4689]: I1210 12:19:41.499649 4689 status_manager.go:851] "Failed to get status for pod" podUID="ba69b6ca-5550-41d8-be54-d86e80a6aea6" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.163:6443: connect: connection refused" Dec 10 12:19:41 crc kubenswrapper[4689]: I1210 12:19:41.500117 4689 status_manager.go:851] "Failed to get status for pod" podUID="2d87c861-5bb2-4f79-8959-b2cebc28156d" pod="openshift-marketplace/certified-operators-85rm9" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-85rm9\": dial tcp 38.102.83.163:6443: connect: connection refused" Dec 10 12:19:41 crc kubenswrapper[4689]: I1210 12:19:41.500625 4689 status_manager.go:851] "Failed to get status for pod" podUID="f0bf5778-7d94-4e68-8e6f-308482f98351" pod="openshift-marketplace/redhat-operators-28k8w" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-28k8w\": dial tcp 38.102.83.163:6443: connect: connection refused" Dec 10 12:19:41 crc kubenswrapper[4689]: I1210 12:19:41.501181 4689 status_manager.go:851] "Failed to get status for pod" podUID="2ebd1829-eb7a-4128-ab9d-79a1f75fa39e" pod="openshift-marketplace/redhat-operators-9hpt8" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-9hpt8\": dial tcp 38.102.83.163:6443: connect: connection refused" Dec 10 12:19:41 crc kubenswrapper[4689]: I1210 12:19:41.501779 4689 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.163:6443: connect: connection refused" Dec 10 12:19:41 crc kubenswrapper[4689]: I1210 12:19:41.516622 4689 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="948e2421-6bdf-45d9-b484-3d7cfcbeff5f" Dec 10 12:19:41 crc kubenswrapper[4689]: I1210 12:19:41.516756 4689 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" 
podUID="948e2421-6bdf-45d9-b484-3d7cfcbeff5f" Dec 10 12:19:41 crc kubenswrapper[4689]: E1210 12:19:41.517236 4689 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.163:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 10 12:19:41 crc kubenswrapper[4689]: I1210 12:19:41.517698 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 10 12:19:41 crc kubenswrapper[4689]: W1210 12:19:41.532096 4689 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod71bb4a3aecc4ba5b26c4b7318770ce13.slice/crio-b7cc747d4940b3950c7a7323956b8102a067d128257a743b3f7f66b517150789 WatchSource:0}: Error finding container b7cc747d4940b3950c7a7323956b8102a067d128257a743b3f7f66b517150789: Status 404 returned error can't find the container with id b7cc747d4940b3950c7a7323956b8102a067d128257a743b3f7f66b517150789 Dec 10 12:19:41 crc kubenswrapper[4689]: I1210 12:19:41.558734 4689 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-558db77b4-7jsv6" podUID="2e2364cc-104f-4237-9ad5-c121a1c3fba6" containerName="oauth-openshift" containerID="cri-o://ee8f8cc2e8a4c7b84e05bdc36b4ce7838702e7ddd738636ac5333b44b159c594" gracePeriod=15 Dec 10 12:19:41 crc kubenswrapper[4689]: I1210 12:19:41.846287 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"b7cc747d4940b3950c7a7323956b8102a067d128257a743b3f7f66b517150789"} Dec 10 12:19:42 crc kubenswrapper[4689]: E1210 12:19:42.238288 4689 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.163:6443: connect: connection refused" interval="7s" Dec 10 12:19:42 crc kubenswrapper[4689]: I1210 12:19:42.504886 4689 status_manager.go:851] "Failed to get status for pod" podUID="2ebd1829-eb7a-4128-ab9d-79a1f75fa39e" pod="openshift-marketplace/redhat-operators-9hpt8" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-9hpt8\": dial tcp 38.102.83.163:6443: connect: connection refused" Dec 10 12:19:42 crc kubenswrapper[4689]: I1210 12:19:42.505155 4689 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.163:6443: connect: connection refused" Dec 10 12:19:42 crc kubenswrapper[4689]: I1210 12:19:42.505454 4689 status_manager.go:851] "Failed to get status for pod" podUID="fd6edc95-9b5d-4438-b427-fd07f62090b7" pod="openshift-marketplace/redhat-marketplace-727gf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-727gf\": dial tcp 38.102.83.163:6443: connect: connection refused" Dec 10 12:19:42 crc kubenswrapper[4689]: I1210 12:19:42.505799 4689 status_manager.go:851] "Failed to get status for pod" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" 
pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.163:6443: connect: connection refused" Dec 10 12:19:42 crc kubenswrapper[4689]: I1210 12:19:42.506097 4689 status_manager.go:851] "Failed to get status for pod" podUID="d51b889b-7485-4c32-84de-3ddfd7ce23e9" pod="openshift-marketplace/community-operators-rkh5c" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-rkh5c\": dial tcp 38.102.83.163:6443: connect: connection refused" Dec 10 12:19:42 crc kubenswrapper[4689]: I1210 12:19:42.506400 4689 status_manager.go:851] "Failed to get status for pod" podUID="ba69b6ca-5550-41d8-be54-d86e80a6aea6" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.163:6443: connect: connection refused" Dec 10 12:19:42 crc kubenswrapper[4689]: I1210 12:19:42.506602 4689 status_manager.go:851] "Failed to get status for pod" podUID="2d87c861-5bb2-4f79-8959-b2cebc28156d" pod="openshift-marketplace/certified-operators-85rm9" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-85rm9\": dial tcp 38.102.83.163:6443: connect: connection refused" Dec 10 12:19:42 crc kubenswrapper[4689]: I1210 12:19:42.506838 4689 status_manager.go:851] "Failed to get status for pod" podUID="f0bf5778-7d94-4e68-8e6f-308482f98351" pod="openshift-marketplace/redhat-operators-28k8w" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-28k8w\": dial tcp 38.102.83.163:6443: connect: connection refused" Dec 10 12:19:43 crc kubenswrapper[4689]: I1210 12:19:43.861697 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Dec 10 12:19:43 crc kubenswrapper[4689]: I1210 12:19:43.862079 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"ce1a25e9a3c9252ce47054252e2771ea39cf2facda5a71e05688a72ba1345341"} Dec 10 12:19:44 crc kubenswrapper[4689]: I1210 12:19:44.153353 4689 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 10 12:19:44 crc kubenswrapper[4689]: I1210 12:19:44.871087 4689 generic.go:334] "Generic (PLEG): container finished" podID="71bb4a3aecc4ba5b26c4b7318770ce13" containerID="2772974cd841a3f5df4750a62e5761f22d2d06472b32c8c33ef395a9ed222ccc" exitCode=0 Dec 10 12:19:44 crc kubenswrapper[4689]: I1210 12:19:44.871191 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerDied","Data":"2772974cd841a3f5df4750a62e5761f22d2d06472b32c8c33ef395a9ed222ccc"} Dec 10 12:19:44 crc kubenswrapper[4689]: I1210 12:19:44.871599 4689 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="948e2421-6bdf-45d9-b484-3d7cfcbeff5f" Dec 10 12:19:44 crc kubenswrapper[4689]: I1210 12:19:44.871639 4689 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" 
podUID="948e2421-6bdf-45d9-b484-3d7cfcbeff5f" Dec 10 12:19:44 crc kubenswrapper[4689]: I1210 12:19:44.872059 4689 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.163:6443: connect: connection refused" Dec 10 12:19:44 crc kubenswrapper[4689]: E1210 12:19:44.872390 4689 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.163:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 10 12:19:44 crc kubenswrapper[4689]: I1210 12:19:44.872517 4689 status_manager.go:851] "Failed to get status for pod" podUID="fd6edc95-9b5d-4438-b427-fd07f62090b7" pod="openshift-marketplace/redhat-marketplace-727gf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-727gf\": dial tcp 38.102.83.163:6443: connect: connection refused" Dec 10 12:19:44 crc kubenswrapper[4689]: I1210 12:19:44.873044 4689 status_manager.go:851] "Failed to get status for pod" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.163:6443: connect: connection refused" Dec 10 12:19:44 crc kubenswrapper[4689]: I1210 12:19:44.873492 4689 status_manager.go:851] "Failed to get status for pod" podUID="d51b889b-7485-4c32-84de-3ddfd7ce23e9" pod="openshift-marketplace/community-operators-rkh5c" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-rkh5c\": dial tcp 38.102.83.163:6443: connect: connection refused" Dec 10 12:19:44 crc kubenswrapper[4689]: I1210 12:19:44.873667 4689 generic.go:334] "Generic (PLEG): container finished" podID="2e2364cc-104f-4237-9ad5-c121a1c3fba6" containerID="ee8f8cc2e8a4c7b84e05bdc36b4ce7838702e7ddd738636ac5333b44b159c594" exitCode=0 Dec 10 12:19:44 crc kubenswrapper[4689]: I1210 12:19:44.873796 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-7jsv6" event={"ID":"2e2364cc-104f-4237-9ad5-c121a1c3fba6","Type":"ContainerDied","Data":"ee8f8cc2e8a4c7b84e05bdc36b4ce7838702e7ddd738636ac5333b44b159c594"} Dec 10 12:19:44 crc kubenswrapper[4689]: I1210 12:19:44.873922 4689 status_manager.go:851] "Failed to get status for pod" podUID="ba69b6ca-5550-41d8-be54-d86e80a6aea6" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.163:6443: connect: connection refused" Dec 10 12:19:44 crc kubenswrapper[4689]: I1210 12:19:44.874481 4689 status_manager.go:851] "Failed to get status for pod" podUID="2d87c861-5bb2-4f79-8959-b2cebc28156d" pod="openshift-marketplace/certified-operators-85rm9" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-85rm9\": dial tcp 38.102.83.163:6443: connect: connection refused" Dec 10 12:19:44 crc kubenswrapper[4689]: I1210 12:19:44.875066 4689 status_manager.go:851] "Failed to get status for pod" podUID="f0bf5778-7d94-4e68-8e6f-308482f98351" 
pod="openshift-marketplace/redhat-operators-28k8w" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-28k8w\": dial tcp 38.102.83.163:6443: connect: connection refused" Dec 10 12:19:44 crc kubenswrapper[4689]: I1210 12:19:44.875864 4689 status_manager.go:851] "Failed to get status for pod" podUID="2ebd1829-eb7a-4128-ab9d-79a1f75fa39e" pod="openshift-marketplace/redhat-operators-9hpt8" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-9hpt8\": dial tcp 38.102.83.163:6443: connect: connection refused" Dec 10 12:19:44 crc kubenswrapper[4689]: I1210 12:19:44.876683 4689 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.163:6443: connect: connection refused" Dec 10 12:19:44 crc kubenswrapper[4689]: I1210 12:19:44.877179 4689 status_manager.go:851] "Failed to get status for pod" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.163:6443: connect: connection refused" Dec 10 12:19:44 crc kubenswrapper[4689]: I1210 12:19:44.877658 4689 status_manager.go:851] "Failed to get status for pod" podUID="fd6edc95-9b5d-4438-b427-fd07f62090b7" pod="openshift-marketplace/redhat-marketplace-727gf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-727gf\": dial tcp 38.102.83.163:6443: connect: connection refused" Dec 10 12:19:44 crc kubenswrapper[4689]: I1210 12:19:44.878145 4689 status_manager.go:851] "Failed to get status for pod" podUID="d51b889b-7485-4c32-84de-3ddfd7ce23e9" pod="openshift-marketplace/community-operators-rkh5c" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-rkh5c\": dial tcp 38.102.83.163:6443: connect: connection refused" Dec 10 12:19:44 crc kubenswrapper[4689]: I1210 12:19:44.878655 4689 status_manager.go:851] "Failed to get status for pod" podUID="ba69b6ca-5550-41d8-be54-d86e80a6aea6" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.163:6443: connect: connection refused" Dec 10 12:19:44 crc kubenswrapper[4689]: I1210 12:19:44.879125 4689 status_manager.go:851] "Failed to get status for pod" podUID="2d87c861-5bb2-4f79-8959-b2cebc28156d" pod="openshift-marketplace/certified-operators-85rm9" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-85rm9\": dial tcp 38.102.83.163:6443: connect: connection refused" Dec 10 12:19:44 crc kubenswrapper[4689]: I1210 12:19:44.879643 4689 status_manager.go:851] "Failed to get status for pod" podUID="f0bf5778-7d94-4e68-8e6f-308482f98351" pod="openshift-marketplace/redhat-operators-28k8w" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-28k8w\": dial tcp 38.102.83.163:6443: connect: connection refused" Dec 10 12:19:44 crc kubenswrapper[4689]: I1210 12:19:44.880142 4689 status_manager.go:851] "Failed to get status for pod" 
podUID="2ebd1829-eb7a-4128-ab9d-79a1f75fa39e" pod="openshift-marketplace/redhat-operators-9hpt8" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-9hpt8\": dial tcp 38.102.83.163:6443: connect: connection refused" Dec 10 12:19:46 crc kubenswrapper[4689]: E1210 12:19:46.892576 4689 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T12:19:46Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T12:19:46Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T12:19:46Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T12:19:46Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[],\\\"sizeBytes\\\":1626187079},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:98cd56e57d8c89e59c8ac0d99815cb93378bf6a147e8daaf50bb24e704e676ab\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:b5b85200d1f34b104b43b44b4ef97ebfc425b72cc1cfaf11c46bb3fb7e0f528a\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1216426831},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:26ea35413bef0c078547a03dc093f1d4f15a7d9fc91f05c687b1a437352f0856\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:ba6f8ed6f9b58e63d979cc9729f5ce2c6f22e99b35ce91b902e198eb78d6b106\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1201960779},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1412aa5a552c366a3db6d77ed2b66514a04c3de87179b9eb31c42e6e1dfff68e\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c888c734987a3d160e171aeebeac8b44b9056240fa354958f260a29e70f3d4b7\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1142487363},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da2
10f3980f66d77f91ba201c875\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-re
lease-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792}],\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveRe
adOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Patch \"https://api-int.crc.testing:6443/api/v1/nodes/crc/status?timeout=10s\": dial tcp 38.102.83.163:6443: connect: connection refused" Dec 10 12:19:46 crc kubenswrapper[4689]: E1210 12:19:46.893916 4689 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.163:6443: connect: connection refused" Dec 10 12:19:46 crc kubenswrapper[4689]: E1210 12:19:46.894631 4689 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.163:6443: connect: connection refused" Dec 10 12:19:46 crc kubenswrapper[4689]: E1210 12:19:46.895182 4689 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.163:6443: connect: connection refused" Dec 10 12:19:46 crc kubenswrapper[4689]: E1210 12:19:46.895414 4689 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.163:6443: connect: connection refused" Dec 10 12:19:46 crc kubenswrapper[4689]: E1210 12:19:46.895441 4689 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Dec 10 12:19:47 crc kubenswrapper[4689]: I1210 12:19:47.163367 4689 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-7jsv6" Dec 10 12:19:47 crc kubenswrapper[4689]: I1210 12:19:47.164507 4689 status_manager.go:851] "Failed to get status for pod" podUID="2d87c861-5bb2-4f79-8959-b2cebc28156d" pod="openshift-marketplace/certified-operators-85rm9" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-85rm9\": dial tcp 38.102.83.163:6443: connect: connection refused" Dec 10 12:19:47 crc kubenswrapper[4689]: I1210 12:19:47.164893 4689 status_manager.go:851] "Failed to get status for pod" podUID="f0bf5778-7d94-4e68-8e6f-308482f98351" pod="openshift-marketplace/redhat-operators-28k8w" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-28k8w\": dial tcp 38.102.83.163:6443: connect: connection refused" Dec 10 12:19:47 crc kubenswrapper[4689]: I1210 12:19:47.165321 4689 status_manager.go:851] "Failed to get status for pod" podUID="2ebd1829-eb7a-4128-ab9d-79a1f75fa39e" pod="openshift-marketplace/redhat-operators-9hpt8" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-9hpt8\": dial tcp 38.102.83.163:6443: connect: connection refused" Dec 10 12:19:47 crc kubenswrapper[4689]: I1210 12:19:47.165721 4689 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.163:6443: connect: connection refused" Dec 10 12:19:47 crc kubenswrapper[4689]: I1210 12:19:47.166131 4689 status_manager.go:851] "Failed to get status for pod" podUID="fd6edc95-9b5d-4438-b427-fd07f62090b7" pod="openshift-marketplace/redhat-marketplace-727gf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-727gf\": dial tcp 38.102.83.163:6443: connect: connection refused" Dec 10 12:19:47 crc kubenswrapper[4689]: I1210 12:19:47.166509 4689 status_manager.go:851] "Failed to get status for pod" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.163:6443: connect: connection refused" Dec 10 12:19:47 crc kubenswrapper[4689]: I1210 12:19:47.166906 4689 status_manager.go:851] "Failed to get status for pod" podUID="2e2364cc-104f-4237-9ad5-c121a1c3fba6" pod="openshift-authentication/oauth-openshift-558db77b4-7jsv6" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-7jsv6\": dial tcp 38.102.83.163:6443: connect: connection refused" Dec 10 12:19:47 crc kubenswrapper[4689]: I1210 12:19:47.167294 4689 status_manager.go:851] "Failed to get status for pod" podUID="ba69b6ca-5550-41d8-be54-d86e80a6aea6" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.163:6443: connect: connection refused" Dec 10 12:19:47 crc kubenswrapper[4689]: I1210 12:19:47.167649 4689 status_manager.go:851] "Failed to get status for pod" podUID="d51b889b-7485-4c32-84de-3ddfd7ce23e9" pod="openshift-marketplace/community-operators-rkh5c" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-rkh5c\": dial tcp 38.102.83.163:6443: connect: connection refused" Dec 10 12:19:47 crc kubenswrapper[4689]: I1210 12:19:47.185038 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/2e2364cc-104f-4237-9ad5-c121a1c3fba6-audit-dir\") pod \"2e2364cc-104f-4237-9ad5-c121a1c3fba6\" (UID: \"2e2364cc-104f-4237-9ad5-c121a1c3fba6\") " Dec 10 12:19:47 crc kubenswrapper[4689]: I1210 12:19:47.185117 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/2e2364cc-104f-4237-9ad5-c121a1c3fba6-v4-0-config-user-template-provider-selection\") pod \"2e2364cc-104f-4237-9ad5-c121a1c3fba6\" (UID: \"2e2364cc-104f-4237-9ad5-c121a1c3fba6\") " Dec 10 12:19:47 crc kubenswrapper[4689]: I1210 12:19:47.185158 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/2e2364cc-104f-4237-9ad5-c121a1c3fba6-v4-0-config-system-cliconfig\") pod \"2e2364cc-104f-4237-9ad5-c121a1c3fba6\" (UID: \"2e2364cc-104f-4237-9ad5-c121a1c3fba6\") " Dec 10 12:19:47 crc kubenswrapper[4689]: I1210 12:19:47.185175 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2e2364cc-104f-4237-9ad5-c121a1c3fba6-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "2e2364cc-104f-4237-9ad5-c121a1c3fba6" (UID: "2e2364cc-104f-4237-9ad5-c121a1c3fba6"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 10 12:19:47 crc kubenswrapper[4689]: I1210 12:19:47.185198 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/2e2364cc-104f-4237-9ad5-c121a1c3fba6-v4-0-config-user-template-login\") pod \"2e2364cc-104f-4237-9ad5-c121a1c3fba6\" (UID: \"2e2364cc-104f-4237-9ad5-c121a1c3fba6\") " Dec 10 12:19:47 crc kubenswrapper[4689]: I1210 12:19:47.185233 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/2e2364cc-104f-4237-9ad5-c121a1c3fba6-v4-0-config-system-serving-cert\") pod \"2e2364cc-104f-4237-9ad5-c121a1c3fba6\" (UID: \"2e2364cc-104f-4237-9ad5-c121a1c3fba6\") " Dec 10 12:19:47 crc kubenswrapper[4689]: I1210 12:19:47.185331 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/2e2364cc-104f-4237-9ad5-c121a1c3fba6-v4-0-config-system-router-certs\") pod \"2e2364cc-104f-4237-9ad5-c121a1c3fba6\" (UID: \"2e2364cc-104f-4237-9ad5-c121a1c3fba6\") " Dec 10 12:19:47 crc kubenswrapper[4689]: I1210 12:19:47.185374 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/2e2364cc-104f-4237-9ad5-c121a1c3fba6-v4-0-config-user-template-error\") pod \"2e2364cc-104f-4237-9ad5-c121a1c3fba6\" (UID: \"2e2364cc-104f-4237-9ad5-c121a1c3fba6\") " Dec 10 12:19:47 crc kubenswrapper[4689]: I1210 12:19:47.185423 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: 
\"kubernetes.io/secret/2e2364cc-104f-4237-9ad5-c121a1c3fba6-v4-0-config-user-idp-0-file-data\") pod \"2e2364cc-104f-4237-9ad5-c121a1c3fba6\" (UID: \"2e2364cc-104f-4237-9ad5-c121a1c3fba6\") " Dec 10 12:19:47 crc kubenswrapper[4689]: I1210 12:19:47.185468 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2e2364cc-104f-4237-9ad5-c121a1c3fba6-v4-0-config-system-trusted-ca-bundle\") pod \"2e2364cc-104f-4237-9ad5-c121a1c3fba6\" (UID: \"2e2364cc-104f-4237-9ad5-c121a1c3fba6\") " Dec 10 12:19:47 crc kubenswrapper[4689]: I1210 12:19:47.185512 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qf4w7\" (UniqueName: \"kubernetes.io/projected/2e2364cc-104f-4237-9ad5-c121a1c3fba6-kube-api-access-qf4w7\") pod \"2e2364cc-104f-4237-9ad5-c121a1c3fba6\" (UID: \"2e2364cc-104f-4237-9ad5-c121a1c3fba6\") " Dec 10 12:19:47 crc kubenswrapper[4689]: I1210 12:19:47.185542 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/2e2364cc-104f-4237-9ad5-c121a1c3fba6-v4-0-config-system-service-ca\") pod \"2e2364cc-104f-4237-9ad5-c121a1c3fba6\" (UID: \"2e2364cc-104f-4237-9ad5-c121a1c3fba6\") " Dec 10 12:19:47 crc kubenswrapper[4689]: I1210 12:19:47.185579 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/2e2364cc-104f-4237-9ad5-c121a1c3fba6-audit-policies\") pod \"2e2364cc-104f-4237-9ad5-c121a1c3fba6\" (UID: \"2e2364cc-104f-4237-9ad5-c121a1c3fba6\") " Dec 10 12:19:47 crc kubenswrapper[4689]: I1210 12:19:47.185620 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/2e2364cc-104f-4237-9ad5-c121a1c3fba6-v4-0-config-system-ocp-branding-template\") pod \"2e2364cc-104f-4237-9ad5-c121a1c3fba6\" (UID: \"2e2364cc-104f-4237-9ad5-c121a1c3fba6\") " Dec 10 12:19:47 crc kubenswrapper[4689]: I1210 12:19:47.185669 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/2e2364cc-104f-4237-9ad5-c121a1c3fba6-v4-0-config-system-session\") pod \"2e2364cc-104f-4237-9ad5-c121a1c3fba6\" (UID: \"2e2364cc-104f-4237-9ad5-c121a1c3fba6\") " Dec 10 12:19:47 crc kubenswrapper[4689]: I1210 12:19:47.186051 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2e2364cc-104f-4237-9ad5-c121a1c3fba6-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "2e2364cc-104f-4237-9ad5-c121a1c3fba6" (UID: "2e2364cc-104f-4237-9ad5-c121a1c3fba6"). InnerVolumeSpecName "v4-0-config-system-cliconfig". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 12:19:47 crc kubenswrapper[4689]: I1210 12:19:47.186073 4689 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/2e2364cc-104f-4237-9ad5-c121a1c3fba6-audit-dir\") on node \"crc\" DevicePath \"\"" Dec 10 12:19:47 crc kubenswrapper[4689]: I1210 12:19:47.191273 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2e2364cc-104f-4237-9ad5-c121a1c3fba6-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "2e2364cc-104f-4237-9ad5-c121a1c3fba6" (UID: "2e2364cc-104f-4237-9ad5-c121a1c3fba6"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 12:19:47 crc kubenswrapper[4689]: I1210 12:19:47.191693 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2e2364cc-104f-4237-9ad5-c121a1c3fba6-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "2e2364cc-104f-4237-9ad5-c121a1c3fba6" (UID: "2e2364cc-104f-4237-9ad5-c121a1c3fba6"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 12:19:47 crc kubenswrapper[4689]: I1210 12:19:47.191710 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2e2364cc-104f-4237-9ad5-c121a1c3fba6-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "2e2364cc-104f-4237-9ad5-c121a1c3fba6" (UID: "2e2364cc-104f-4237-9ad5-c121a1c3fba6"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 12:19:47 crc kubenswrapper[4689]: I1210 12:19:47.191740 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2e2364cc-104f-4237-9ad5-c121a1c3fba6-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "2e2364cc-104f-4237-9ad5-c121a1c3fba6" (UID: "2e2364cc-104f-4237-9ad5-c121a1c3fba6"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 12:19:47 crc kubenswrapper[4689]: I1210 12:19:47.192075 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2e2364cc-104f-4237-9ad5-c121a1c3fba6-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "2e2364cc-104f-4237-9ad5-c121a1c3fba6" (UID: "2e2364cc-104f-4237-9ad5-c121a1c3fba6"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 12:19:47 crc kubenswrapper[4689]: I1210 12:19:47.193181 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2e2364cc-104f-4237-9ad5-c121a1c3fba6-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "2e2364cc-104f-4237-9ad5-c121a1c3fba6" (UID: "2e2364cc-104f-4237-9ad5-c121a1c3fba6"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 12:19:47 crc kubenswrapper[4689]: I1210 12:19:47.193506 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2e2364cc-104f-4237-9ad5-c121a1c3fba6-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "2e2364cc-104f-4237-9ad5-c121a1c3fba6" (UID: "2e2364cc-104f-4237-9ad5-c121a1c3fba6"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 12:19:47 crc kubenswrapper[4689]: I1210 12:19:47.194066 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2e2364cc-104f-4237-9ad5-c121a1c3fba6-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "2e2364cc-104f-4237-9ad5-c121a1c3fba6" (UID: "2e2364cc-104f-4237-9ad5-c121a1c3fba6"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 12:19:47 crc kubenswrapper[4689]: I1210 12:19:47.194401 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2e2364cc-104f-4237-9ad5-c121a1c3fba6-kube-api-access-qf4w7" (OuterVolumeSpecName: "kube-api-access-qf4w7") pod "2e2364cc-104f-4237-9ad5-c121a1c3fba6" (UID: "2e2364cc-104f-4237-9ad5-c121a1c3fba6"). InnerVolumeSpecName "kube-api-access-qf4w7". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 12:19:47 crc kubenswrapper[4689]: I1210 12:19:47.194446 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2e2364cc-104f-4237-9ad5-c121a1c3fba6-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "2e2364cc-104f-4237-9ad5-c121a1c3fba6" (UID: "2e2364cc-104f-4237-9ad5-c121a1c3fba6"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 12:19:47 crc kubenswrapper[4689]: I1210 12:19:47.195570 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2e2364cc-104f-4237-9ad5-c121a1c3fba6-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "2e2364cc-104f-4237-9ad5-c121a1c3fba6" (UID: "2e2364cc-104f-4237-9ad5-c121a1c3fba6"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 12:19:47 crc kubenswrapper[4689]: I1210 12:19:47.197242 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2e2364cc-104f-4237-9ad5-c121a1c3fba6-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "2e2364cc-104f-4237-9ad5-c121a1c3fba6" (UID: "2e2364cc-104f-4237-9ad5-c121a1c3fba6"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 12:19:47 crc kubenswrapper[4689]: I1210 12:19:47.287878 4689 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/2e2364cc-104f-4237-9ad5-c121a1c3fba6-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Dec 10 12:19:47 crc kubenswrapper[4689]: I1210 12:19:47.287927 4689 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/2e2364cc-104f-4237-9ad5-c121a1c3fba6-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Dec 10 12:19:47 crc kubenswrapper[4689]: I1210 12:19:47.287947 4689 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/2e2364cc-104f-4237-9ad5-c121a1c3fba6-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Dec 10 12:19:47 crc kubenswrapper[4689]: I1210 12:19:47.287966 4689 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2e2364cc-104f-4237-9ad5-c121a1c3fba6-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 10 12:19:47 crc kubenswrapper[4689]: I1210 12:19:47.288012 4689 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qf4w7\" (UniqueName: \"kubernetes.io/projected/2e2364cc-104f-4237-9ad5-c121a1c3fba6-kube-api-access-qf4w7\") on node \"crc\" DevicePath \"\"" Dec 10 12:19:47 crc kubenswrapper[4689]: I1210 12:19:47.288031 4689 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/2e2364cc-104f-4237-9ad5-c121a1c3fba6-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Dec 10 12:19:47 crc kubenswrapper[4689]: I1210 12:19:47.288052 4689 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/2e2364cc-104f-4237-9ad5-c121a1c3fba6-audit-policies\") on node \"crc\" DevicePath \"\"" Dec 10 12:19:47 crc kubenswrapper[4689]: I1210 12:19:47.288071 4689 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/2e2364cc-104f-4237-9ad5-c121a1c3fba6-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Dec 10 12:19:47 crc kubenswrapper[4689]: I1210 12:19:47.288088 4689 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/2e2364cc-104f-4237-9ad5-c121a1c3fba6-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Dec 10 12:19:47 crc kubenswrapper[4689]: I1210 12:19:47.288107 4689 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/2e2364cc-104f-4237-9ad5-c121a1c3fba6-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Dec 10 12:19:47 crc kubenswrapper[4689]: I1210 12:19:47.288126 4689 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/2e2364cc-104f-4237-9ad5-c121a1c3fba6-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Dec 10 12:19:47 crc kubenswrapper[4689]: I1210 12:19:47.288147 4689 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: 
\"kubernetes.io/configmap/2e2364cc-104f-4237-9ad5-c121a1c3fba6-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Dec 10 12:19:47 crc kubenswrapper[4689]: I1210 12:19:47.288165 4689 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/2e2364cc-104f-4237-9ad5-c121a1c3fba6-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 10 12:19:47 crc kubenswrapper[4689]: I1210 12:19:47.893635 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-7jsv6" event={"ID":"2e2364cc-104f-4237-9ad5-c121a1c3fba6","Type":"ContainerDied","Data":"c35aa857b9f1323eeddbe3c49a31ec22c4daf3e6d186f3da7e568307139e4aff"} Dec 10 12:19:47 crc kubenswrapper[4689]: I1210 12:19:47.893736 4689 scope.go:117] "RemoveContainer" containerID="ee8f8cc2e8a4c7b84e05bdc36b4ce7838702e7ddd738636ac5333b44b159c594" Dec 10 12:19:47 crc kubenswrapper[4689]: I1210 12:19:47.899095 4689 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-7jsv6" Dec 10 12:19:47 crc kubenswrapper[4689]: I1210 12:19:47.906171 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"88fd0253944388e9731a5bd6a9d16888f17837782f12446e14499dc03c535a65"} Dec 10 12:19:47 crc kubenswrapper[4689]: I1210 12:19:47.906212 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"49bc1ba0ea0e3af658a1dc856a65701486fc7d6598a7f1f9aaab23faaa069a29"} Dec 10 12:19:48 crc kubenswrapper[4689]: I1210 12:19:48.003142 4689 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-7jsv6 container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.22:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 10 12:19:48 crc kubenswrapper[4689]: I1210 12:19:48.003243 4689 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-7jsv6" podUID="2e2364cc-104f-4237-9ad5-c121a1c3fba6" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.22:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Dec 10 12:19:48 crc kubenswrapper[4689]: I1210 12:19:48.916379 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"11e82957ca29b5e6214c81ac93f5e09c9781c7d3ef29698f9fb24e98eaf4ff7c"} Dec 10 12:19:48 crc kubenswrapper[4689]: I1210 12:19:48.916621 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"e0022adabf4f6c8677c964537106cce25806d530788631aa7840634646aa457a"} Dec 10 12:19:48 crc kubenswrapper[4689]: I1210 12:19:48.916638 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 10 12:19:48 crc kubenswrapper[4689]: I1210 12:19:48.916646 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"962c5a7a194660a281afbf3b79919b3c64e170fccec4bcd7ac546a3b43d1c0fb"} Dec 10 12:19:48 crc kubenswrapper[4689]: I1210 12:19:48.916594 4689 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="948e2421-6bdf-45d9-b484-3d7cfcbeff5f" Dec 10 12:19:48 crc kubenswrapper[4689]: I1210 12:19:48.916661 4689 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="948e2421-6bdf-45d9-b484-3d7cfcbeff5f" Dec 10 12:19:49 crc kubenswrapper[4689]: I1210 12:19:49.316270 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 10 12:19:51 crc kubenswrapper[4689]: I1210 12:19:51.518634 4689 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 10 12:19:51 crc kubenswrapper[4689]: I1210 12:19:51.518960 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 10 12:19:51 crc kubenswrapper[4689]: I1210 12:19:51.528909 4689 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 10 12:19:53 crc kubenswrapper[4689]: I1210 12:19:53.926634 4689 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 10 12:19:53 crc kubenswrapper[4689]: I1210 12:19:53.946646 4689 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="948e2421-6bdf-45d9-b484-3d7cfcbeff5f" Dec 10 12:19:53 crc kubenswrapper[4689]: I1210 12:19:53.946677 4689 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="948e2421-6bdf-45d9-b484-3d7cfcbeff5f" Dec 10 12:19:53 crc kubenswrapper[4689]: I1210 12:19:53.950460 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 10 12:19:54 crc kubenswrapper[4689]: I1210 12:19:54.011653 4689 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="5472b731-6b5f-488c-8eaa-76aa812e5494" Dec 10 12:19:54 crc kubenswrapper[4689]: I1210 12:19:54.153835 4689 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 10 12:19:54 crc kubenswrapper[4689]: I1210 12:19:54.163449 4689 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 10 12:19:54 crc kubenswrapper[4689]: I1210 12:19:54.954289 4689 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="948e2421-6bdf-45d9-b484-3d7cfcbeff5f" Dec 10 12:19:54 crc kubenswrapper[4689]: I1210 12:19:54.954351 4689 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="948e2421-6bdf-45d9-b484-3d7cfcbeff5f" Dec 10 12:19:54 crc kubenswrapper[4689]: I1210 12:19:54.961072 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 10 12:20:01 crc kubenswrapper[4689]: I1210 12:20:01.524779 4689 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 10 12:20:01 crc kubenswrapper[4689]: I1210 12:20:01.526096 4689 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="948e2421-6bdf-45d9-b484-3d7cfcbeff5f" Dec 10 12:20:01 crc kubenswrapper[4689]: I1210 12:20:01.526125 4689 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="948e2421-6bdf-45d9-b484-3d7cfcbeff5f" Dec 10 12:20:02 crc kubenswrapper[4689]: I1210 12:20:02.517686 4689 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="5472b731-6b5f-488c-8eaa-76aa812e5494" Dec 10 12:20:04 crc kubenswrapper[4689]: I1210 12:20:04.279825 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Dec 10 12:20:06 crc kubenswrapper[4689]: I1210 12:20:06.197381 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Dec 10 12:20:06 crc kubenswrapper[4689]: I1210 12:20:06.428298 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Dec 10 12:20:06 crc kubenswrapper[4689]: I1210 12:20:06.630116 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Dec 10 12:20:06 crc kubenswrapper[4689]: I1210 12:20:06.655683 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Dec 10 12:20:06 crc kubenswrapper[4689]: I1210 12:20:06.657522 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Dec 10 12:20:06 crc kubenswrapper[4689]: I1210 12:20:06.814812 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Dec 10 12:20:06 crc kubenswrapper[4689]: I1210 12:20:06.815143 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Dec 10 12:20:07 crc kubenswrapper[4689]: I1210 12:20:07.091886 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Dec 10 12:20:07 crc kubenswrapper[4689]: I1210 12:20:07.176337 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Dec 10 12:20:07 crc kubenswrapper[4689]: I1210 12:20:07.214148 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Dec 10 12:20:07 crc kubenswrapper[4689]: I1210 12:20:07.314377 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Dec 10 12:20:07 crc kubenswrapper[4689]: I1210 12:20:07.422009 4689 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Dec 10 12:20:07 crc kubenswrapper[4689]: I1210 12:20:07.423823 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-28k8w" podStartSLOduration=42.026363003 podStartE2EDuration="2m3.423793569s" podCreationTimestamp="2025-12-10 12:18:04 +0000 UTC" firstStartedPulling="2025-12-10 12:18:05.851045539 +0000 UTC 
m=+153.639126677" lastFinishedPulling="2025-12-10 12:19:27.248476085 +0000 UTC m=+235.036557243" observedRunningTime="2025-12-10 12:19:53.967136562 +0000 UTC m=+261.755217710" watchObservedRunningTime="2025-12-10 12:20:07.423793569 +0000 UTC m=+275.211874757"
Dec 10 12:20:07 crc kubenswrapper[4689]: I1210 12:20:07.431340 4689 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-7jsv6","openshift-kube-apiserver/kube-apiserver-crc"]
Dec 10 12:20:07 crc kubenswrapper[4689]: I1210 12:20:07.431429 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"]
Dec 10 12:20:07 crc kubenswrapper[4689]: I1210 12:20:07.456720 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=14.456695579 podStartE2EDuration="14.456695579s" podCreationTimestamp="2025-12-10 12:19:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 12:20:07.452700596 +0000 UTC m=+275.240781754" watchObservedRunningTime="2025-12-10 12:20:07.456695579 +0000 UTC m=+275.244776757"
Dec 10 12:20:07 crc kubenswrapper[4689]: I1210 12:20:07.580057 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert"
Dec 10 12:20:07 crc kubenswrapper[4689]: I1210 12:20:07.621606 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt"
Dec 10 12:20:07 crc kubenswrapper[4689]: I1210 12:20:07.622218 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z"
Dec 10 12:20:07 crc kubenswrapper[4689]: I1210 12:20:07.644157 4689 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160
Dec 10 12:20:07 crc kubenswrapper[4689]: I1210 12:20:07.720911 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle"
Dec 10 12:20:07 crc kubenswrapper[4689]: I1210 12:20:07.844371 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default"
Dec 10 12:20:07 crc kubenswrapper[4689]: I1210 12:20:07.852426 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca"
Dec 10 12:20:07 crc kubenswrapper[4689]: I1210 12:20:07.925745 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert"
Dec 10 12:20:08 crc kubenswrapper[4689]: I1210 12:20:08.018517 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt"
Dec 10 12:20:08 crc kubenswrapper[4689]: I1210 12:20:08.148267 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca"
Dec 10 12:20:08 crc kubenswrapper[4689]: I1210 12:20:08.209757 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert"
Dec 10 12:20:08 crc kubenswrapper[4689]: I1210 12:20:08.214413 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr"
Dec 10 12:20:08 crc kubenswrapper[4689]: I1210 12:20:08.488254 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls"
Dec 10 12:20:08 crc kubenswrapper[4689]: I1210 12:20:08.506476 4689 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2e2364cc-104f-4237-9ad5-c121a1c3fba6" path="/var/lib/kubelet/pods/2e2364cc-104f-4237-9ad5-c121a1c3fba6/volumes"
Dec 10 12:20:08 crc kubenswrapper[4689]: I1210 12:20:08.670447 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls"
Dec 10 12:20:08 crc kubenswrapper[4689]: I1210 12:20:08.690198 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt"
Dec 10 12:20:08 crc kubenswrapper[4689]: I1210 12:20:08.942331 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert"
Dec 10 12:20:08 crc kubenswrapper[4689]: I1210 12:20:08.954525 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt"
Dec 10 12:20:09 crc kubenswrapper[4689]: I1210 12:20:09.132211 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources"
Dec 10 12:20:09 crc kubenswrapper[4689]: I1210 12:20:09.167950 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls"
Dec 10 12:20:09 crc kubenswrapper[4689]: I1210 12:20:09.238264 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm"
Dec 10 12:20:09 crc kubenswrapper[4689]: I1210 12:20:09.306193 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt"
Dec 10 12:20:09 crc kubenswrapper[4689]: I1210 12:20:09.457198 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt"
Dec 10 12:20:09 crc kubenswrapper[4689]: I1210 12:20:09.700671 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle"
Dec 10 12:20:09 crc kubenswrapper[4689]: I1210 12:20:09.704387 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt"
Dec 10 12:20:09 crc kubenswrapper[4689]: I1210 12:20:09.862047 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls"
Dec 10 12:20:09 crc kubenswrapper[4689]: I1210 12:20:09.906723 4689 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k"
Dec 10 12:20:10 crc kubenswrapper[4689]: I1210 12:20:10.169254 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt"
Dec 10 12:20:10 crc kubenswrapper[4689]: I1210 12:20:10.220028 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6"
Dec 10 12:20:10 crc kubenswrapper[4689]: I1210 12:20:10.311674 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca"
Dec 10 12:20:10 crc kubenswrapper[4689]: I1210 12:20:10.402728 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert"
Dec 10 12:20:10 crc kubenswrapper[4689]: I1210 12:20:10.622785 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt"
Dec 10 12:20:10 crc kubenswrapper[4689]: I1210 12:20:10.643327 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt"
Dec 10 12:20:10 crc kubenswrapper[4689]: I1210 12:20:10.662021 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt"
Dec 10 12:20:10 crc kubenswrapper[4689]: I1210 12:20:10.664068 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls"
Dec 10 12:20:11 crc kubenswrapper[4689]: I1210 12:20:11.105236 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw"
Dec 10 12:20:11 crc kubenswrapper[4689]: I1210 12:20:11.129296 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config"
Dec 10 12:20:11 crc kubenswrapper[4689]: I1210 12:20:11.169918 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt"
Dec 10 12:20:11 crc kubenswrapper[4689]: I1210 12:20:11.172253 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt"
Dec 10 12:20:11 crc kubenswrapper[4689]: I1210 12:20:11.298396 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca"
Dec 10 12:20:11 crc kubenswrapper[4689]: I1210 12:20:11.616949 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7"
Dec 10 12:20:11 crc kubenswrapper[4689]: I1210 12:20:11.660048 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin"
Dec 10 12:20:11 crc kubenswrapper[4689]: I1210 12:20:11.669606 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt"
Dec 10 12:20:11 crc kubenswrapper[4689]: I1210 12:20:11.855299 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy"
Dec 10 12:20:11 crc kubenswrapper[4689]: I1210 12:20:11.971993 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt"
Dec 10 12:20:12 crc kubenswrapper[4689]: I1210 12:20:12.082281 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert"
Dec 10 12:20:12 crc kubenswrapper[4689]: I1210 12:20:12.309535 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert"
Dec 10 12:20:12 crc kubenswrapper[4689]: I1210 12:20:12.528826 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt"
Dec 10 12:20:12 crc kubenswrapper[4689]: I1210 12:20:12.604100 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca"
Dec 10 12:20:12 crc kubenswrapper[4689]: I1210 12:20:12.886551 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt"
Dec 10 12:20:13 crc kubenswrapper[4689]: I1210 12:20:13.035082 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk"
Dec 10 12:20:13 crc kubenswrapper[4689]: I1210 12:20:13.106316 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt"
Dec 10 12:20:13 crc kubenswrapper[4689]: I1210 12:20:13.117694 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt"
Dec 10 12:20:13 crc kubenswrapper[4689]: I1210 12:20:13.268490 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt"
Dec 10 12:20:13 crc kubenswrapper[4689]: I1210 12:20:13.411703 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt"
Dec 10 12:20:13 crc kubenswrapper[4689]: I1210 12:20:13.562300 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx"
Dec 10 12:20:13 crc kubenswrapper[4689]: I1210 12:20:13.624586 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config"
Dec 10 12:20:13 crc kubenswrapper[4689]: I1210 12:20:13.901283 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg"
Dec 10 12:20:14 crc kubenswrapper[4689]: I1210 12:20:14.086764 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt"
Dec 10 12:20:14 crc kubenswrapper[4689]: I1210 12:20:14.241679 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w"
Dec 10 12:20:14 crc kubenswrapper[4689]: I1210 12:20:14.418791 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert"
Dec 10 12:20:14 crc kubenswrapper[4689]: I1210 12:20:14.451645 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls"
Dec 10 12:20:14 crc kubenswrapper[4689]: I1210 12:20:14.671740 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert"
Dec 10 12:20:14 crc kubenswrapper[4689]: I1210 12:20:14.786965 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret"
Dec 10 12:20:14 crc kubenswrapper[4689]: I1210 12:20:14.818193 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw"
Dec 10 12:20:14 crc kubenswrapper[4689]: I1210 12:20:14.886542 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert"
Dec 10 12:20:14 crc kubenswrapper[4689]: I1210 12:20:14.985180 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt"
Dec 10 12:20:15 crc kubenswrapper[4689]: I1210 12:20:15.016920 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client"
Dec 10 12:20:15 crc kubenswrapper[4689]: I1210 12:20:15.294909 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c"
Dec 10 12:20:15 crc kubenswrapper[4689]: I1210 12:20:15.451192 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client"
Dec 10 12:20:15 crc kubenswrapper[4689]: I1210 12:20:15.462604 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk"
Dec 10 12:20:15 crc kubenswrapper[4689]: I1210 12:20:15.493240 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt"
Dec 10 12:20:15 crc kubenswrapper[4689]: I1210 12:20:15.533476 4689 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160
Dec 10 12:20:15 crc kubenswrapper[4689]: I1210 12:20:15.577956 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config"
Dec 10 12:20:15 crc kubenswrapper[4689]: I1210 12:20:15.636158 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd"
Dec 10 12:20:15 crc kubenswrapper[4689]: I1210 12:20:15.785878 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle"
Dec 10 12:20:16 crc kubenswrapper[4689]: I1210 12:20:16.026198 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key"
Dec 10 12:20:16 crc kubenswrapper[4689]: I1210 12:20:16.026481 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq"
Dec 10 12:20:16 crc kubenswrapper[4689]: I1210 12:20:16.063103 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb"
Dec 10 12:20:16 crc kubenswrapper[4689]: I1210 12:20:16.189033 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt"
Dec 10 12:20:16 crc kubenswrapper[4689]: I1210 12:20:16.455125 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c"
Dec 10 12:20:16 crc kubenswrapper[4689]: I1210 12:20:16.615188 4689 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"]
Dec 10 12:20:16 crc kubenswrapper[4689]: I1210 12:20:16.615610 4689 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" containerID="cri-o://dea90fc483f52e820a200ffba777e0efd8adeeaf9b8a1f5ee86dd084a8d237a7" gracePeriod=5
Dec 10 12:20:16 crc kubenswrapper[4689]: I1210 12:20:16.793707 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt"
Dec 10 12:20:16 crc kubenswrapper[4689]: I1210 12:20:16.794545 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert"
Dec 10 12:20:17 crc kubenswrapper[4689]: I1210 12:20:17.102313 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert"
Dec 10 12:20:17 crc kubenswrapper[4689]: I1210 12:20:17.284943 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config"
Dec 10 12:20:17 crc kubenswrapper[4689]: I1210 12:20:17.360317 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca"
Dec 10 12:20:17 crc kubenswrapper[4689]: I1210 12:20:17.479060 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config"
Dec 10 12:20:17 crc kubenswrapper[4689]: I1210 12:20:17.567379 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p"
Dec 10 12:20:17 crc kubenswrapper[4689]: I1210 12:20:17.781325 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt"
Dec 10 12:20:17 crc kubenswrapper[4689]: I1210 12:20:17.845203 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config"
Dec 10 12:20:17 crc kubenswrapper[4689]: I1210 12:20:17.915644 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt"
Dec 10 12:20:17 crc kubenswrapper[4689]: I1210 12:20:17.939549 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle"
Dec 10 12:20:18 crc kubenswrapper[4689]: I1210 12:20:18.013321 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl"
Dec 10 12:20:18 crc kubenswrapper[4689]: I1210 12:20:18.105335 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn"
Dec 10 12:20:18 crc kubenswrapper[4689]: I1210 12:20:18.319876 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt"
Dec 10 12:20:18 crc kubenswrapper[4689]: I1210 12:20:18.473294 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca"
Dec 10 12:20:18 crc kubenswrapper[4689]: I1210 12:20:18.642757 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt"
Dec 10 12:20:18 crc kubenswrapper[4689]: I1210 12:20:18.677536 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt"
Dec 10 12:20:18 crc kubenswrapper[4689]: I1210 12:20:18.719454 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86"
Dec 10 12:20:18 crc kubenswrapper[4689]: I1210 12:20:18.947254 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert"
Dec 10 12:20:19 crc kubenswrapper[4689]: I1210 12:20:19.047720 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2"
Dec 10 12:20:19 crc kubenswrapper[4689]: I1210 12:20:19.269125 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt"
Dec 10 12:20:19 crc kubenswrapper[4689]: I1210 12:20:19.291966 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt"
Dec 10 12:20:19 crc kubenswrapper[4689]: I1210 12:20:19.427870 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt"
Dec 10 12:20:19 crc kubenswrapper[4689]: I1210 12:20:19.714936 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default"
Dec 10 12:20:22 crc kubenswrapper[4689]: I1210 12:20:22.136818 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log"
Dec 10 12:20:22 crc kubenswrapper[4689]: I1210 12:20:22.137297 4689 generic.go:334] "Generic (PLEG): container finished" podID="f85e55b1a89d02b0cb034b1ea31ed45a" containerID="dea90fc483f52e820a200ffba777e0efd8adeeaf9b8a1f5ee86dd084a8d237a7" exitCode=137
Dec 10 12:20:22 crc kubenswrapper[4689]: I1210 12:20:22.246375 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log"
Dec 10 12:20:22 crc kubenswrapper[4689]: I1210 12:20:22.246456 4689 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Dec 10 12:20:22 crc kubenswrapper[4689]: I1210 12:20:22.364594 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") "
Dec 10 12:20:22 crc kubenswrapper[4689]: I1210 12:20:22.365106 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") "
Dec 10 12:20:22 crc kubenswrapper[4689]: I1210 12:20:22.364780 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests" (OuterVolumeSpecName: "manifests") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "manifests". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 10 12:20:22 crc kubenswrapper[4689]: I1210 12:20:22.365174 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log" (OuterVolumeSpecName: "var-log") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 10 12:20:22 crc kubenswrapper[4689]: I1210 12:20:22.365469 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") "
Dec 10 12:20:22 crc kubenswrapper[4689]: I1210 12:20:22.365629 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") "
Dec 10 12:20:22 crc kubenswrapper[4689]: I1210 12:20:22.365701 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") "
Dec 10 12:20:22 crc kubenswrapper[4689]: I1210 12:20:22.365811 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 10 12:20:22 crc kubenswrapper[4689]: I1210 12:20:22.365897 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock" (OuterVolumeSpecName: "var-lock") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 10 12:20:22 crc kubenswrapper[4689]: I1210 12:20:22.366180 4689 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") on node \"crc\" DevicePath \"\""
Dec 10 12:20:22 crc kubenswrapper[4689]: I1210 12:20:22.366220 4689 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") on node \"crc\" DevicePath \"\""
Dec 10 12:20:22 crc kubenswrapper[4689]: I1210 12:20:22.366245 4689 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") on node \"crc\" DevicePath \"\""
Dec 10 12:20:22 crc kubenswrapper[4689]: I1210 12:20:22.366271 4689 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") on node \"crc\" DevicePath \"\""
Dec 10 12:20:22 crc kubenswrapper[4689]: I1210 12:20:22.376847 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "pod-resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 10 12:20:22 crc kubenswrapper[4689]: I1210 12:20:22.467701 4689 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") on node \"crc\" DevicePath \"\""
Dec 10 12:20:22 crc kubenswrapper[4689]: I1210 12:20:22.511798 4689 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" path="/var/lib/kubelet/pods/f85e55b1a89d02b0cb034b1ea31ed45a/volumes"
Dec 10 12:20:23 crc kubenswrapper[4689]: I1210 12:20:23.144641 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log"
Dec 10 12:20:23 crc kubenswrapper[4689]: I1210 12:20:23.144721 4689 scope.go:117] "RemoveContainer" containerID="dea90fc483f52e820a200ffba777e0efd8adeeaf9b8a1f5ee86dd084a8d237a7"
Dec 10 12:20:23 crc kubenswrapper[4689]: I1210 12:20:23.144852 4689 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Dec 10 12:20:26 crc kubenswrapper[4689]: I1210 12:20:26.679849 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert"
Dec 10 12:20:29 crc kubenswrapper[4689]: I1210 12:20:29.377700 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt"
Dec 10 12:20:31 crc kubenswrapper[4689]: I1210 12:20:31.501102 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt"
Dec 10 12:20:31 crc kubenswrapper[4689]: I1210 12:20:31.525441 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt"
Dec 10 12:20:31 crc kubenswrapper[4689]: I1210 12:20:31.905324 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config"
Dec 10 12:20:32 crc kubenswrapper[4689]: I1210 12:20:32.363700 4689 cert_rotation.go:91] certificate rotation detected, shutting down client connections to start using new credentials
Dec 10 12:20:33 crc kubenswrapper[4689]: I1210 12:20:33.300755 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg"
Dec 10 12:20:33 crc kubenswrapper[4689]: I1210 12:20:33.398099 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd"
Dec 10 12:20:33 crc kubenswrapper[4689]: I1210 12:20:33.470104 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g"
Dec 10 12:20:33 crc kubenswrapper[4689]: I1210 12:20:33.742875 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script"
Dec 10 12:20:33 crc kubenswrapper[4689]: I1210 12:20:33.850325 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls"
Dec 10 12:20:34 crc kubenswrapper[4689]: I1210 12:20:34.226515 4689 generic.go:334] "Generic (PLEG): container finished" podID="88d80bbe-a9ab-4c91-b0b2-485e106dd150" containerID="959b0b1179d522d266cd5353b9313a9bb5dd0f748cb8f893bc5802f46f7896b8" exitCode=0
Dec 10 12:20:34 crc kubenswrapper[4689]: I1210 12:20:34.226578 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-9wlmb" event={"ID":"88d80bbe-a9ab-4c91-b0b2-485e106dd150","Type":"ContainerDied","Data":"959b0b1179d522d266cd5353b9313a9bb5dd0f748cb8f893bc5802f46f7896b8"}
Dec 10 12:20:34 crc kubenswrapper[4689]: I1210 12:20:34.227310 4689 scope.go:117] "RemoveContainer" containerID="959b0b1179d522d266cd5353b9313a9bb5dd0f748cb8f893bc5802f46f7896b8"
Dec 10 12:20:34 crc kubenswrapper[4689]: I1210 12:20:34.304317 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt"
Dec 10 12:20:34 crc kubenswrapper[4689]: I1210 12:20:34.471766 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert"
Dec 10 12:20:35 crc kubenswrapper[4689]: I1210 12:20:35.237424 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-9wlmb" event={"ID":"88d80bbe-a9ab-4c91-b0b2-485e106dd150","Type":"ContainerStarted","Data":"5f2b487045cdea0bc6cfe8786a69403e961b1587cd97296161234fa4626b14ff"}
Dec 10 12:20:35 crc kubenswrapper[4689]: I1210 12:20:35.237954 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-9wlmb"
Dec 10 12:20:35 crc kubenswrapper[4689]: I1210 12:20:35.239632 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-9wlmb"
Dec 10 12:20:35 crc kubenswrapper[4689]: I1210 12:20:35.453521 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr"
Dec 10 12:20:35 crc kubenswrapper[4689]: I1210 12:20:35.507431 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt"
Dec 10 12:20:35 crc kubenswrapper[4689]: I1210 12:20:35.699322 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt"
Dec 10 12:20:35 crc kubenswrapper[4689]: I1210 12:20:35.764753 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d"
Dec 10 12:20:35 crc kubenswrapper[4689]: I1210 12:20:35.805609 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy"
Dec 10 12:20:36 crc kubenswrapper[4689]: I1210 12:20:36.223646 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert"
Dec 10 12:20:36 crc kubenswrapper[4689]: I1210 12:20:36.237992 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt"
Dec 10 12:20:36 crc kubenswrapper[4689]: I1210 12:20:36.271648 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert"
Dec 10 12:20:36 crc kubenswrapper[4689]: I1210 12:20:36.284167 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert"
Dec 10 12:20:36 crc kubenswrapper[4689]: I1210 12:20:36.616390 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1"
Dec 10 12:20:36 crc kubenswrapper[4689]: I1210 12:20:36.686336 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config"
Dec 10 12:20:37 crc kubenswrapper[4689]: I1210 12:20:37.085154 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle"
Dec 10 12:20:37 crc kubenswrapper[4689]: I1210 12:20:37.109929 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt"
Dec 10 12:20:37 crc kubenswrapper[4689]: I1210 12:20:37.191882 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt"
Dec 10 12:20:37 crc kubenswrapper[4689]: I1210 12:20:37.411006 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt"
Dec 10 12:20:37 crc kubenswrapper[4689]: I1210 12:20:37.640043 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7"
Dec 10 12:20:38 crc kubenswrapper[4689]: I1210 12:20:38.051285 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config"
Dec 10 12:20:38 crc kubenswrapper[4689]: I1210 12:20:38.244210 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt"
Dec 10 12:20:38 crc kubenswrapper[4689]: I1210 12:20:38.282774 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert"
Dec 10 12:20:38 crc kubenswrapper[4689]: I1210 12:20:38.402157 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj"
Dec 10 12:20:38 crc kubenswrapper[4689]: I1210 12:20:38.466376 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4"
Dec 10 12:20:38 crc kubenswrapper[4689]: I1210 12:20:38.622528 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt"
Dec 10 12:20:38 crc kubenswrapper[4689]: I1210 12:20:38.699289 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert"
Dec 10 12:20:39 crc kubenswrapper[4689]: I1210 12:20:39.128306 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-f4f7798bf-m7frn"]
Dec 10 12:20:39 crc kubenswrapper[4689]: E1210 12:20:39.128524 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2e2364cc-104f-4237-9ad5-c121a1c3fba6" containerName="oauth-openshift"
Dec 10 12:20:39 crc kubenswrapper[4689]: I1210 12:20:39.128540 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="2e2364cc-104f-4237-9ad5-c121a1c3fba6" containerName="oauth-openshift"
Dec 10 12:20:39 crc kubenswrapper[4689]: E1210 12:20:39.128556 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor"
Dec 10 12:20:39 crc kubenswrapper[4689]: I1210 12:20:39.128566 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor"
Dec 10 12:20:39 crc kubenswrapper[4689]: E1210 12:20:39.128588 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ba69b6ca-5550-41d8-be54-d86e80a6aea6" containerName="installer"
Dec 10 12:20:39 crc kubenswrapper[4689]: I1210 12:20:39.128596 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="ba69b6ca-5550-41d8-be54-d86e80a6aea6" containerName="installer"
Dec 10 12:20:39 crc kubenswrapper[4689]: I1210 12:20:39.128706 4689 memory_manager.go:354] "RemoveStaleState removing state" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor"
Dec 10 12:20:39 crc kubenswrapper[4689]: I1210 12:20:39.128721 4689 memory_manager.go:354] "RemoveStaleState removing state" podUID="ba69b6ca-5550-41d8-be54-d86e80a6aea6" containerName="installer"
Dec 10 12:20:39 crc kubenswrapper[4689]: I1210 12:20:39.128730 4689 memory_manager.go:354] "RemoveStaleState removing state" podUID="2e2364cc-104f-4237-9ad5-c121a1c3fba6" containerName="oauth-openshift"
Dec 10 12:20:39 crc kubenswrapper[4689]: I1210 12:20:39.129155 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-f4f7798bf-m7frn"
Dec 10 12:20:39 crc kubenswrapper[4689]: I1210 12:20:39.132511 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session"
Dec 10 12:20:39 crc kubenswrapper[4689]: I1210 12:20:39.133385 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data"
Dec 10 12:20:39 crc kubenswrapper[4689]: I1210 12:20:39.134538 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert"
Dec 10 12:20:39 crc kubenswrapper[4689]: I1210 12:20:39.135596 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection"
Dec 10 12:20:39 crc kubenswrapper[4689]: I1210 12:20:39.135661 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc"
Dec 10 12:20:39 crc kubenswrapper[4689]: I1210 12:20:39.135659 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error"
Dec 10 12:20:39 crc kubenswrapper[4689]: I1210 12:20:39.135610 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca"
Dec 10 12:20:39 crc kubenswrapper[4689]: I1210 12:20:39.135822 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit"
Dec 10 12:20:39 crc kubenswrapper[4689]: I1210 12:20:39.136071 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt"
Dec 10 12:20:39 crc kubenswrapper[4689]: I1210 12:20:39.136298 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs"
Dec 10 12:20:39 crc kubenswrapper[4689]: I1210 12:20:39.136426 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt"
Dec 10 12:20:39 crc kubenswrapper[4689]: I1210 12:20:39.143588 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login"
Dec 10 12:20:39 crc kubenswrapper[4689]: I1210 12:20:39.145287 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig"
Dec 10 12:20:39 crc kubenswrapper[4689]: I1210 12:20:39.151307 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle"
Dec 10 12:20:39 crc kubenswrapper[4689]: I1210 12:20:39.157443 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template"
Dec 10 12:20:39 crc kubenswrapper[4689]: I1210 12:20:39.157914 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf"
Dec 10 12:20:39 crc kubenswrapper[4689]: I1210 12:20:39.205239 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/a82874c6-baaf-4ecb-a5dd-57d6364ff8f0-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-f4f7798bf-m7frn\" (UID: \"a82874c6-baaf-4ecb-a5dd-57d6364ff8f0\") " pod="openshift-authentication/oauth-openshift-f4f7798bf-m7frn"
Dec 10 12:20:39 crc kubenswrapper[4689]: I1210 12:20:39.205400 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/a82874c6-baaf-4ecb-a5dd-57d6364ff8f0-v4-0-config-user-template-login\") pod \"oauth-openshift-f4f7798bf-m7frn\" (UID: \"a82874c6-baaf-4ecb-a5dd-57d6364ff8f0\") " pod="openshift-authentication/oauth-openshift-f4f7798bf-m7frn"
Dec 10 12:20:39 crc kubenswrapper[4689]: I1210 12:20:39.205488 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/a82874c6-baaf-4ecb-a5dd-57d6364ff8f0-v4-0-config-system-serving-cert\") pod \"oauth-openshift-f4f7798bf-m7frn\" (UID: \"a82874c6-baaf-4ecb-a5dd-57d6364ff8f0\") " pod="openshift-authentication/oauth-openshift-f4f7798bf-m7frn"
Dec 10 12:20:39 crc kubenswrapper[4689]: I1210 12:20:39.205598 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/a82874c6-baaf-4ecb-a5dd-57d6364ff8f0-v4-0-config-system-router-certs\") pod \"oauth-openshift-f4f7798bf-m7frn\" (UID: \"a82874c6-baaf-4ecb-a5dd-57d6364ff8f0\") " pod="openshift-authentication/oauth-openshift-f4f7798bf-m7frn"
Dec 10 12:20:39 crc kubenswrapper[4689]: I1210 12:20:39.205634 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a82874c6-baaf-4ecb-a5dd-57d6364ff8f0-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-f4f7798bf-m7frn\" (UID: \"a82874c6-baaf-4ecb-a5dd-57d6364ff8f0\") " pod="openshift-authentication/oauth-openshift-f4f7798bf-m7frn"
Dec 10 12:20:39 crc kubenswrapper[4689]: I1210 12:20:39.205658 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/a82874c6-baaf-4ecb-a5dd-57d6364ff8f0-audit-dir\") pod \"oauth-openshift-f4f7798bf-m7frn\" (UID: \"a82874c6-baaf-4ecb-a5dd-57d6364ff8f0\") " pod="openshift-authentication/oauth-openshift-f4f7798bf-m7frn"
Dec 10 12:20:39 crc kubenswrapper[4689]: I1210 12:20:39.205686 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/a82874c6-baaf-4ecb-a5dd-57d6364ff8f0-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-f4f7798bf-m7frn\" (UID: \"a82874c6-baaf-4ecb-a5dd-57d6364ff8f0\") " pod="openshift-authentication/oauth-openshift-f4f7798bf-m7frn"
Dec 10 12:20:39 crc kubenswrapper[4689]: I1210 12:20:39.205795 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/a82874c6-baaf-4ecb-a5dd-57d6364ff8f0-audit-policies\") pod \"oauth-openshift-f4f7798bf-m7frn\" (UID: \"a82874c6-baaf-4ecb-a5dd-57d6364ff8f0\") " pod="openshift-authentication/oauth-openshift-f4f7798bf-m7frn"
Dec 10 12:20:39 crc kubenswrapper[4689]: I1210 12:20:39.205931 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ps679\" (UniqueName: \"kubernetes.io/projected/a82874c6-baaf-4ecb-a5dd-57d6364ff8f0-kube-api-access-ps679\") pod \"oauth-openshift-f4f7798bf-m7frn\" (UID: \"a82874c6-baaf-4ecb-a5dd-57d6364ff8f0\") " pod="openshift-authentication/oauth-openshift-f4f7798bf-m7frn"
Dec 10 12:20:39 crc kubenswrapper[4689]: I1210 12:20:39.206008 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/a82874c6-baaf-4ecb-a5dd-57d6364ff8f0-v4-0-config-system-service-ca\") pod \"oauth-openshift-f4f7798bf-m7frn\" (UID: \"a82874c6-baaf-4ecb-a5dd-57d6364ff8f0\") " pod="openshift-authentication/oauth-openshift-f4f7798bf-m7frn"
Dec 10 12:20:39 crc kubenswrapper[4689]: I1210 12:20:39.206066 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/a82874c6-baaf-4ecb-a5dd-57d6364ff8f0-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-f4f7798bf-m7frn\" (UID: \"a82874c6-baaf-4ecb-a5dd-57d6364ff8f0\") " pod="openshift-authentication/oauth-openshift-f4f7798bf-m7frn"
Dec 10 12:20:39 crc kubenswrapper[4689]: I1210 12:20:39.206115 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/a82874c6-baaf-4ecb-a5dd-57d6364ff8f0-v4-0-config-user-template-error\") pod \"oauth-openshift-f4f7798bf-m7frn\" (UID: \"a82874c6-baaf-4ecb-a5dd-57d6364ff8f0\") " pod="openshift-authentication/oauth-openshift-f4f7798bf-m7frn"
Dec 10 12:20:39 crc kubenswrapper[4689]: I1210 12:20:39.206208 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/a82874c6-baaf-4ecb-a5dd-57d6364ff8f0-v4-0-config-system-cliconfig\") pod \"oauth-openshift-f4f7798bf-m7frn\" (UID: \"a82874c6-baaf-4ecb-a5dd-57d6364ff8f0\") " pod="openshift-authentication/oauth-openshift-f4f7798bf-m7frn"
Dec 10 12:20:39 crc kubenswrapper[4689]: I1210 12:20:39.206233 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/a82874c6-baaf-4ecb-a5dd-57d6364ff8f0-v4-0-config-system-session\") pod \"oauth-openshift-f4f7798bf-m7frn\" (UID: \"a82874c6-baaf-4ecb-a5dd-57d6364ff8f0\") " pod="openshift-authentication/oauth-openshift-f4f7798bf-m7frn"
Dec 10 12:20:39 crc kubenswrapper[4689]: I1210 12:20:39.307898 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/a82874c6-baaf-4ecb-a5dd-57d6364ff8f0-v4-0-config-system-router-certs\") pod \"oauth-openshift-f4f7798bf-m7frn\" (UID: \"a82874c6-baaf-4ecb-a5dd-57d6364ff8f0\") " pod="openshift-authentication/oauth-openshift-f4f7798bf-m7frn"
Dec 10 12:20:39 crc kubenswrapper[4689]: I1210 12:20:39.307990 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a82874c6-baaf-4ecb-a5dd-57d6364ff8f0-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-f4f7798bf-m7frn\" (UID: \"a82874c6-baaf-4ecb-a5dd-57d6364ff8f0\") " pod="openshift-authentication/oauth-openshift-f4f7798bf-m7frn"
Dec 10 12:20:39 crc kubenswrapper[4689]: I1210 12:20:39.308019 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/a82874c6-baaf-4ecb-a5dd-57d6364ff8f0-audit-dir\") pod \"oauth-openshift-f4f7798bf-m7frn\" (UID: \"a82874c6-baaf-4ecb-a5dd-57d6364ff8f0\") " pod="openshift-authentication/oauth-openshift-f4f7798bf-m7frn"
Dec 10 12:20:39 crc kubenswrapper[4689]: I1210 12:20:39.308046 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/a82874c6-baaf-4ecb-a5dd-57d6364ff8f0-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-f4f7798bf-m7frn\" (UID: \"a82874c6-baaf-4ecb-a5dd-57d6364ff8f0\") " pod="openshift-authentication/oauth-openshift-f4f7798bf-m7frn"
Dec 10 12:20:39 crc kubenswrapper[4689]: I1210 12:20:39.308076 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/a82874c6-baaf-4ecb-a5dd-57d6364ff8f0-audit-policies\") pod \"oauth-openshift-f4f7798bf-m7frn\" (UID: \"a82874c6-baaf-4ecb-a5dd-57d6364ff8f0\") " pod="openshift-authentication/oauth-openshift-f4f7798bf-m7frn"
Dec 10 12:20:39 crc kubenswrapper[4689]: I1210 12:20:39.308111 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ps679\" (UniqueName: \"kubernetes.io/projected/a82874c6-baaf-4ecb-a5dd-57d6364ff8f0-kube-api-access-ps679\") pod \"oauth-openshift-f4f7798bf-m7frn\" (UID: \"a82874c6-baaf-4ecb-a5dd-57d6364ff8f0\") " pod="openshift-authentication/oauth-openshift-f4f7798bf-m7frn"
Dec 10 12:20:39 crc kubenswrapper[4689]: I1210 12:20:39.308144 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/a82874c6-baaf-4ecb-a5dd-57d6364ff8f0-v4-0-config-system-service-ca\") pod \"oauth-openshift-f4f7798bf-m7frn\" (UID: \"a82874c6-baaf-4ecb-a5dd-57d6364ff8f0\") " pod="openshift-authentication/oauth-openshift-f4f7798bf-m7frn"
Dec 10 12:20:39 crc kubenswrapper[4689]: I1210 12:20:39.308167 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/a82874c6-baaf-4ecb-a5dd-57d6364ff8f0-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-f4f7798bf-m7frn\" (UID: \"a82874c6-baaf-4ecb-a5dd-57d6364ff8f0\") " pod="openshift-authentication/oauth-openshift-f4f7798bf-m7frn"
Dec 10 12:20:39 crc kubenswrapper[4689]: I1210 12:20:39.308190 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/a82874c6-baaf-4ecb-a5dd-57d6364ff8f0-v4-0-config-user-template-error\") pod \"oauth-openshift-f4f7798bf-m7frn\" (UID: \"a82874c6-baaf-4ecb-a5dd-57d6364ff8f0\") " pod="openshift-authentication/oauth-openshift-f4f7798bf-m7frn"
Dec 10 12:20:39 crc kubenswrapper[4689]: I1210 12:20:39.308179 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/a82874c6-baaf-4ecb-a5dd-57d6364ff8f0-audit-dir\") pod \"oauth-openshift-f4f7798bf-m7frn\" (UID: \"a82874c6-baaf-4ecb-a5dd-57d6364ff8f0\") " pod="openshift-authentication/oauth-openshift-f4f7798bf-m7frn"
Dec 10 12:20:39 crc kubenswrapper[4689]: I1210 12:20:39.308227 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/a82874c6-baaf-4ecb-a5dd-57d6364ff8f0-v4-0-config-system-session\") pod \"oauth-openshift-f4f7798bf-m7frn\" (UID: \"a82874c6-baaf-4ecb-a5dd-57d6364ff8f0\") " pod="openshift-authentication/oauth-openshift-f4f7798bf-m7frn"
Dec 10 12:20:39 crc kubenswrapper[4689]: I1210 12:20:39.308397 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/a82874c6-baaf-4ecb-a5dd-57d6364ff8f0-v4-0-config-system-cliconfig\") pod \"oauth-openshift-f4f7798bf-m7frn\" (UID: \"a82874c6-baaf-4ecb-a5dd-57d6364ff8f0\") " pod="openshift-authentication/oauth-openshift-f4f7798bf-m7frn"
Dec 10 12:20:39 crc kubenswrapper[4689]: I1210 12:20:39.308511 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/a82874c6-baaf-4ecb-a5dd-57d6364ff8f0-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-f4f7798bf-m7frn\" (UID: \"a82874c6-baaf-4ecb-a5dd-57d6364ff8f0\") " pod="openshift-authentication/oauth-openshift-f4f7798bf-m7frn"
Dec 10 12:20:39 crc kubenswrapper[4689]: I1210 12:20:39.308566 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/a82874c6-baaf-4ecb-a5dd-57d6364ff8f0-v4-0-config-user-template-login\") pod \"oauth-openshift-f4f7798bf-m7frn\" (UID: \"a82874c6-baaf-4ecb-a5dd-57d6364ff8f0\") " pod="openshift-authentication/oauth-openshift-f4f7798bf-m7frn"
Dec 10 12:20:39 crc kubenswrapper[4689]: I1210 12:20:39.308630 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/a82874c6-baaf-4ecb-a5dd-57d6364ff8f0-v4-0-config-system-serving-cert\") pod \"oauth-openshift-f4f7798bf-m7frn\" (UID: \"a82874c6-baaf-4ecb-a5dd-57d6364ff8f0\") " pod="openshift-authentication/oauth-openshift-f4f7798bf-m7frn"
Dec 10 12:20:39 crc kubenswrapper[4689]: I1210 12:20:39.309273 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/a82874c6-baaf-4ecb-a5dd-57d6364ff8f0-audit-policies\") pod \"oauth-openshift-f4f7798bf-m7frn\" (UID: \"a82874c6-baaf-4ecb-a5dd-57d6364ff8f0\") " pod="openshift-authentication/oauth-openshift-f4f7798bf-m7frn"
Dec 10 12:20:39 crc kubenswrapper[4689]: I1210 12:20:39.309357 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/a82874c6-baaf-4ecb-a5dd-57d6364ff8f0-v4-0-config-system-cliconfig\") pod \"oauth-openshift-f4f7798bf-m7frn\" (UID: \"a82874c6-baaf-4ecb-a5dd-57d6364ff8f0\") " pod="openshift-authentication/oauth-openshift-f4f7798bf-m7frn"
Dec 10 12:20:39 crc kubenswrapper[4689]: I1210 12:20:39.309450 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/a82874c6-baaf-4ecb-a5dd-57d6364ff8f0-v4-0-config-system-service-ca\") pod \"oauth-openshift-f4f7798bf-m7frn\" (UID: \"a82874c6-baaf-4ecb-a5dd-57d6364ff8f0\") " pod="openshift-authentication/oauth-openshift-f4f7798bf-m7frn"
Dec 10 12:20:39 crc kubenswrapper[4689]: I1210 12:20:39.310384 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a82874c6-baaf-4ecb-a5dd-57d6364ff8f0-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-f4f7798bf-m7frn\" (UID: \"a82874c6-baaf-4ecb-a5dd-57d6364ff8f0\") " pod="openshift-authentication/oauth-openshift-f4f7798bf-m7frn"
Dec 10 12:20:39 crc kubenswrapper[4689]: I1210 12:20:39.314315 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/a82874c6-baaf-4ecb-a5dd-57d6364ff8f0-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-f4f7798bf-m7frn\" (UID: \"a82874c6-baaf-4ecb-a5dd-57d6364ff8f0\") " pod="openshift-authentication/oauth-openshift-f4f7798bf-m7frn"
Dec 10 12:20:39 crc kubenswrapper[4689]: I1210 12:20:39.315386 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/a82874c6-baaf-4ecb-a5dd-57d6364ff8f0-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-f4f7798bf-m7frn\" (UID: \"a82874c6-baaf-4ecb-a5dd-57d6364ff8f0\") " pod="openshift-authentication/oauth-openshift-f4f7798bf-m7frn"
Dec 10 12:20:39 crc kubenswrapper[4689]: I1210 12:20:39.316187 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/a82874c6-baaf-4ecb-a5dd-57d6364ff8f0-v4-0-config-user-template-error\") pod \"oauth-openshift-f4f7798bf-m7frn\" (UID: \"a82874c6-baaf-4ecb-a5dd-57d6364ff8f0\") " pod="openshift-authentication/oauth-openshift-f4f7798bf-m7frn"
Dec 10 12:20:39 crc kubenswrapper[4689]: I1210 12:20:39.316326 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/a82874c6-baaf-4ecb-a5dd-57d6364ff8f0-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-f4f7798bf-m7frn\" (UID: \"a82874c6-baaf-4ecb-a5dd-57d6364ff8f0\") " pod="openshift-authentication/oauth-openshift-f4f7798bf-m7frn"
Dec 10 12:20:39 crc kubenswrapper[4689]: I1210 12:20:39.316546 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/a82874c6-baaf-4ecb-a5dd-57d6364ff8f0-v4-0-config-user-template-login\") pod \"oauth-openshift-f4f7798bf-m7frn\" (UID: \"a82874c6-baaf-4ecb-a5dd-57d6364ff8f0\") " pod="openshift-authentication/oauth-openshift-f4f7798bf-m7frn"
Dec 10 12:20:39 crc kubenswrapper[4689]: I1210 12:20:39.317515 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/a82874c6-baaf-4ecb-a5dd-57d6364ff8f0-v4-0-config-system-serving-cert\") pod \"oauth-openshift-f4f7798bf-m7frn\" (UID: \"a82874c6-baaf-4ecb-a5dd-57d6364ff8f0\") " pod="openshift-authentication/oauth-openshift-f4f7798bf-m7frn"
Dec 10 12:20:39 crc kubenswrapper[4689]: I1210 12:20:39.319734 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/a82874c6-baaf-4ecb-a5dd-57d6364ff8f0-v4-0-config-system-router-certs\") pod \"oauth-openshift-f4f7798bf-m7frn\" (UID: \"a82874c6-baaf-4ecb-a5dd-57d6364ff8f0\") " pod="openshift-authentication/oauth-openshift-f4f7798bf-m7frn"
Dec 10 12:20:39 crc kubenswrapper[4689]: I1210 12:20:39.332352 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ps679\" (UniqueName: \"kubernetes.io/projected/a82874c6-baaf-4ecb-a5dd-57d6364ff8f0-kube-api-access-ps679\") pod \"oauth-openshift-f4f7798bf-m7frn\" (UID: \"a82874c6-baaf-4ecb-a5dd-57d6364ff8f0\") " pod="openshift-authentication/oauth-openshift-f4f7798bf-m7frn"
Dec 10 12:20:39 crc kubenswrapper[4689]: I1210 12:20:39.333615 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/a82874c6-baaf-4ecb-a5dd-57d6364ff8f0-v4-0-config-system-session\") pod \"oauth-openshift-f4f7798bf-m7frn\" (UID: \"a82874c6-baaf-4ecb-a5dd-57d6364ff8f0\") " pod="openshift-authentication/oauth-openshift-f4f7798bf-m7frn"
Dec 10 12:20:39 crc kubenswrapper[4689]: I1210 12:20:39.448438 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-f4f7798bf-m7frn"
Dec 10 12:20:39 crc kubenswrapper[4689]: I1210 12:20:39.570452 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt"
Dec 10 12:20:39 crc kubenswrapper[4689]: I1210 12:20:39.606087 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt"
Dec 10 12:20:39 crc kubenswrapper[4689]: I1210 12:20:39.975950 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls"
Dec 10 12:20:40 crc kubenswrapper[4689]: I1210 12:20:40.099290 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh"
Dec 10 12:20:40 crc kubenswrapper[4689]: I1210 12:20:40.220418 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r"
Dec 10 12:20:40 crc kubenswrapper[4689]: I1210 12:20:40.815666 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt"
Dec 10 12:20:40 crc kubenswrapper[4689]: I1210 12:20:40.989391 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config"
Dec 10 12:20:41 crc kubenswrapper[4689]: I1210 12:20:41.010530 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates"
Dec 10 12:20:41 crc kubenswrapper[4689]: I1210 12:20:41.312590 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets"
Dec 10 12:20:41 crc kubenswrapper[4689]: I1210 12:20:41.314851 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config"
Dec 10 12:20:41 crc kubenswrapper[4689]: I1210 12:20:41.369540 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw"
Dec 10 12:20:41 crc kubenswrapper[4689]: I1210 12:20:41.584239 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5"
Dec 10 12:20:41 crc kubenswrapper[4689]: I1210 12:20:41.741099 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt"
Dec 10 12:20:42 crc kubenswrapper[4689]: I1210 12:20:42.244395 4689 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-htx2z"]
Dec 10 12:20:42 crc kubenswrapper[4689]: I1210 12:20:42.244589 4689 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-879f6c89f-htx2z" podUID="fd353a2b-c325-44b6-9e25-6a4c39213f9e" containerName="controller-manager" containerID="cri-o://643d7e2010835708ae436decbcb29be3da0eb85eab397559f4e36fc3a3bb1aaa" gracePeriod=30
Dec 10 12:20:42 crc kubenswrapper[4689]: I1210 12:20:42.377630 4689 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-sfzm5"]
Dec 10 12:20:42 crc kubenswrapper[4689]: I1210 12:20:42.377872 4689 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-sfzm5" podUID="d8990fd9-cdfb-4c98-80b3-794f1b371ee5" containerName="route-controller-manager" containerID="cri-o://e32378ecfee705d280f3a9f3c8354d53b86c847258469a7da78722ad7f803b0f" gracePeriod=30
Dec 10 12:20:42 crc kubenswrapper[4689]: I1210 12:20:42.473283 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87"
Dec 10 12:20:42 crc kubenswrapper[4689]: I1210 12:20:42.503551 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh"
Dec 10 12:20:42 crc kubenswrapper[4689]: I1210 12:20:42.619953 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls"
Dec 10 12:20:42 crc kubenswrapper[4689]: I1210 12:20:42.815030 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt"
Dec 10 12:20:42 crc kubenswrapper[4689]: I1210 12:20:42.828530 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle"
Dec 10 12:20:42 crc kubenswrapper[4689]: I1210 12:20:42.844467 4689 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-sfzm5"
Dec 10 12:20:42 crc kubenswrapper[4689]: I1210 12:20:42.964560 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d8990fd9-cdfb-4c98-80b3-794f1b371ee5-config\") pod \"d8990fd9-cdfb-4c98-80b3-794f1b371ee5\" (UID: \"d8990fd9-cdfb-4c98-80b3-794f1b371ee5\") "
Dec 10 12:20:42 crc kubenswrapper[4689]: I1210 12:20:42.964592 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d8990fd9-cdfb-4c98-80b3-794f1b371ee5-client-ca\") pod \"d8990fd9-cdfb-4c98-80b3-794f1b371ee5\" (UID: \"d8990fd9-cdfb-4c98-80b3-794f1b371ee5\") "
Dec 10 12:20:42 crc kubenswrapper[4689]: I1210 12:20:42.964622 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d8990fd9-cdfb-4c98-80b3-794f1b371ee5-serving-cert\") pod \"d8990fd9-cdfb-4c98-80b3-794f1b371ee5\" (UID: \"d8990fd9-cdfb-4c98-80b3-794f1b371ee5\") "
Dec 10 12:20:42 crc kubenswrapper[4689]: I1210 12:20:42.964674 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kxwpq\" (UniqueName: \"kubernetes.io/projected/d8990fd9-cdfb-4c98-80b3-794f1b371ee5-kube-api-access-kxwpq\") pod \"d8990fd9-cdfb-4c98-80b3-794f1b371ee5\" (UID: \"d8990fd9-cdfb-4c98-80b3-794f1b371ee5\") "
Dec 10 12:20:42 crc kubenswrapper[4689]: I1210 12:20:42.966321 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d8990fd9-cdfb-4c98-80b3-794f1b371ee5-client-ca" (OuterVolumeSpecName: "client-ca") pod "d8990fd9-cdfb-4c98-80b3-794f1b371ee5" (UID: "d8990fd9-cdfb-4c98-80b3-794f1b371ee5"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 10 12:20:42 crc kubenswrapper[4689]: I1210 12:20:42.966478 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d8990fd9-cdfb-4c98-80b3-794f1b371ee5-config" (OuterVolumeSpecName: "config") pod "d8990fd9-cdfb-4c98-80b3-794f1b371ee5" (UID: "d8990fd9-cdfb-4c98-80b3-794f1b371ee5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 10 12:20:42 crc kubenswrapper[4689]: I1210 12:20:42.987400 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d8990fd9-cdfb-4c98-80b3-794f1b371ee5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "d8990fd9-cdfb-4c98-80b3-794f1b371ee5" (UID: "d8990fd9-cdfb-4c98-80b3-794f1b371ee5"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 10 12:20:42 crc kubenswrapper[4689]: I1210 12:20:42.989478 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d8990fd9-cdfb-4c98-80b3-794f1b371ee5-kube-api-access-kxwpq" (OuterVolumeSpecName: "kube-api-access-kxwpq") pod "d8990fd9-cdfb-4c98-80b3-794f1b371ee5" (UID: "d8990fd9-cdfb-4c98-80b3-794f1b371ee5"). InnerVolumeSpecName "kube-api-access-kxwpq". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 10 12:20:43 crc kubenswrapper[4689]: I1210 12:20:43.060707 4689 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-htx2z"
Dec 10 12:20:43 crc kubenswrapper[4689]: I1210 12:20:43.066216 4689 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d8990fd9-cdfb-4c98-80b3-794f1b371ee5-config\") on node \"crc\" DevicePath \"\""
Dec 10 12:20:43 crc kubenswrapper[4689]: I1210 12:20:43.066253 4689 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d8990fd9-cdfb-4c98-80b3-794f1b371ee5-client-ca\") on node \"crc\" DevicePath \"\""
Dec 10 12:20:43 crc kubenswrapper[4689]: I1210 12:20:43.066265 4689 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d8990fd9-cdfb-4c98-80b3-794f1b371ee5-serving-cert\") on node \"crc\" DevicePath \"\""
Dec 10 12:20:43 crc kubenswrapper[4689]: I1210 12:20:43.066311 4689 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kxwpq\" (UniqueName: \"kubernetes.io/projected/d8990fd9-cdfb-4c98-80b3-794f1b371ee5-kube-api-access-kxwpq\") on node \"crc\" DevicePath \"\""
Dec 10 12:20:43 crc kubenswrapper[4689]: I1210 12:20:43.166908 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lxbh6\" (UniqueName: \"kubernetes.io/projected/fd353a2b-c325-44b6-9e25-6a4c39213f9e-kube-api-access-lxbh6\") pod \"fd353a2b-c325-44b6-9e25-6a4c39213f9e\" (UID: \"fd353a2b-c325-44b6-9e25-6a4c39213f9e\") "
Dec 10 12:20:43 crc kubenswrapper[4689]: I1210 12:20:43.167110 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fd353a2b-c325-44b6-9e25-6a4c39213f9e-serving-cert\") pod \"fd353a2b-c325-44b6-9e25-6a4c39213f9e\" (UID: \"fd353a2b-c325-44b6-9e25-6a4c39213f9e\") "
Dec 10 12:20:43 crc kubenswrapper[4689]: I1210 12:20:43.167163 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/fd353a2b-c325-44b6-9e25-6a4c39213f9e-client-ca\") pod \"fd353a2b-c325-44b6-9e25-6a4c39213f9e\" (UID: \"fd353a2b-c325-44b6-9e25-6a4c39213f9e\") "
Dec 10 12:20:43 crc kubenswrapper[4689]: I1210 12:20:43.167224 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/fd353a2b-c325-44b6-9e25-6a4c39213f9e-proxy-ca-bundles\") pod \"fd353a2b-c325-44b6-9e25-6a4c39213f9e\" (UID: \"fd353a2b-c325-44b6-9e25-6a4c39213f9e\") "
Dec 10 12:20:43 crc kubenswrapper[4689]: I1210 12:20:43.168266 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fd353a2b-c325-44b6-9e25-6a4c39213f9e-client-ca" (OuterVolumeSpecName: "client-ca") pod "fd353a2b-c325-44b6-9e25-6a4c39213f9e" (UID: "fd353a2b-c325-44b6-9e25-6a4c39213f9e"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 10 12:20:43 crc kubenswrapper[4689]: I1210 12:20:43.168288 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fd353a2b-c325-44b6-9e25-6a4c39213f9e-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "fd353a2b-c325-44b6-9e25-6a4c39213f9e" (UID: "fd353a2b-c325-44b6-9e25-6a4c39213f9e"). InnerVolumeSpecName "proxy-ca-bundles".
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 12:20:43 crc kubenswrapper[4689]: I1210 12:20:43.169068 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fd353a2b-c325-44b6-9e25-6a4c39213f9e-config\") pod \"fd353a2b-c325-44b6-9e25-6a4c39213f9e\" (UID: \"fd353a2b-c325-44b6-9e25-6a4c39213f9e\") " Dec 10 12:20:43 crc kubenswrapper[4689]: I1210 12:20:43.169637 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fd353a2b-c325-44b6-9e25-6a4c39213f9e-config" (OuterVolumeSpecName: "config") pod "fd353a2b-c325-44b6-9e25-6a4c39213f9e" (UID: "fd353a2b-c325-44b6-9e25-6a4c39213f9e"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 12:20:43 crc kubenswrapper[4689]: I1210 12:20:43.170069 4689 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/fd353a2b-c325-44b6-9e25-6a4c39213f9e-client-ca\") on node \"crc\" DevicePath \"\"" Dec 10 12:20:43 crc kubenswrapper[4689]: I1210 12:20:43.170082 4689 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/fd353a2b-c325-44b6-9e25-6a4c39213f9e-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Dec 10 12:20:43 crc kubenswrapper[4689]: I1210 12:20:43.170094 4689 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fd353a2b-c325-44b6-9e25-6a4c39213f9e-config\") on node \"crc\" DevicePath \"\"" Dec 10 12:20:43 crc kubenswrapper[4689]: I1210 12:20:43.170386 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fd353a2b-c325-44b6-9e25-6a4c39213f9e-kube-api-access-lxbh6" (OuterVolumeSpecName: "kube-api-access-lxbh6") pod "fd353a2b-c325-44b6-9e25-6a4c39213f9e" (UID: "fd353a2b-c325-44b6-9e25-6a4c39213f9e"). InnerVolumeSpecName "kube-api-access-lxbh6". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 12:20:43 crc kubenswrapper[4689]: I1210 12:20:43.172037 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fd353a2b-c325-44b6-9e25-6a4c39213f9e-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "fd353a2b-c325-44b6-9e25-6a4c39213f9e" (UID: "fd353a2b-c325-44b6-9e25-6a4c39213f9e"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 12:20:43 crc kubenswrapper[4689]: I1210 12:20:43.257053 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Dec 10 12:20:43 crc kubenswrapper[4689]: I1210 12:20:43.271175 4689 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lxbh6\" (UniqueName: \"kubernetes.io/projected/fd353a2b-c325-44b6-9e25-6a4c39213f9e-kube-api-access-lxbh6\") on node \"crc\" DevicePath \"\"" Dec 10 12:20:43 crc kubenswrapper[4689]: I1210 12:20:43.271202 4689 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fd353a2b-c325-44b6-9e25-6a4c39213f9e-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 10 12:20:43 crc kubenswrapper[4689]: I1210 12:20:43.295912 4689 generic.go:334] "Generic (PLEG): container finished" podID="fd353a2b-c325-44b6-9e25-6a4c39213f9e" containerID="643d7e2010835708ae436decbcb29be3da0eb85eab397559f4e36fc3a3bb1aaa" exitCode=0 Dec 10 12:20:43 crc kubenswrapper[4689]: I1210 12:20:43.296048 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-htx2z" event={"ID":"fd353a2b-c325-44b6-9e25-6a4c39213f9e","Type":"ContainerDied","Data":"643d7e2010835708ae436decbcb29be3da0eb85eab397559f4e36fc3a3bb1aaa"} Dec 10 12:20:43 crc kubenswrapper[4689]: I1210 12:20:43.296080 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-htx2z" event={"ID":"fd353a2b-c325-44b6-9e25-6a4c39213f9e","Type":"ContainerDied","Data":"d4a3793dfe44be1c347597ab7400980cf4ffd89560f680c77bb6b99443b846c9"} Dec 10 12:20:43 crc kubenswrapper[4689]: I1210 12:20:43.296112 4689 scope.go:117] "RemoveContainer" containerID="643d7e2010835708ae436decbcb29be3da0eb85eab397559f4e36fc3a3bb1aaa" Dec 10 12:20:43 crc kubenswrapper[4689]: I1210 12:20:43.296304 4689 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-htx2z" Dec 10 12:20:43 crc kubenswrapper[4689]: I1210 12:20:43.297958 4689 generic.go:334] "Generic (PLEG): container finished" podID="d8990fd9-cdfb-4c98-80b3-794f1b371ee5" containerID="e32378ecfee705d280f3a9f3c8354d53b86c847258469a7da78722ad7f803b0f" exitCode=0 Dec 10 12:20:43 crc kubenswrapper[4689]: I1210 12:20:43.298106 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-sfzm5" event={"ID":"d8990fd9-cdfb-4c98-80b3-794f1b371ee5","Type":"ContainerDied","Data":"e32378ecfee705d280f3a9f3c8354d53b86c847258469a7da78722ad7f803b0f"} Dec 10 12:20:43 crc kubenswrapper[4689]: I1210 12:20:43.298181 4689 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-sfzm5" Dec 10 12:20:43 crc kubenswrapper[4689]: I1210 12:20:43.299212 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-sfzm5" event={"ID":"d8990fd9-cdfb-4c98-80b3-794f1b371ee5","Type":"ContainerDied","Data":"c7bc4dd4f53b98fbd0910a84c72954f3ee2a3947e6307b6a639504d73f291e65"} Dec 10 12:20:43 crc kubenswrapper[4689]: I1210 12:20:43.332877 4689 scope.go:117] "RemoveContainer" containerID="643d7e2010835708ae436decbcb29be3da0eb85eab397559f4e36fc3a3bb1aaa" Dec 10 12:20:43 crc kubenswrapper[4689]: E1210 12:20:43.333553 4689 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"643d7e2010835708ae436decbcb29be3da0eb85eab397559f4e36fc3a3bb1aaa\": container with ID starting with 643d7e2010835708ae436decbcb29be3da0eb85eab397559f4e36fc3a3bb1aaa not found: ID does not exist" containerID="643d7e2010835708ae436decbcb29be3da0eb85eab397559f4e36fc3a3bb1aaa" Dec 10 12:20:43 crc kubenswrapper[4689]: I1210 12:20:43.333621 4689 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"643d7e2010835708ae436decbcb29be3da0eb85eab397559f4e36fc3a3bb1aaa"} err="failed to get container status \"643d7e2010835708ae436decbcb29be3da0eb85eab397559f4e36fc3a3bb1aaa\": rpc error: code = NotFound desc = could not find container \"643d7e2010835708ae436decbcb29be3da0eb85eab397559f4e36fc3a3bb1aaa\": container with ID starting with 643d7e2010835708ae436decbcb29be3da0eb85eab397559f4e36fc3a3bb1aaa not found: ID does not exist" Dec 10 12:20:43 crc kubenswrapper[4689]: I1210 12:20:43.333684 4689 scope.go:117] "RemoveContainer" containerID="e32378ecfee705d280f3a9f3c8354d53b86c847258469a7da78722ad7f803b0f" Dec 10 12:20:43 crc kubenswrapper[4689]: I1210 12:20:43.337275 4689 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-htx2z"] Dec 10 12:20:43 crc kubenswrapper[4689]: I1210 12:20:43.342111 4689 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-htx2z"] Dec 10 12:20:43 crc kubenswrapper[4689]: I1210 12:20:43.349259 4689 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-sfzm5"] Dec 10 12:20:43 crc kubenswrapper[4689]: I1210 12:20:43.355939 4689 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-sfzm5"] Dec 10 12:20:43 crc kubenswrapper[4689]: I1210 12:20:43.356350 4689 scope.go:117] "RemoveContainer" containerID="e32378ecfee705d280f3a9f3c8354d53b86c847258469a7da78722ad7f803b0f" Dec 10 12:20:43 crc kubenswrapper[4689]: E1210 12:20:43.357372 4689 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e32378ecfee705d280f3a9f3c8354d53b86c847258469a7da78722ad7f803b0f\": container with ID starting with e32378ecfee705d280f3a9f3c8354d53b86c847258469a7da78722ad7f803b0f not found: ID does not exist" containerID="e32378ecfee705d280f3a9f3c8354d53b86c847258469a7da78722ad7f803b0f" Dec 10 12:20:43 crc kubenswrapper[4689]: I1210 12:20:43.357422 4689 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e32378ecfee705d280f3a9f3c8354d53b86c847258469a7da78722ad7f803b0f"} err="failed to get 
container status \"e32378ecfee705d280f3a9f3c8354d53b86c847258469a7da78722ad7f803b0f\": rpc error: code = NotFound desc = could not find container \"e32378ecfee705d280f3a9f3c8354d53b86c847258469a7da78722ad7f803b0f\": container with ID starting with e32378ecfee705d280f3a9f3c8354d53b86c847258469a7da78722ad7f803b0f not found: ID does not exist" Dec 10 12:20:43 crc kubenswrapper[4689]: I1210 12:20:43.442760 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Dec 10 12:20:43 crc kubenswrapper[4689]: I1210 12:20:43.620621 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Dec 10 12:20:43 crc kubenswrapper[4689]: I1210 12:20:43.660704 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Dec 10 12:20:43 crc kubenswrapper[4689]: I1210 12:20:43.711574 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-bfc4b4dfc-5tvjz"] Dec 10 12:20:43 crc kubenswrapper[4689]: E1210 12:20:43.711826 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fd353a2b-c325-44b6-9e25-6a4c39213f9e" containerName="controller-manager" Dec 10 12:20:43 crc kubenswrapper[4689]: I1210 12:20:43.711839 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="fd353a2b-c325-44b6-9e25-6a4c39213f9e" containerName="controller-manager" Dec 10 12:20:43 crc kubenswrapper[4689]: E1210 12:20:43.711859 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d8990fd9-cdfb-4c98-80b3-794f1b371ee5" containerName="route-controller-manager" Dec 10 12:20:43 crc kubenswrapper[4689]: I1210 12:20:43.711865 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="d8990fd9-cdfb-4c98-80b3-794f1b371ee5" containerName="route-controller-manager" Dec 10 12:20:43 crc kubenswrapper[4689]: I1210 12:20:43.711984 4689 memory_manager.go:354] "RemoveStaleState removing state" podUID="fd353a2b-c325-44b6-9e25-6a4c39213f9e" containerName="controller-manager" Dec 10 12:20:43 crc kubenswrapper[4689]: I1210 12:20:43.711994 4689 memory_manager.go:354] "RemoveStaleState removing state" podUID="d8990fd9-cdfb-4c98-80b3-794f1b371ee5" containerName="route-controller-manager" Dec 10 12:20:43 crc kubenswrapper[4689]: I1210 12:20:43.712405 4689 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-bfc4b4dfc-5tvjz" Dec 10 12:20:43 crc kubenswrapper[4689]: I1210 12:20:43.715297 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Dec 10 12:20:43 crc kubenswrapper[4689]: I1210 12:20:43.715530 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Dec 10 12:20:43 crc kubenswrapper[4689]: I1210 12:20:43.715703 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Dec 10 12:20:43 crc kubenswrapper[4689]: I1210 12:20:43.716498 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Dec 10 12:20:43 crc kubenswrapper[4689]: I1210 12:20:43.716570 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Dec 10 12:20:43 crc kubenswrapper[4689]: I1210 12:20:43.716663 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Dec 10 12:20:43 crc kubenswrapper[4689]: I1210 12:20:43.716920 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6c77d95457-kdbvl"] Dec 10 12:20:43 crc kubenswrapper[4689]: I1210 12:20:43.717953 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6c77d95457-kdbvl" Dec 10 12:20:43 crc kubenswrapper[4689]: I1210 12:20:43.720078 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Dec 10 12:20:43 crc kubenswrapper[4689]: I1210 12:20:43.720365 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Dec 10 12:20:43 crc kubenswrapper[4689]: I1210 12:20:43.720559 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Dec 10 12:20:43 crc kubenswrapper[4689]: I1210 12:20:43.720799 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Dec 10 12:20:43 crc kubenswrapper[4689]: I1210 12:20:43.721519 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Dec 10 12:20:43 crc kubenswrapper[4689]: I1210 12:20:43.722108 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Dec 10 12:20:43 crc kubenswrapper[4689]: I1210 12:20:43.725949 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Dec 10 12:20:43 crc kubenswrapper[4689]: I1210 12:20:43.878123 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8njg5\" (UniqueName: \"kubernetes.io/projected/671227b1-605b-4a79-9853-88328a254fe0-kube-api-access-8njg5\") pod \"route-controller-manager-6c77d95457-kdbvl\" (UID: \"671227b1-605b-4a79-9853-88328a254fe0\") " pod="openshift-route-controller-manager/route-controller-manager-6c77d95457-kdbvl" Dec 10 12:20:43 crc kubenswrapper[4689]: I1210 12:20:43.878171 4689 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/671227b1-605b-4a79-9853-88328a254fe0-client-ca\") pod \"route-controller-manager-6c77d95457-kdbvl\" (UID: \"671227b1-605b-4a79-9853-88328a254fe0\") " pod="openshift-route-controller-manager/route-controller-manager-6c77d95457-kdbvl" Dec 10 12:20:43 crc kubenswrapper[4689]: I1210 12:20:43.878198 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0d8c43ef-2ad3-4a36-9698-f3446165bed0-client-ca\") pod \"controller-manager-bfc4b4dfc-5tvjz\" (UID: \"0d8c43ef-2ad3-4a36-9698-f3446165bed0\") " pod="openshift-controller-manager/controller-manager-bfc4b4dfc-5tvjz" Dec 10 12:20:43 crc kubenswrapper[4689]: I1210 12:20:43.878222 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ct8h8\" (UniqueName: \"kubernetes.io/projected/0d8c43ef-2ad3-4a36-9698-f3446165bed0-kube-api-access-ct8h8\") pod \"controller-manager-bfc4b4dfc-5tvjz\" (UID: \"0d8c43ef-2ad3-4a36-9698-f3446165bed0\") " pod="openshift-controller-manager/controller-manager-bfc4b4dfc-5tvjz" Dec 10 12:20:43 crc kubenswrapper[4689]: I1210 12:20:43.878434 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0d8c43ef-2ad3-4a36-9698-f3446165bed0-serving-cert\") pod \"controller-manager-bfc4b4dfc-5tvjz\" (UID: \"0d8c43ef-2ad3-4a36-9698-f3446165bed0\") " pod="openshift-controller-manager/controller-manager-bfc4b4dfc-5tvjz" Dec 10 12:20:43 crc kubenswrapper[4689]: I1210 12:20:43.878552 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/671227b1-605b-4a79-9853-88328a254fe0-serving-cert\") pod \"route-controller-manager-6c77d95457-kdbvl\" (UID: \"671227b1-605b-4a79-9853-88328a254fe0\") " pod="openshift-route-controller-manager/route-controller-manager-6c77d95457-kdbvl" Dec 10 12:20:43 crc kubenswrapper[4689]: I1210 12:20:43.878755 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/671227b1-605b-4a79-9853-88328a254fe0-config\") pod \"route-controller-manager-6c77d95457-kdbvl\" (UID: \"671227b1-605b-4a79-9853-88328a254fe0\") " pod="openshift-route-controller-manager/route-controller-manager-6c77d95457-kdbvl" Dec 10 12:20:43 crc kubenswrapper[4689]: I1210 12:20:43.878836 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0d8c43ef-2ad3-4a36-9698-f3446165bed0-config\") pod \"controller-manager-bfc4b4dfc-5tvjz\" (UID: \"0d8c43ef-2ad3-4a36-9698-f3446165bed0\") " pod="openshift-controller-manager/controller-manager-bfc4b4dfc-5tvjz" Dec 10 12:20:43 crc kubenswrapper[4689]: I1210 12:20:43.878870 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/0d8c43ef-2ad3-4a36-9698-f3446165bed0-proxy-ca-bundles\") pod \"controller-manager-bfc4b4dfc-5tvjz\" (UID: \"0d8c43ef-2ad3-4a36-9698-f3446165bed0\") " pod="openshift-controller-manager/controller-manager-bfc4b4dfc-5tvjz" Dec 10 12:20:43 crc kubenswrapper[4689]: I1210 12:20:43.980664 4689 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/671227b1-605b-4a79-9853-88328a254fe0-serving-cert\") pod \"route-controller-manager-6c77d95457-kdbvl\" (UID: \"671227b1-605b-4a79-9853-88328a254fe0\") " pod="openshift-route-controller-manager/route-controller-manager-6c77d95457-kdbvl" Dec 10 12:20:43 crc kubenswrapper[4689]: I1210 12:20:43.980800 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/671227b1-605b-4a79-9853-88328a254fe0-config\") pod \"route-controller-manager-6c77d95457-kdbvl\" (UID: \"671227b1-605b-4a79-9853-88328a254fe0\") " pod="openshift-route-controller-manager/route-controller-manager-6c77d95457-kdbvl" Dec 10 12:20:43 crc kubenswrapper[4689]: I1210 12:20:43.980857 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0d8c43ef-2ad3-4a36-9698-f3446165bed0-config\") pod \"controller-manager-bfc4b4dfc-5tvjz\" (UID: \"0d8c43ef-2ad3-4a36-9698-f3446165bed0\") " pod="openshift-controller-manager/controller-manager-bfc4b4dfc-5tvjz" Dec 10 12:20:43 crc kubenswrapper[4689]: I1210 12:20:43.980893 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/0d8c43ef-2ad3-4a36-9698-f3446165bed0-proxy-ca-bundles\") pod \"controller-manager-bfc4b4dfc-5tvjz\" (UID: \"0d8c43ef-2ad3-4a36-9698-f3446165bed0\") " pod="openshift-controller-manager/controller-manager-bfc4b4dfc-5tvjz" Dec 10 12:20:43 crc kubenswrapper[4689]: I1210 12:20:43.980956 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8njg5\" (UniqueName: \"kubernetes.io/projected/671227b1-605b-4a79-9853-88328a254fe0-kube-api-access-8njg5\") pod \"route-controller-manager-6c77d95457-kdbvl\" (UID: \"671227b1-605b-4a79-9853-88328a254fe0\") " pod="openshift-route-controller-manager/route-controller-manager-6c77d95457-kdbvl" Dec 10 12:20:43 crc kubenswrapper[4689]: I1210 12:20:43.981028 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/671227b1-605b-4a79-9853-88328a254fe0-client-ca\") pod \"route-controller-manager-6c77d95457-kdbvl\" (UID: \"671227b1-605b-4a79-9853-88328a254fe0\") " pod="openshift-route-controller-manager/route-controller-manager-6c77d95457-kdbvl" Dec 10 12:20:43 crc kubenswrapper[4689]: I1210 12:20:43.981061 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0d8c43ef-2ad3-4a36-9698-f3446165bed0-client-ca\") pod \"controller-manager-bfc4b4dfc-5tvjz\" (UID: \"0d8c43ef-2ad3-4a36-9698-f3446165bed0\") " pod="openshift-controller-manager/controller-manager-bfc4b4dfc-5tvjz" Dec 10 12:20:43 crc kubenswrapper[4689]: I1210 12:20:43.981092 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ct8h8\" (UniqueName: \"kubernetes.io/projected/0d8c43ef-2ad3-4a36-9698-f3446165bed0-kube-api-access-ct8h8\") pod \"controller-manager-bfc4b4dfc-5tvjz\" (UID: \"0d8c43ef-2ad3-4a36-9698-f3446165bed0\") " pod="openshift-controller-manager/controller-manager-bfc4b4dfc-5tvjz" Dec 10 12:20:43 crc kubenswrapper[4689]: I1210 12:20:43.981155 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0d8c43ef-2ad3-4a36-9698-f3446165bed0-serving-cert\") pod 
\"controller-manager-bfc4b4dfc-5tvjz\" (UID: \"0d8c43ef-2ad3-4a36-9698-f3446165bed0\") " pod="openshift-controller-manager/controller-manager-bfc4b4dfc-5tvjz" Dec 10 12:20:43 crc kubenswrapper[4689]: I1210 12:20:43.982269 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0d8c43ef-2ad3-4a36-9698-f3446165bed0-config\") pod \"controller-manager-bfc4b4dfc-5tvjz\" (UID: \"0d8c43ef-2ad3-4a36-9698-f3446165bed0\") " pod="openshift-controller-manager/controller-manager-bfc4b4dfc-5tvjz" Dec 10 12:20:43 crc kubenswrapper[4689]: I1210 12:20:43.983185 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/671227b1-605b-4a79-9853-88328a254fe0-config\") pod \"route-controller-manager-6c77d95457-kdbvl\" (UID: \"671227b1-605b-4a79-9853-88328a254fe0\") " pod="openshift-route-controller-manager/route-controller-manager-6c77d95457-kdbvl" Dec 10 12:20:43 crc kubenswrapper[4689]: I1210 12:20:43.983246 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/0d8c43ef-2ad3-4a36-9698-f3446165bed0-proxy-ca-bundles\") pod \"controller-manager-bfc4b4dfc-5tvjz\" (UID: \"0d8c43ef-2ad3-4a36-9698-f3446165bed0\") " pod="openshift-controller-manager/controller-manager-bfc4b4dfc-5tvjz" Dec 10 12:20:43 crc kubenswrapper[4689]: I1210 12:20:43.983276 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/671227b1-605b-4a79-9853-88328a254fe0-client-ca\") pod \"route-controller-manager-6c77d95457-kdbvl\" (UID: \"671227b1-605b-4a79-9853-88328a254fe0\") " pod="openshift-route-controller-manager/route-controller-manager-6c77d95457-kdbvl" Dec 10 12:20:43 crc kubenswrapper[4689]: I1210 12:20:43.983672 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0d8c43ef-2ad3-4a36-9698-f3446165bed0-client-ca\") pod \"controller-manager-bfc4b4dfc-5tvjz\" (UID: \"0d8c43ef-2ad3-4a36-9698-f3446165bed0\") " pod="openshift-controller-manager/controller-manager-bfc4b4dfc-5tvjz" Dec 10 12:20:43 crc kubenswrapper[4689]: I1210 12:20:43.985741 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/671227b1-605b-4a79-9853-88328a254fe0-serving-cert\") pod \"route-controller-manager-6c77d95457-kdbvl\" (UID: \"671227b1-605b-4a79-9853-88328a254fe0\") " pod="openshift-route-controller-manager/route-controller-manager-6c77d95457-kdbvl" Dec 10 12:20:43 crc kubenswrapper[4689]: I1210 12:20:43.988903 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0d8c43ef-2ad3-4a36-9698-f3446165bed0-serving-cert\") pod \"controller-manager-bfc4b4dfc-5tvjz\" (UID: \"0d8c43ef-2ad3-4a36-9698-f3446165bed0\") " pod="openshift-controller-manager/controller-manager-bfc4b4dfc-5tvjz" Dec 10 12:20:44 crc kubenswrapper[4689]: I1210 12:20:44.012329 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8njg5\" (UniqueName: \"kubernetes.io/projected/671227b1-605b-4a79-9853-88328a254fe0-kube-api-access-8njg5\") pod \"route-controller-manager-6c77d95457-kdbvl\" (UID: \"671227b1-605b-4a79-9853-88328a254fe0\") " pod="openshift-route-controller-manager/route-controller-manager-6c77d95457-kdbvl" Dec 10 12:20:44 crc kubenswrapper[4689]: I1210 12:20:44.023031 4689 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ct8h8\" (UniqueName: \"kubernetes.io/projected/0d8c43ef-2ad3-4a36-9698-f3446165bed0-kube-api-access-ct8h8\") pod \"controller-manager-bfc4b4dfc-5tvjz\" (UID: \"0d8c43ef-2ad3-4a36-9698-f3446165bed0\") " pod="openshift-controller-manager/controller-manager-bfc4b4dfc-5tvjz" Dec 10 12:20:44 crc kubenswrapper[4689]: I1210 12:20:44.032858 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-bfc4b4dfc-5tvjz" Dec 10 12:20:44 crc kubenswrapper[4689]: I1210 12:20:44.052347 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6c77d95457-kdbvl" Dec 10 12:20:44 crc kubenswrapper[4689]: I1210 12:20:44.167525 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Dec 10 12:20:44 crc kubenswrapper[4689]: I1210 12:20:44.367115 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Dec 10 12:20:44 crc kubenswrapper[4689]: I1210 12:20:44.410840 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Dec 10 12:20:44 crc kubenswrapper[4689]: I1210 12:20:44.496375 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Dec 10 12:20:44 crc kubenswrapper[4689]: I1210 12:20:44.501611 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Dec 10 12:20:44 crc kubenswrapper[4689]: I1210 12:20:44.510103 4689 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d8990fd9-cdfb-4c98-80b3-794f1b371ee5" path="/var/lib/kubelet/pods/d8990fd9-cdfb-4c98-80b3-794f1b371ee5/volumes" Dec 10 12:20:44 crc kubenswrapper[4689]: I1210 12:20:44.511469 4689 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fd353a2b-c325-44b6-9e25-6a4c39213f9e" path="/var/lib/kubelet/pods/fd353a2b-c325-44b6-9e25-6a4c39213f9e/volumes" Dec 10 12:20:44 crc kubenswrapper[4689]: I1210 12:20:44.644113 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Dec 10 12:20:44 crc kubenswrapper[4689]: I1210 12:20:44.720057 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Dec 10 12:20:45 crc kubenswrapper[4689]: I1210 12:20:45.820326 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Dec 10 12:20:45 crc kubenswrapper[4689]: I1210 12:20:45.836862 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Dec 10 12:20:46 crc kubenswrapper[4689]: I1210 12:20:46.009799 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-f4f7798bf-m7frn"] Dec 10 12:20:46 crc kubenswrapper[4689]: I1210 12:20:46.016426 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-bfc4b4dfc-5tvjz"] Dec 10 12:20:46 crc kubenswrapper[4689]: I1210 12:20:46.030648 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6c77d95457-kdbvl"] Dec 10 12:20:46 crc 
kubenswrapper[4689]: I1210 12:20:46.309654 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-bfc4b4dfc-5tvjz"] Dec 10 12:20:46 crc kubenswrapper[4689]: I1210 12:20:46.430719 4689 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-bfc4b4dfc-5tvjz"] Dec 10 12:20:46 crc kubenswrapper[4689]: I1210 12:20:46.438479 4689 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6c77d95457-kdbvl"] Dec 10 12:20:46 crc kubenswrapper[4689]: I1210 12:20:46.442177 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Dec 10 12:20:46 crc kubenswrapper[4689]: I1210 12:20:46.512547 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Dec 10 12:20:46 crc kubenswrapper[4689]: I1210 12:20:46.569082 4689 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6c77d95457-kdbvl"] Dec 10 12:20:46 crc kubenswrapper[4689]: I1210 12:20:46.573858 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-f4f7798bf-m7frn"] Dec 10 12:20:46 crc kubenswrapper[4689]: W1210 12:20:46.579880 4689 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod671227b1_605b_4a79_9853_88328a254fe0.slice/crio-195de0c093eb58855e2cbbcf0e7dda473ccdc039dfd20a2e1699517f2a32e2c6 WatchSource:0}: Error finding container 195de0c093eb58855e2cbbcf0e7dda473ccdc039dfd20a2e1699517f2a32e2c6: Status 404 returned error can't find the container with id 195de0c093eb58855e2cbbcf0e7dda473ccdc039dfd20a2e1699517f2a32e2c6 Dec 10 12:20:46 crc kubenswrapper[4689]: W1210 12:20:46.583625 4689 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda82874c6_baaf_4ecb_a5dd_57d6364ff8f0.slice/crio-66043a5ba5bafb899cac53ad23281bb7e10eb5ef2a82d34149a10cbd3c4b27e9 WatchSource:0}: Error finding container 66043a5ba5bafb899cac53ad23281bb7e10eb5ef2a82d34149a10cbd3c4b27e9: Status 404 returned error can't find the container with id 66043a5ba5bafb899cac53ad23281bb7e10eb5ef2a82d34149a10cbd3c4b27e9 Dec 10 12:20:47 crc kubenswrapper[4689]: I1210 12:20:47.115420 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Dec 10 12:20:47 crc kubenswrapper[4689]: I1210 12:20:47.153343 4689 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Dec 10 12:20:47 crc kubenswrapper[4689]: I1210 12:20:47.341910 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-bfc4b4dfc-5tvjz" event={"ID":"0d8c43ef-2ad3-4a36-9698-f3446165bed0","Type":"ContainerStarted","Data":"ba2a85be3e04e4e527aeae64f1df727531f6ff961c04163de8af7c2acd2a952d"} Dec 10 12:20:47 crc kubenswrapper[4689]: I1210 12:20:47.341959 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-bfc4b4dfc-5tvjz" event={"ID":"0d8c43ef-2ad3-4a36-9698-f3446165bed0","Type":"ContainerStarted","Data":"e416906868a8927a2c74268a819e7af37bf44fdc4af5a1c9b4add4ca049e9e58"} Dec 10 12:20:47 crc kubenswrapper[4689]: I1210 12:20:47.342053 4689 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-bfc4b4dfc-5tvjz" podUID="0d8c43ef-2ad3-4a36-9698-f3446165bed0" containerName="controller-manager" containerID="cri-o://ba2a85be3e04e4e527aeae64f1df727531f6ff961c04163de8af7c2acd2a952d" gracePeriod=30 Dec 10 12:20:47 crc kubenswrapper[4689]: I1210 12:20:47.342233 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-bfc4b4dfc-5tvjz" Dec 10 12:20:47 crc kubenswrapper[4689]: I1210 12:20:47.348019 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-bfc4b4dfc-5tvjz" Dec 10 12:20:47 crc kubenswrapper[4689]: I1210 12:20:47.348572 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6c77d95457-kdbvl" event={"ID":"671227b1-605b-4a79-9853-88328a254fe0","Type":"ContainerStarted","Data":"79ca007ed77b64f0333f00b5c4f966290aefacc166ad002291f44591dc9ffe10"} Dec 10 12:20:47 crc kubenswrapper[4689]: I1210 12:20:47.348608 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6c77d95457-kdbvl" event={"ID":"671227b1-605b-4a79-9853-88328a254fe0","Type":"ContainerStarted","Data":"195de0c093eb58855e2cbbcf0e7dda473ccdc039dfd20a2e1699517f2a32e2c6"} Dec 10 12:20:47 crc kubenswrapper[4689]: I1210 12:20:47.348713 4689 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6c77d95457-kdbvl" podUID="671227b1-605b-4a79-9853-88328a254fe0" containerName="route-controller-manager" containerID="cri-o://79ca007ed77b64f0333f00b5c4f966290aefacc166ad002291f44591dc9ffe10" gracePeriod=30 Dec 10 12:20:47 crc kubenswrapper[4689]: I1210 12:20:47.348984 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6c77d95457-kdbvl" Dec 10 12:20:47 crc kubenswrapper[4689]: I1210 12:20:47.352424 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-f4f7798bf-m7frn" event={"ID":"a82874c6-baaf-4ecb-a5dd-57d6364ff8f0","Type":"ContainerStarted","Data":"867ad2155a3a325573b7a4cb7aeddfea5946335589f19e7d7d26f36c2bda9186"} Dec 10 12:20:47 crc kubenswrapper[4689]: I1210 12:20:47.352483 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-f4f7798bf-m7frn" event={"ID":"a82874c6-baaf-4ecb-a5dd-57d6364ff8f0","Type":"ContainerStarted","Data":"66043a5ba5bafb899cac53ad23281bb7e10eb5ef2a82d34149a10cbd3c4b27e9"} Dec 10 12:20:47 crc kubenswrapper[4689]: I1210 12:20:47.352668 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-f4f7798bf-m7frn" Dec 10 12:20:47 crc kubenswrapper[4689]: I1210 12:20:47.360682 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-bfc4b4dfc-5tvjz" podStartSLOduration=5.360663585 podStartE2EDuration="5.360663585s" podCreationTimestamp="2025-12-10 12:20:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 12:20:47.358117319 +0000 UTC m=+315.146198487" watchObservedRunningTime="2025-12-10 12:20:47.360663585 +0000 UTC m=+315.148744723" Dec 10 12:20:47 crc 
kubenswrapper[4689]: I1210 12:20:47.385665 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6c77d95457-kdbvl" podStartSLOduration=5.385600069 podStartE2EDuration="5.385600069s" podCreationTimestamp="2025-12-10 12:20:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 12:20:47.378575488 +0000 UTC m=+315.166656616" watchObservedRunningTime="2025-12-10 12:20:47.385600069 +0000 UTC m=+315.173681227" Dec 10 12:20:47 crc kubenswrapper[4689]: I1210 12:20:47.392297 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-f4f7798bf-m7frn" Dec 10 12:20:47 crc kubenswrapper[4689]: I1210 12:20:47.401808 4689 patch_prober.go:28] interesting pod/route-controller-manager-6c77d95457-kdbvl container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.58:8443/healthz\": read tcp 10.217.0.2:45950->10.217.0.58:8443: read: connection reset by peer" start-of-body= Dec 10 12:20:47 crc kubenswrapper[4689]: I1210 12:20:47.401858 4689 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6c77d95457-kdbvl" podUID="671227b1-605b-4a79-9853-88328a254fe0" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.58:8443/healthz\": read tcp 10.217.0.2:45950->10.217.0.58:8443: read: connection reset by peer" Dec 10 12:20:47 crc kubenswrapper[4689]: I1210 12:20:47.413241 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Dec 10 12:20:47 crc kubenswrapper[4689]: I1210 12:20:47.439265 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-f4f7798bf-m7frn" podStartSLOduration=91.439248835 podStartE2EDuration="1m31.439248835s" podCreationTimestamp="2025-12-10 12:19:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 12:20:47.419204447 +0000 UTC m=+315.207285585" watchObservedRunningTime="2025-12-10 12:20:47.439248835 +0000 UTC m=+315.227329973" Dec 10 12:20:47 crc kubenswrapper[4689]: I1210 12:20:47.480877 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Dec 10 12:20:47 crc kubenswrapper[4689]: I1210 12:20:47.674950 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Dec 10 12:20:47 crc kubenswrapper[4689]: I1210 12:20:47.800495 4689 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-bfc4b4dfc-5tvjz" Dec 10 12:20:47 crc kubenswrapper[4689]: I1210 12:20:47.815855 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-route-controller-manager_route-controller-manager-6c77d95457-kdbvl_671227b1-605b-4a79-9853-88328a254fe0/route-controller-manager/0.log" Dec 10 12:20:47 crc kubenswrapper[4689]: I1210 12:20:47.816394 4689 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6c77d95457-kdbvl" Dec 10 12:20:47 crc kubenswrapper[4689]: I1210 12:20:47.824857 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Dec 10 12:20:47 crc kubenswrapper[4689]: I1210 12:20:47.830518 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-755cf4cbd7-tmq8q"] Dec 10 12:20:47 crc kubenswrapper[4689]: E1210 12:20:47.830830 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="671227b1-605b-4a79-9853-88328a254fe0" containerName="route-controller-manager" Dec 10 12:20:47 crc kubenswrapper[4689]: I1210 12:20:47.830860 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="671227b1-605b-4a79-9853-88328a254fe0" containerName="route-controller-manager" Dec 10 12:20:47 crc kubenswrapper[4689]: E1210 12:20:47.830888 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0d8c43ef-2ad3-4a36-9698-f3446165bed0" containerName="controller-manager" Dec 10 12:20:47 crc kubenswrapper[4689]: I1210 12:20:47.830897 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="0d8c43ef-2ad3-4a36-9698-f3446165bed0" containerName="controller-manager" Dec 10 12:20:47 crc kubenswrapper[4689]: I1210 12:20:47.831062 4689 memory_manager.go:354] "RemoveStaleState removing state" podUID="0d8c43ef-2ad3-4a36-9698-f3446165bed0" containerName="controller-manager" Dec 10 12:20:47 crc kubenswrapper[4689]: I1210 12:20:47.831074 4689 memory_manager.go:354] "RemoveStaleState removing state" podUID="671227b1-605b-4a79-9853-88328a254fe0" containerName="route-controller-manager" Dec 10 12:20:47 crc kubenswrapper[4689]: I1210 12:20:47.831458 4689 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-755cf4cbd7-tmq8q" Dec 10 12:20:47 crc kubenswrapper[4689]: I1210 12:20:47.851252 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-755cf4cbd7-tmq8q"] Dec 10 12:20:47 crc kubenswrapper[4689]: I1210 12:20:47.945640 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/671227b1-605b-4a79-9853-88328a254fe0-client-ca\") pod \"671227b1-605b-4a79-9853-88328a254fe0\" (UID: \"671227b1-605b-4a79-9853-88328a254fe0\") " Dec 10 12:20:47 crc kubenswrapper[4689]: I1210 12:20:47.945721 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0d8c43ef-2ad3-4a36-9698-f3446165bed0-serving-cert\") pod \"0d8c43ef-2ad3-4a36-9698-f3446165bed0\" (UID: \"0d8c43ef-2ad3-4a36-9698-f3446165bed0\") " Dec 10 12:20:47 crc kubenswrapper[4689]: I1210 12:20:47.945761 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0d8c43ef-2ad3-4a36-9698-f3446165bed0-config\") pod \"0d8c43ef-2ad3-4a36-9698-f3446165bed0\" (UID: \"0d8c43ef-2ad3-4a36-9698-f3446165bed0\") " Dec 10 12:20:47 crc kubenswrapper[4689]: I1210 12:20:47.945797 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0d8c43ef-2ad3-4a36-9698-f3446165bed0-client-ca\") pod \"0d8c43ef-2ad3-4a36-9698-f3446165bed0\" (UID: \"0d8c43ef-2ad3-4a36-9698-f3446165bed0\") " Dec 10 12:20:47 crc kubenswrapper[4689]: I1210 12:20:47.945834 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ct8h8\" (UniqueName: \"kubernetes.io/projected/0d8c43ef-2ad3-4a36-9698-f3446165bed0-kube-api-access-ct8h8\") pod \"0d8c43ef-2ad3-4a36-9698-f3446165bed0\" (UID: \"0d8c43ef-2ad3-4a36-9698-f3446165bed0\") " Dec 10 12:20:47 crc kubenswrapper[4689]: I1210 12:20:47.945870 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/671227b1-605b-4a79-9853-88328a254fe0-config\") pod \"671227b1-605b-4a79-9853-88328a254fe0\" (UID: \"671227b1-605b-4a79-9853-88328a254fe0\") " Dec 10 12:20:47 crc kubenswrapper[4689]: I1210 12:20:47.945893 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/0d8c43ef-2ad3-4a36-9698-f3446165bed0-proxy-ca-bundles\") pod \"0d8c43ef-2ad3-4a36-9698-f3446165bed0\" (UID: \"0d8c43ef-2ad3-4a36-9698-f3446165bed0\") " Dec 10 12:20:47 crc kubenswrapper[4689]: I1210 12:20:47.946590 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0d8c43ef-2ad3-4a36-9698-f3446165bed0-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "0d8c43ef-2ad3-4a36-9698-f3446165bed0" (UID: "0d8c43ef-2ad3-4a36-9698-f3446165bed0"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 12:20:47 crc kubenswrapper[4689]: I1210 12:20:47.946602 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/671227b1-605b-4a79-9853-88328a254fe0-client-ca" (OuterVolumeSpecName: "client-ca") pod "671227b1-605b-4a79-9853-88328a254fe0" (UID: "671227b1-605b-4a79-9853-88328a254fe0"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 12:20:47 crc kubenswrapper[4689]: I1210 12:20:47.946611 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0d8c43ef-2ad3-4a36-9698-f3446165bed0-client-ca" (OuterVolumeSpecName: "client-ca") pod "0d8c43ef-2ad3-4a36-9698-f3446165bed0" (UID: "0d8c43ef-2ad3-4a36-9698-f3446165bed0"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 12:20:47 crc kubenswrapper[4689]: I1210 12:20:47.946680 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/671227b1-605b-4a79-9853-88328a254fe0-config" (OuterVolumeSpecName: "config") pod "671227b1-605b-4a79-9853-88328a254fe0" (UID: "671227b1-605b-4a79-9853-88328a254fe0"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 12:20:47 crc kubenswrapper[4689]: I1210 12:20:47.946730 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/671227b1-605b-4a79-9853-88328a254fe0-serving-cert\") pod \"671227b1-605b-4a79-9853-88328a254fe0\" (UID: \"671227b1-605b-4a79-9853-88328a254fe0\") " Dec 10 12:20:47 crc kubenswrapper[4689]: I1210 12:20:47.946777 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8njg5\" (UniqueName: \"kubernetes.io/projected/671227b1-605b-4a79-9853-88328a254fe0-kube-api-access-8njg5\") pod \"671227b1-605b-4a79-9853-88328a254fe0\" (UID: \"671227b1-605b-4a79-9853-88328a254fe0\") " Dec 10 12:20:47 crc kubenswrapper[4689]: I1210 12:20:47.947009 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t9jxd\" (UniqueName: \"kubernetes.io/projected/9878739d-e4ed-446c-82e9-3bf95dee5f97-kube-api-access-t9jxd\") pod \"controller-manager-755cf4cbd7-tmq8q\" (UID: \"9878739d-e4ed-446c-82e9-3bf95dee5f97\") " pod="openshift-controller-manager/controller-manager-755cf4cbd7-tmq8q" Dec 10 12:20:47 crc kubenswrapper[4689]: I1210 12:20:47.947098 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9878739d-e4ed-446c-82e9-3bf95dee5f97-config\") pod \"controller-manager-755cf4cbd7-tmq8q\" (UID: \"9878739d-e4ed-446c-82e9-3bf95dee5f97\") " pod="openshift-controller-manager/controller-manager-755cf4cbd7-tmq8q" Dec 10 12:20:47 crc kubenswrapper[4689]: I1210 12:20:47.947206 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9878739d-e4ed-446c-82e9-3bf95dee5f97-client-ca\") pod \"controller-manager-755cf4cbd7-tmq8q\" (UID: \"9878739d-e4ed-446c-82e9-3bf95dee5f97\") " pod="openshift-controller-manager/controller-manager-755cf4cbd7-tmq8q" Dec 10 12:20:47 crc kubenswrapper[4689]: I1210 12:20:47.947220 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0d8c43ef-2ad3-4a36-9698-f3446165bed0-config" (OuterVolumeSpecName: "config") pod "0d8c43ef-2ad3-4a36-9698-f3446165bed0" (UID: "0d8c43ef-2ad3-4a36-9698-f3446165bed0"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 12:20:47 crc kubenswrapper[4689]: I1210 12:20:47.947287 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/9878739d-e4ed-446c-82e9-3bf95dee5f97-proxy-ca-bundles\") pod \"controller-manager-755cf4cbd7-tmq8q\" (UID: \"9878739d-e4ed-446c-82e9-3bf95dee5f97\") " pod="openshift-controller-manager/controller-manager-755cf4cbd7-tmq8q" Dec 10 12:20:47 crc kubenswrapper[4689]: I1210 12:20:47.947337 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9878739d-e4ed-446c-82e9-3bf95dee5f97-serving-cert\") pod \"controller-manager-755cf4cbd7-tmq8q\" (UID: \"9878739d-e4ed-446c-82e9-3bf95dee5f97\") " pod="openshift-controller-manager/controller-manager-755cf4cbd7-tmq8q" Dec 10 12:20:47 crc kubenswrapper[4689]: I1210 12:20:47.951314 4689 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/671227b1-605b-4a79-9853-88328a254fe0-client-ca\") on node \"crc\" DevicePath \"\"" Dec 10 12:20:47 crc kubenswrapper[4689]: I1210 12:20:47.960165 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0d8c43ef-2ad3-4a36-9698-f3446165bed0-kube-api-access-ct8h8" (OuterVolumeSpecName: "kube-api-access-ct8h8") pod "0d8c43ef-2ad3-4a36-9698-f3446165bed0" (UID: "0d8c43ef-2ad3-4a36-9698-f3446165bed0"). InnerVolumeSpecName "kube-api-access-ct8h8". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 12:20:47 crc kubenswrapper[4689]: I1210 12:20:47.960739 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/671227b1-605b-4a79-9853-88328a254fe0-kube-api-access-8njg5" (OuterVolumeSpecName: "kube-api-access-8njg5") pod "671227b1-605b-4a79-9853-88328a254fe0" (UID: "671227b1-605b-4a79-9853-88328a254fe0"). InnerVolumeSpecName "kube-api-access-8njg5". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 12:20:47 crc kubenswrapper[4689]: I1210 12:20:47.961048 4689 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0d8c43ef-2ad3-4a36-9698-f3446165bed0-config\") on node \"crc\" DevicePath \"\"" Dec 10 12:20:47 crc kubenswrapper[4689]: I1210 12:20:47.961131 4689 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0d8c43ef-2ad3-4a36-9698-f3446165bed0-client-ca\") on node \"crc\" DevicePath \"\"" Dec 10 12:20:47 crc kubenswrapper[4689]: I1210 12:20:47.961166 4689 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/671227b1-605b-4a79-9853-88328a254fe0-config\") on node \"crc\" DevicePath \"\"" Dec 10 12:20:47 crc kubenswrapper[4689]: I1210 12:20:47.961204 4689 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/0d8c43ef-2ad3-4a36-9698-f3446165bed0-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Dec 10 12:20:47 crc kubenswrapper[4689]: I1210 12:20:47.961545 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0d8c43ef-2ad3-4a36-9698-f3446165bed0-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0d8c43ef-2ad3-4a36-9698-f3446165bed0" (UID: "0d8c43ef-2ad3-4a36-9698-f3446165bed0"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 12:20:47 crc kubenswrapper[4689]: I1210 12:20:47.961586 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/671227b1-605b-4a79-9853-88328a254fe0-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "671227b1-605b-4a79-9853-88328a254fe0" (UID: "671227b1-605b-4a79-9853-88328a254fe0"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 12:20:48 crc kubenswrapper[4689]: I1210 12:20:48.063769 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t9jxd\" (UniqueName: \"kubernetes.io/projected/9878739d-e4ed-446c-82e9-3bf95dee5f97-kube-api-access-t9jxd\") pod \"controller-manager-755cf4cbd7-tmq8q\" (UID: \"9878739d-e4ed-446c-82e9-3bf95dee5f97\") " pod="openshift-controller-manager/controller-manager-755cf4cbd7-tmq8q" Dec 10 12:20:48 crc kubenswrapper[4689]: I1210 12:20:48.063823 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9878739d-e4ed-446c-82e9-3bf95dee5f97-config\") pod \"controller-manager-755cf4cbd7-tmq8q\" (UID: \"9878739d-e4ed-446c-82e9-3bf95dee5f97\") " pod="openshift-controller-manager/controller-manager-755cf4cbd7-tmq8q" Dec 10 12:20:48 crc kubenswrapper[4689]: I1210 12:20:48.063858 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9878739d-e4ed-446c-82e9-3bf95dee5f97-client-ca\") pod \"controller-manager-755cf4cbd7-tmq8q\" (UID: \"9878739d-e4ed-446c-82e9-3bf95dee5f97\") " pod="openshift-controller-manager/controller-manager-755cf4cbd7-tmq8q" Dec 10 12:20:48 crc kubenswrapper[4689]: I1210 12:20:48.063877 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/9878739d-e4ed-446c-82e9-3bf95dee5f97-proxy-ca-bundles\") pod \"controller-manager-755cf4cbd7-tmq8q\" (UID: \"9878739d-e4ed-446c-82e9-3bf95dee5f97\") " pod="openshift-controller-manager/controller-manager-755cf4cbd7-tmq8q" Dec 10 12:20:48 crc kubenswrapper[4689]: I1210 12:20:48.063892 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9878739d-e4ed-446c-82e9-3bf95dee5f97-serving-cert\") pod \"controller-manager-755cf4cbd7-tmq8q\" (UID: \"9878739d-e4ed-446c-82e9-3bf95dee5f97\") " pod="openshift-controller-manager/controller-manager-755cf4cbd7-tmq8q" Dec 10 12:20:48 crc kubenswrapper[4689]: I1210 12:20:48.063986 4689 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8njg5\" (UniqueName: \"kubernetes.io/projected/671227b1-605b-4a79-9853-88328a254fe0-kube-api-access-8njg5\") on node \"crc\" DevicePath \"\"" Dec 10 12:20:48 crc kubenswrapper[4689]: I1210 12:20:48.063998 4689 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0d8c43ef-2ad3-4a36-9698-f3446165bed0-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 10 12:20:48 crc kubenswrapper[4689]: I1210 12:20:48.064008 4689 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ct8h8\" (UniqueName: \"kubernetes.io/projected/0d8c43ef-2ad3-4a36-9698-f3446165bed0-kube-api-access-ct8h8\") on node \"crc\" DevicePath \"\"" Dec 10 12:20:48 crc kubenswrapper[4689]: I1210 12:20:48.064017 4689 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/671227b1-605b-4a79-9853-88328a254fe0-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 10 12:20:48 crc kubenswrapper[4689]: I1210 12:20:48.065514 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9878739d-e4ed-446c-82e9-3bf95dee5f97-client-ca\") pod \"controller-manager-755cf4cbd7-tmq8q\" (UID: \"9878739d-e4ed-446c-82e9-3bf95dee5f97\") " pod="openshift-controller-manager/controller-manager-755cf4cbd7-tmq8q" Dec 10 12:20:48 crc kubenswrapper[4689]: I1210 12:20:48.066439 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9878739d-e4ed-446c-82e9-3bf95dee5f97-config\") pod \"controller-manager-755cf4cbd7-tmq8q\" (UID: \"9878739d-e4ed-446c-82e9-3bf95dee5f97\") " pod="openshift-controller-manager/controller-manager-755cf4cbd7-tmq8q" Dec 10 12:20:48 crc kubenswrapper[4689]: I1210 12:20:48.066464 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/9878739d-e4ed-446c-82e9-3bf95dee5f97-proxy-ca-bundles\") pod \"controller-manager-755cf4cbd7-tmq8q\" (UID: \"9878739d-e4ed-446c-82e9-3bf95dee5f97\") " pod="openshift-controller-manager/controller-manager-755cf4cbd7-tmq8q" Dec 10 12:20:48 crc kubenswrapper[4689]: I1210 12:20:48.070133 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9878739d-e4ed-446c-82e9-3bf95dee5f97-serving-cert\") pod \"controller-manager-755cf4cbd7-tmq8q\" (UID: \"9878739d-e4ed-446c-82e9-3bf95dee5f97\") " pod="openshift-controller-manager/controller-manager-755cf4cbd7-tmq8q" Dec 10 12:20:48 crc kubenswrapper[4689]: I1210 12:20:48.094476 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t9jxd\" (UniqueName: \"kubernetes.io/projected/9878739d-e4ed-446c-82e9-3bf95dee5f97-kube-api-access-t9jxd\") pod \"controller-manager-755cf4cbd7-tmq8q\" (UID: \"9878739d-e4ed-446c-82e9-3bf95dee5f97\") " pod="openshift-controller-manager/controller-manager-755cf4cbd7-tmq8q" Dec 10 12:20:48 crc kubenswrapper[4689]: I1210 12:20:48.160330 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-755cf4cbd7-tmq8q" Dec 10 12:20:48 crc kubenswrapper[4689]: I1210 12:20:48.190076 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Dec 10 12:20:48 crc kubenswrapper[4689]: I1210 12:20:48.363286 4689 generic.go:334] "Generic (PLEG): container finished" podID="0d8c43ef-2ad3-4a36-9698-f3446165bed0" containerID="ba2a85be3e04e4e527aeae64f1df727531f6ff961c04163de8af7c2acd2a952d" exitCode=0 Dec 10 12:20:48 crc kubenswrapper[4689]: I1210 12:20:48.363362 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-bfc4b4dfc-5tvjz" event={"ID":"0d8c43ef-2ad3-4a36-9698-f3446165bed0","Type":"ContainerDied","Data":"ba2a85be3e04e4e527aeae64f1df727531f6ff961c04163de8af7c2acd2a952d"} Dec 10 12:20:48 crc kubenswrapper[4689]: I1210 12:20:48.363367 4689 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-bfc4b4dfc-5tvjz" Dec 10 12:20:48 crc kubenswrapper[4689]: I1210 12:20:48.363392 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-bfc4b4dfc-5tvjz" event={"ID":"0d8c43ef-2ad3-4a36-9698-f3446165bed0","Type":"ContainerDied","Data":"e416906868a8927a2c74268a819e7af37bf44fdc4af5a1c9b4add4ca049e9e58"} Dec 10 12:20:48 crc kubenswrapper[4689]: I1210 12:20:48.363416 4689 scope.go:117] "RemoveContainer" containerID="ba2a85be3e04e4e527aeae64f1df727531f6ff961c04163de8af7c2acd2a952d" Dec 10 12:20:48 crc kubenswrapper[4689]: I1210 12:20:48.374684 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-route-controller-manager_route-controller-manager-6c77d95457-kdbvl_671227b1-605b-4a79-9853-88328a254fe0/route-controller-manager/0.log" Dec 10 12:20:48 crc kubenswrapper[4689]: I1210 12:20:48.374736 4689 generic.go:334] "Generic (PLEG): container finished" podID="671227b1-605b-4a79-9853-88328a254fe0" containerID="79ca007ed77b64f0333f00b5c4f966290aefacc166ad002291f44591dc9ffe10" exitCode=255 Dec 10 12:20:48 crc kubenswrapper[4689]: I1210 12:20:48.374823 4689 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6c77d95457-kdbvl" Dec 10 12:20:48 crc kubenswrapper[4689]: I1210 12:20:48.374823 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6c77d95457-kdbvl" event={"ID":"671227b1-605b-4a79-9853-88328a254fe0","Type":"ContainerDied","Data":"79ca007ed77b64f0333f00b5c4f966290aefacc166ad002291f44591dc9ffe10"} Dec 10 12:20:48 crc kubenswrapper[4689]: I1210 12:20:48.374895 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6c77d95457-kdbvl" event={"ID":"671227b1-605b-4a79-9853-88328a254fe0","Type":"ContainerDied","Data":"195de0c093eb58855e2cbbcf0e7dda473ccdc039dfd20a2e1699517f2a32e2c6"} Dec 10 12:20:48 crc kubenswrapper[4689]: I1210 12:20:48.396486 4689 scope.go:117] "RemoveContainer" containerID="ba2a85be3e04e4e527aeae64f1df727531f6ff961c04163de8af7c2acd2a952d" Dec 10 12:20:48 crc kubenswrapper[4689]: E1210 12:20:48.397221 4689 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ba2a85be3e04e4e527aeae64f1df727531f6ff961c04163de8af7c2acd2a952d\": container with ID starting with ba2a85be3e04e4e527aeae64f1df727531f6ff961c04163de8af7c2acd2a952d not found: ID does not exist" containerID="ba2a85be3e04e4e527aeae64f1df727531f6ff961c04163de8af7c2acd2a952d" Dec 10 12:20:48 crc kubenswrapper[4689]: I1210 12:20:48.397305 4689 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ba2a85be3e04e4e527aeae64f1df727531f6ff961c04163de8af7c2acd2a952d"} err="failed to get container status \"ba2a85be3e04e4e527aeae64f1df727531f6ff961c04163de8af7c2acd2a952d\": rpc error: code = NotFound desc = could not find container \"ba2a85be3e04e4e527aeae64f1df727531f6ff961c04163de8af7c2acd2a952d\": container with ID starting with ba2a85be3e04e4e527aeae64f1df727531f6ff961c04163de8af7c2acd2a952d not found: ID does not exist" Dec 10 12:20:48 crc kubenswrapper[4689]: I1210 12:20:48.397380 4689 scope.go:117] "RemoveContainer" containerID="79ca007ed77b64f0333f00b5c4f966290aefacc166ad002291f44591dc9ffe10" Dec 10 12:20:48 crc kubenswrapper[4689]: I1210 12:20:48.398722 4689 
kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-bfc4b4dfc-5tvjz"] Dec 10 12:20:48 crc kubenswrapper[4689]: I1210 12:20:48.416367 4689 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-bfc4b4dfc-5tvjz"] Dec 10 12:20:48 crc kubenswrapper[4689]: I1210 12:20:48.420551 4689 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6c77d95457-kdbvl"] Dec 10 12:20:48 crc kubenswrapper[4689]: I1210 12:20:48.425251 4689 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6c77d95457-kdbvl"] Dec 10 12:20:48 crc kubenswrapper[4689]: I1210 12:20:48.439747 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Dec 10 12:20:48 crc kubenswrapper[4689]: I1210 12:20:48.440037 4689 scope.go:117] "RemoveContainer" containerID="79ca007ed77b64f0333f00b5c4f966290aefacc166ad002291f44591dc9ffe10" Dec 10 12:20:48 crc kubenswrapper[4689]: E1210 12:20:48.440589 4689 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"79ca007ed77b64f0333f00b5c4f966290aefacc166ad002291f44591dc9ffe10\": container with ID starting with 79ca007ed77b64f0333f00b5c4f966290aefacc166ad002291f44591dc9ffe10 not found: ID does not exist" containerID="79ca007ed77b64f0333f00b5c4f966290aefacc166ad002291f44591dc9ffe10" Dec 10 12:20:48 crc kubenswrapper[4689]: I1210 12:20:48.440637 4689 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"79ca007ed77b64f0333f00b5c4f966290aefacc166ad002291f44591dc9ffe10"} err="failed to get container status \"79ca007ed77b64f0333f00b5c4f966290aefacc166ad002291f44591dc9ffe10\": rpc error: code = NotFound desc = could not find container \"79ca007ed77b64f0333f00b5c4f966290aefacc166ad002291f44591dc9ffe10\": container with ID starting with 79ca007ed77b64f0333f00b5c4f966290aefacc166ad002291f44591dc9ffe10 not found: ID does not exist" Dec 10 12:20:48 crc kubenswrapper[4689]: I1210 12:20:48.475183 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-755cf4cbd7-tmq8q"] Dec 10 12:20:48 crc kubenswrapper[4689]: I1210 12:20:48.504470 4689 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0d8c43ef-2ad3-4a36-9698-f3446165bed0" path="/var/lib/kubelet/pods/0d8c43ef-2ad3-4a36-9698-f3446165bed0/volumes" Dec 10 12:20:48 crc kubenswrapper[4689]: I1210 12:20:48.505131 4689 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="671227b1-605b-4a79-9853-88328a254fe0" path="/var/lib/kubelet/pods/671227b1-605b-4a79-9853-88328a254fe0/volumes" Dec 10 12:20:48 crc kubenswrapper[4689]: I1210 12:20:48.551660 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Dec 10 12:20:48 crc kubenswrapper[4689]: I1210 12:20:48.677581 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Dec 10 12:20:48 crc kubenswrapper[4689]: I1210 12:20:48.763065 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Dec 10 12:20:48 crc kubenswrapper[4689]: I1210 12:20:48.910337 4689 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-dns-operator"/"metrics-tls" Dec 10 12:20:48 crc kubenswrapper[4689]: I1210 12:20:48.940237 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Dec 10 12:20:49 crc kubenswrapper[4689]: I1210 12:20:49.007635 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Dec 10 12:20:49 crc kubenswrapper[4689]: I1210 12:20:49.017825 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Dec 10 12:20:49 crc kubenswrapper[4689]: I1210 12:20:49.043603 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Dec 10 12:20:49 crc kubenswrapper[4689]: I1210 12:20:49.071579 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Dec 10 12:20:49 crc kubenswrapper[4689]: I1210 12:20:49.180396 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Dec 10 12:20:49 crc kubenswrapper[4689]: I1210 12:20:49.383801 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-755cf4cbd7-tmq8q" event={"ID":"9878739d-e4ed-446c-82e9-3bf95dee5f97","Type":"ContainerStarted","Data":"35b93cecd19f61b8e87c3654028b144e9091df235a1c6c407fdf7223051bc828"} Dec 10 12:20:49 crc kubenswrapper[4689]: I1210 12:20:49.383877 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-755cf4cbd7-tmq8q" event={"ID":"9878739d-e4ed-446c-82e9-3bf95dee5f97","Type":"ContainerStarted","Data":"5bca3b5a69087e2624449bcff3c5f95f7b3ab3661dcaf0eb488e4341ffc6dea4"} Dec 10 12:20:49 crc kubenswrapper[4689]: I1210 12:20:49.402751 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-755cf4cbd7-tmq8q" podStartSLOduration=3.402726599 podStartE2EDuration="3.402726599s" podCreationTimestamp="2025-12-10 12:20:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 12:20:49.39811849 +0000 UTC m=+317.186199648" watchObservedRunningTime="2025-12-10 12:20:49.402726599 +0000 UTC m=+317.190807727" Dec 10 12:20:49 crc kubenswrapper[4689]: I1210 12:20:49.590542 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Dec 10 12:20:49 crc kubenswrapper[4689]: I1210 12:20:49.680397 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Dec 10 12:20:49 crc kubenswrapper[4689]: I1210 12:20:49.913939 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Dec 10 12:20:49 crc kubenswrapper[4689]: I1210 12:20:49.957842 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Dec 10 12:20:49 crc kubenswrapper[4689]: I1210 12:20:49.996050 4689 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Dec 10 12:20:50 crc kubenswrapper[4689]: I1210 12:20:50.163933 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Dec 10 12:20:50 crc kubenswrapper[4689]: 
I1210 12:20:50.389669 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-755cf4cbd7-tmq8q" Dec 10 12:20:50 crc kubenswrapper[4689]: I1210 12:20:50.395106 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-755cf4cbd7-tmq8q" Dec 10 12:20:50 crc kubenswrapper[4689]: I1210 12:20:50.396577 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Dec 10 12:20:50 crc kubenswrapper[4689]: I1210 12:20:50.461418 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Dec 10 12:20:50 crc kubenswrapper[4689]: I1210 12:20:50.723295 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Dec 10 12:20:50 crc kubenswrapper[4689]: I1210 12:20:50.724479 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7685dbf6b7-cn452"] Dec 10 12:20:50 crc kubenswrapper[4689]: I1210 12:20:50.725632 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7685dbf6b7-cn452" Dec 10 12:20:50 crc kubenswrapper[4689]: I1210 12:20:50.727467 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Dec 10 12:20:50 crc kubenswrapper[4689]: I1210 12:20:50.728262 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Dec 10 12:20:50 crc kubenswrapper[4689]: I1210 12:20:50.728633 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Dec 10 12:20:50 crc kubenswrapper[4689]: I1210 12:20:50.729697 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Dec 10 12:20:50 crc kubenswrapper[4689]: I1210 12:20:50.729949 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Dec 10 12:20:50 crc kubenswrapper[4689]: I1210 12:20:50.730625 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Dec 10 12:20:50 crc kubenswrapper[4689]: I1210 12:20:50.742139 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7685dbf6b7-cn452"] Dec 10 12:20:50 crc kubenswrapper[4689]: I1210 12:20:50.777345 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Dec 10 12:20:50 crc kubenswrapper[4689]: I1210 12:20:50.908718 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lhkq2\" (UniqueName: \"kubernetes.io/projected/c0ba491f-6b7f-4f09-84ab-e1227f689d45-kube-api-access-lhkq2\") pod \"route-controller-manager-7685dbf6b7-cn452\" (UID: \"c0ba491f-6b7f-4f09-84ab-e1227f689d45\") " pod="openshift-route-controller-manager/route-controller-manager-7685dbf6b7-cn452" Dec 10 12:20:50 crc kubenswrapper[4689]: I1210 12:20:50.909342 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/c0ba491f-6b7f-4f09-84ab-e1227f689d45-config\") pod \"route-controller-manager-7685dbf6b7-cn452\" (UID: \"c0ba491f-6b7f-4f09-84ab-e1227f689d45\") " pod="openshift-route-controller-manager/route-controller-manager-7685dbf6b7-cn452" Dec 10 12:20:50 crc kubenswrapper[4689]: I1210 12:20:50.909532 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c0ba491f-6b7f-4f09-84ab-e1227f689d45-serving-cert\") pod \"route-controller-manager-7685dbf6b7-cn452\" (UID: \"c0ba491f-6b7f-4f09-84ab-e1227f689d45\") " pod="openshift-route-controller-manager/route-controller-manager-7685dbf6b7-cn452" Dec 10 12:20:50 crc kubenswrapper[4689]: I1210 12:20:50.909658 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c0ba491f-6b7f-4f09-84ab-e1227f689d45-client-ca\") pod \"route-controller-manager-7685dbf6b7-cn452\" (UID: \"c0ba491f-6b7f-4f09-84ab-e1227f689d45\") " pod="openshift-route-controller-manager/route-controller-manager-7685dbf6b7-cn452" Dec 10 12:20:51 crc kubenswrapper[4689]: I1210 12:20:51.011766 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c0ba491f-6b7f-4f09-84ab-e1227f689d45-client-ca\") pod \"route-controller-manager-7685dbf6b7-cn452\" (UID: \"c0ba491f-6b7f-4f09-84ab-e1227f689d45\") " pod="openshift-route-controller-manager/route-controller-manager-7685dbf6b7-cn452" Dec 10 12:20:51 crc kubenswrapper[4689]: I1210 12:20:51.012303 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lhkq2\" (UniqueName: \"kubernetes.io/projected/c0ba491f-6b7f-4f09-84ab-e1227f689d45-kube-api-access-lhkq2\") pod \"route-controller-manager-7685dbf6b7-cn452\" (UID: \"c0ba491f-6b7f-4f09-84ab-e1227f689d45\") " pod="openshift-route-controller-manager/route-controller-manager-7685dbf6b7-cn452" Dec 10 12:20:51 crc kubenswrapper[4689]: I1210 12:20:51.012526 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c0ba491f-6b7f-4f09-84ab-e1227f689d45-config\") pod \"route-controller-manager-7685dbf6b7-cn452\" (UID: \"c0ba491f-6b7f-4f09-84ab-e1227f689d45\") " pod="openshift-route-controller-manager/route-controller-manager-7685dbf6b7-cn452" Dec 10 12:20:51 crc kubenswrapper[4689]: I1210 12:20:51.012742 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c0ba491f-6b7f-4f09-84ab-e1227f689d45-serving-cert\") pod \"route-controller-manager-7685dbf6b7-cn452\" (UID: \"c0ba491f-6b7f-4f09-84ab-e1227f689d45\") " pod="openshift-route-controller-manager/route-controller-manager-7685dbf6b7-cn452" Dec 10 12:20:51 crc kubenswrapper[4689]: I1210 12:20:51.013640 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c0ba491f-6b7f-4f09-84ab-e1227f689d45-client-ca\") pod \"route-controller-manager-7685dbf6b7-cn452\" (UID: \"c0ba491f-6b7f-4f09-84ab-e1227f689d45\") " pod="openshift-route-controller-manager/route-controller-manager-7685dbf6b7-cn452" Dec 10 12:20:51 crc kubenswrapper[4689]: I1210 12:20:51.014966 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c0ba491f-6b7f-4f09-84ab-e1227f689d45-config\") 
pod \"route-controller-manager-7685dbf6b7-cn452\" (UID: \"c0ba491f-6b7f-4f09-84ab-e1227f689d45\") " pod="openshift-route-controller-manager/route-controller-manager-7685dbf6b7-cn452" Dec 10 12:20:51 crc kubenswrapper[4689]: I1210 12:20:51.021787 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c0ba491f-6b7f-4f09-84ab-e1227f689d45-serving-cert\") pod \"route-controller-manager-7685dbf6b7-cn452\" (UID: \"c0ba491f-6b7f-4f09-84ab-e1227f689d45\") " pod="openshift-route-controller-manager/route-controller-manager-7685dbf6b7-cn452" Dec 10 12:20:51 crc kubenswrapper[4689]: I1210 12:20:51.040188 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lhkq2\" (UniqueName: \"kubernetes.io/projected/c0ba491f-6b7f-4f09-84ab-e1227f689d45-kube-api-access-lhkq2\") pod \"route-controller-manager-7685dbf6b7-cn452\" (UID: \"c0ba491f-6b7f-4f09-84ab-e1227f689d45\") " pod="openshift-route-controller-manager/route-controller-manager-7685dbf6b7-cn452" Dec 10 12:20:51 crc kubenswrapper[4689]: I1210 12:20:51.048285 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7685dbf6b7-cn452" Dec 10 12:20:51 crc kubenswrapper[4689]: I1210 12:20:51.242408 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Dec 10 12:20:51 crc kubenswrapper[4689]: I1210 12:20:51.486030 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7685dbf6b7-cn452"] Dec 10 12:20:51 crc kubenswrapper[4689]: W1210 12:20:51.495058 4689 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc0ba491f_6b7f_4f09_84ab_e1227f689d45.slice/crio-68b528c2075638ee1fb72a37d64b715c300a3d4d6f4c01586c8680a5511fd2ee WatchSource:0}: Error finding container 68b528c2075638ee1fb72a37d64b715c300a3d4d6f4c01586c8680a5511fd2ee: Status 404 returned error can't find the container with id 68b528c2075638ee1fb72a37d64b715c300a3d4d6f4c01586c8680a5511fd2ee Dec 10 12:20:51 crc kubenswrapper[4689]: I1210 12:20:51.505056 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Dec 10 12:20:52 crc kubenswrapper[4689]: I1210 12:20:52.405270 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7685dbf6b7-cn452" event={"ID":"c0ba491f-6b7f-4f09-84ab-e1227f689d45","Type":"ContainerStarted","Data":"8ec0dd490205a8cbdb0b53dec6600475a092025d9e69595b98fc16f3d080be21"} Dec 10 12:20:52 crc kubenswrapper[4689]: I1210 12:20:52.405598 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7685dbf6b7-cn452" event={"ID":"c0ba491f-6b7f-4f09-84ab-e1227f689d45","Type":"ContainerStarted","Data":"68b528c2075638ee1fb72a37d64b715c300a3d4d6f4c01586c8680a5511fd2ee"} Dec 10 12:20:52 crc kubenswrapper[4689]: I1210 12:20:52.405766 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-7685dbf6b7-cn452" Dec 10 12:20:52 crc kubenswrapper[4689]: I1210 12:20:52.413239 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-7685dbf6b7-cn452" Dec 10 12:20:52 
crc kubenswrapper[4689]: I1210 12:20:52.429367 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-7685dbf6b7-cn452" podStartSLOduration=6.429335313 podStartE2EDuration="6.429335313s" podCreationTimestamp="2025-12-10 12:20:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 12:20:52.423553564 +0000 UTC m=+320.211634722" watchObservedRunningTime="2025-12-10 12:20:52.429335313 +0000 UTC m=+320.217416501" Dec 10 12:20:53 crc kubenswrapper[4689]: I1210 12:20:53.144276 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Dec 10 12:20:53 crc kubenswrapper[4689]: I1210 12:20:53.165422 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Dec 10 12:20:53 crc kubenswrapper[4689]: I1210 12:20:53.450137 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Dec 10 12:20:53 crc kubenswrapper[4689]: I1210 12:20:53.468640 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Dec 10 12:20:53 crc kubenswrapper[4689]: I1210 12:20:53.623346 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Dec 10 12:20:53 crc kubenswrapper[4689]: I1210 12:20:53.766449 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Dec 10 12:20:54 crc kubenswrapper[4689]: I1210 12:20:54.187798 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Dec 10 12:20:54 crc kubenswrapper[4689]: I1210 12:20:54.533876 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Dec 10 12:20:54 crc kubenswrapper[4689]: I1210 12:20:54.767255 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Dec 10 12:20:54 crc kubenswrapper[4689]: I1210 12:20:54.920419 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Dec 10 12:20:56 crc kubenswrapper[4689]: I1210 12:20:56.022814 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Dec 10 12:20:56 crc kubenswrapper[4689]: I1210 12:20:56.314112 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Dec 10 12:20:56 crc kubenswrapper[4689]: I1210 12:20:56.647675 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Dec 10 12:20:56 crc kubenswrapper[4689]: I1210 12:20:56.977895 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Dec 10 12:20:57 crc kubenswrapper[4689]: I1210 12:20:57.911094 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Dec 10 12:20:59 crc kubenswrapper[4689]: I1210 12:20:59.731612 4689 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Dec 10 12:21:01 crc kubenswrapper[4689]: I1210 12:21:01.008145 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Dec 10 12:21:01 crc kubenswrapper[4689]: I1210 12:21:01.195307 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Dec 10 12:21:13 crc kubenswrapper[4689]: I1210 12:21:13.429067 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-8v6mm"] Dec 10 12:21:13 crc kubenswrapper[4689]: I1210 12:21:13.430495 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-8v6mm" Dec 10 12:21:13 crc kubenswrapper[4689]: I1210 12:21:13.456194 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-8v6mm"] Dec 10 12:21:13 crc kubenswrapper[4689]: I1210 12:21:13.573277 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/03f83dfc-2fc4-4d6b-8500-f83e78f73724-trusted-ca\") pod \"image-registry-66df7c8f76-8v6mm\" (UID: \"03f83dfc-2fc4-4d6b-8500-f83e78f73724\") " pod="openshift-image-registry/image-registry-66df7c8f76-8v6mm" Dec 10 12:21:13 crc kubenswrapper[4689]: I1210 12:21:13.573324 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sbpqp\" (UniqueName: \"kubernetes.io/projected/03f83dfc-2fc4-4d6b-8500-f83e78f73724-kube-api-access-sbpqp\") pod \"image-registry-66df7c8f76-8v6mm\" (UID: \"03f83dfc-2fc4-4d6b-8500-f83e78f73724\") " pod="openshift-image-registry/image-registry-66df7c8f76-8v6mm" Dec 10 12:21:13 crc kubenswrapper[4689]: I1210 12:21:13.573346 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/03f83dfc-2fc4-4d6b-8500-f83e78f73724-registry-tls\") pod \"image-registry-66df7c8f76-8v6mm\" (UID: \"03f83dfc-2fc4-4d6b-8500-f83e78f73724\") " pod="openshift-image-registry/image-registry-66df7c8f76-8v6mm" Dec 10 12:21:13 crc kubenswrapper[4689]: I1210 12:21:13.573369 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/03f83dfc-2fc4-4d6b-8500-f83e78f73724-ca-trust-extracted\") pod \"image-registry-66df7c8f76-8v6mm\" (UID: \"03f83dfc-2fc4-4d6b-8500-f83e78f73724\") " pod="openshift-image-registry/image-registry-66df7c8f76-8v6mm" Dec 10 12:21:13 crc kubenswrapper[4689]: I1210 12:21:13.573421 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/03f83dfc-2fc4-4d6b-8500-f83e78f73724-registry-certificates\") pod \"image-registry-66df7c8f76-8v6mm\" (UID: \"03f83dfc-2fc4-4d6b-8500-f83e78f73724\") " pod="openshift-image-registry/image-registry-66df7c8f76-8v6mm" Dec 10 12:21:13 crc kubenswrapper[4689]: I1210 12:21:13.573446 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/03f83dfc-2fc4-4d6b-8500-f83e78f73724-bound-sa-token\") pod \"image-registry-66df7c8f76-8v6mm\" (UID: \"03f83dfc-2fc4-4d6b-8500-f83e78f73724\") " 
pod="openshift-image-registry/image-registry-66df7c8f76-8v6mm" Dec 10 12:21:13 crc kubenswrapper[4689]: I1210 12:21:13.573465 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/03f83dfc-2fc4-4d6b-8500-f83e78f73724-installation-pull-secrets\") pod \"image-registry-66df7c8f76-8v6mm\" (UID: \"03f83dfc-2fc4-4d6b-8500-f83e78f73724\") " pod="openshift-image-registry/image-registry-66df7c8f76-8v6mm" Dec 10 12:21:13 crc kubenswrapper[4689]: I1210 12:21:13.573490 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-8v6mm\" (UID: \"03f83dfc-2fc4-4d6b-8500-f83e78f73724\") " pod="openshift-image-registry/image-registry-66df7c8f76-8v6mm" Dec 10 12:21:13 crc kubenswrapper[4689]: I1210 12:21:13.602513 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-8v6mm\" (UID: \"03f83dfc-2fc4-4d6b-8500-f83e78f73724\") " pod="openshift-image-registry/image-registry-66df7c8f76-8v6mm" Dec 10 12:21:13 crc kubenswrapper[4689]: I1210 12:21:13.674422 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/03f83dfc-2fc4-4d6b-8500-f83e78f73724-registry-certificates\") pod \"image-registry-66df7c8f76-8v6mm\" (UID: \"03f83dfc-2fc4-4d6b-8500-f83e78f73724\") " pod="openshift-image-registry/image-registry-66df7c8f76-8v6mm" Dec 10 12:21:13 crc kubenswrapper[4689]: I1210 12:21:13.674475 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/03f83dfc-2fc4-4d6b-8500-f83e78f73724-bound-sa-token\") pod \"image-registry-66df7c8f76-8v6mm\" (UID: \"03f83dfc-2fc4-4d6b-8500-f83e78f73724\") " pod="openshift-image-registry/image-registry-66df7c8f76-8v6mm" Dec 10 12:21:13 crc kubenswrapper[4689]: I1210 12:21:13.674496 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/03f83dfc-2fc4-4d6b-8500-f83e78f73724-installation-pull-secrets\") pod \"image-registry-66df7c8f76-8v6mm\" (UID: \"03f83dfc-2fc4-4d6b-8500-f83e78f73724\") " pod="openshift-image-registry/image-registry-66df7c8f76-8v6mm" Dec 10 12:21:13 crc kubenswrapper[4689]: I1210 12:21:13.674538 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sbpqp\" (UniqueName: \"kubernetes.io/projected/03f83dfc-2fc4-4d6b-8500-f83e78f73724-kube-api-access-sbpqp\") pod \"image-registry-66df7c8f76-8v6mm\" (UID: \"03f83dfc-2fc4-4d6b-8500-f83e78f73724\") " pod="openshift-image-registry/image-registry-66df7c8f76-8v6mm" Dec 10 12:21:13 crc kubenswrapper[4689]: I1210 12:21:13.674559 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/03f83dfc-2fc4-4d6b-8500-f83e78f73724-trusted-ca\") pod \"image-registry-66df7c8f76-8v6mm\" (UID: \"03f83dfc-2fc4-4d6b-8500-f83e78f73724\") " pod="openshift-image-registry/image-registry-66df7c8f76-8v6mm" Dec 10 12:21:13 crc kubenswrapper[4689]: I1210 
12:21:13.674578 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/03f83dfc-2fc4-4d6b-8500-f83e78f73724-registry-tls\") pod \"image-registry-66df7c8f76-8v6mm\" (UID: \"03f83dfc-2fc4-4d6b-8500-f83e78f73724\") " pod="openshift-image-registry/image-registry-66df7c8f76-8v6mm" Dec 10 12:21:13 crc kubenswrapper[4689]: I1210 12:21:13.676288 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/03f83dfc-2fc4-4d6b-8500-f83e78f73724-registry-certificates\") pod \"image-registry-66df7c8f76-8v6mm\" (UID: \"03f83dfc-2fc4-4d6b-8500-f83e78f73724\") " pod="openshift-image-registry/image-registry-66df7c8f76-8v6mm" Dec 10 12:21:13 crc kubenswrapper[4689]: I1210 12:21:13.676941 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/03f83dfc-2fc4-4d6b-8500-f83e78f73724-trusted-ca\") pod \"image-registry-66df7c8f76-8v6mm\" (UID: \"03f83dfc-2fc4-4d6b-8500-f83e78f73724\") " pod="openshift-image-registry/image-registry-66df7c8f76-8v6mm" Dec 10 12:21:13 crc kubenswrapper[4689]: I1210 12:21:13.677034 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/03f83dfc-2fc4-4d6b-8500-f83e78f73724-ca-trust-extracted\") pod \"image-registry-66df7c8f76-8v6mm\" (UID: \"03f83dfc-2fc4-4d6b-8500-f83e78f73724\") " pod="openshift-image-registry/image-registry-66df7c8f76-8v6mm" Dec 10 12:21:13 crc kubenswrapper[4689]: I1210 12:21:13.677384 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/03f83dfc-2fc4-4d6b-8500-f83e78f73724-ca-trust-extracted\") pod \"image-registry-66df7c8f76-8v6mm\" (UID: \"03f83dfc-2fc4-4d6b-8500-f83e78f73724\") " pod="openshift-image-registry/image-registry-66df7c8f76-8v6mm" Dec 10 12:21:13 crc kubenswrapper[4689]: I1210 12:21:13.680932 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/03f83dfc-2fc4-4d6b-8500-f83e78f73724-installation-pull-secrets\") pod \"image-registry-66df7c8f76-8v6mm\" (UID: \"03f83dfc-2fc4-4d6b-8500-f83e78f73724\") " pod="openshift-image-registry/image-registry-66df7c8f76-8v6mm" Dec 10 12:21:13 crc kubenswrapper[4689]: I1210 12:21:13.683808 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/03f83dfc-2fc4-4d6b-8500-f83e78f73724-registry-tls\") pod \"image-registry-66df7c8f76-8v6mm\" (UID: \"03f83dfc-2fc4-4d6b-8500-f83e78f73724\") " pod="openshift-image-registry/image-registry-66df7c8f76-8v6mm" Dec 10 12:21:13 crc kubenswrapper[4689]: I1210 12:21:13.690149 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/03f83dfc-2fc4-4d6b-8500-f83e78f73724-bound-sa-token\") pod \"image-registry-66df7c8f76-8v6mm\" (UID: \"03f83dfc-2fc4-4d6b-8500-f83e78f73724\") " pod="openshift-image-registry/image-registry-66df7c8f76-8v6mm" Dec 10 12:21:13 crc kubenswrapper[4689]: I1210 12:21:13.696723 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sbpqp\" (UniqueName: \"kubernetes.io/projected/03f83dfc-2fc4-4d6b-8500-f83e78f73724-kube-api-access-sbpqp\") pod \"image-registry-66df7c8f76-8v6mm\" (UID: 
\"03f83dfc-2fc4-4d6b-8500-f83e78f73724\") " pod="openshift-image-registry/image-registry-66df7c8f76-8v6mm" Dec 10 12:21:13 crc kubenswrapper[4689]: I1210 12:21:13.755319 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-8v6mm" Dec 10 12:21:14 crc kubenswrapper[4689]: I1210 12:21:14.170859 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-8v6mm"] Dec 10 12:21:14 crc kubenswrapper[4689]: W1210 12:21:14.175708 4689 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod03f83dfc_2fc4_4d6b_8500_f83e78f73724.slice/crio-b3f8be3fa32e146f61b8ae18e6d76835fe552d9b48e4026d9c52a6bbdbf41ab8 WatchSource:0}: Error finding container b3f8be3fa32e146f61b8ae18e6d76835fe552d9b48e4026d9c52a6bbdbf41ab8: Status 404 returned error can't find the container with id b3f8be3fa32e146f61b8ae18e6d76835fe552d9b48e4026d9c52a6bbdbf41ab8 Dec 10 12:21:14 crc kubenswrapper[4689]: I1210 12:21:14.538304 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-8v6mm" event={"ID":"03f83dfc-2fc4-4d6b-8500-f83e78f73724","Type":"ContainerStarted","Data":"42439f52dd8594fb98f5dc29830856e23943a00f949173669120c5d48588a26b"} Dec 10 12:21:14 crc kubenswrapper[4689]: I1210 12:21:14.538785 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-66df7c8f76-8v6mm" Dec 10 12:21:14 crc kubenswrapper[4689]: I1210 12:21:14.538812 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-8v6mm" event={"ID":"03f83dfc-2fc4-4d6b-8500-f83e78f73724","Type":"ContainerStarted","Data":"b3f8be3fa32e146f61b8ae18e6d76835fe552d9b48e4026d9c52a6bbdbf41ab8"} Dec 10 12:21:14 crc kubenswrapper[4689]: I1210 12:21:14.568021 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66df7c8f76-8v6mm" podStartSLOduration=1.567937039 podStartE2EDuration="1.567937039s" podCreationTimestamp="2025-12-10 12:21:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 12:21:14.565506696 +0000 UTC m=+342.353587894" watchObservedRunningTime="2025-12-10 12:21:14.567937039 +0000 UTC m=+342.356018187" Dec 10 12:21:20 crc kubenswrapper[4689]: I1210 12:21:20.768730 4689 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-85rm9"] Dec 10 12:21:20 crc kubenswrapper[4689]: I1210 12:21:20.769517 4689 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-85rm9" podUID="2d87c861-5bb2-4f79-8959-b2cebc28156d" containerName="registry-server" containerID="cri-o://4a29722948055d63dab5417640525cc61e022fe47c9d76cae94afadbf84565c1" gracePeriod=2 Dec 10 12:21:21 crc kubenswrapper[4689]: I1210 12:21:21.207589 4689 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-85rm9" Dec 10 12:21:21 crc kubenswrapper[4689]: I1210 12:21:21.286266 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2d87c861-5bb2-4f79-8959-b2cebc28156d-catalog-content\") pod \"2d87c861-5bb2-4f79-8959-b2cebc28156d\" (UID: \"2d87c861-5bb2-4f79-8959-b2cebc28156d\") " Dec 10 12:21:21 crc kubenswrapper[4689]: I1210 12:21:21.286425 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2d87c861-5bb2-4f79-8959-b2cebc28156d-utilities\") pod \"2d87c861-5bb2-4f79-8959-b2cebc28156d\" (UID: \"2d87c861-5bb2-4f79-8959-b2cebc28156d\") " Dec 10 12:21:21 crc kubenswrapper[4689]: I1210 12:21:21.286469 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tknxs\" (UniqueName: \"kubernetes.io/projected/2d87c861-5bb2-4f79-8959-b2cebc28156d-kube-api-access-tknxs\") pod \"2d87c861-5bb2-4f79-8959-b2cebc28156d\" (UID: \"2d87c861-5bb2-4f79-8959-b2cebc28156d\") " Dec 10 12:21:21 crc kubenswrapper[4689]: I1210 12:21:21.287534 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2d87c861-5bb2-4f79-8959-b2cebc28156d-utilities" (OuterVolumeSpecName: "utilities") pod "2d87c861-5bb2-4f79-8959-b2cebc28156d" (UID: "2d87c861-5bb2-4f79-8959-b2cebc28156d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 10 12:21:21 crc kubenswrapper[4689]: I1210 12:21:21.291632 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2d87c861-5bb2-4f79-8959-b2cebc28156d-kube-api-access-tknxs" (OuterVolumeSpecName: "kube-api-access-tknxs") pod "2d87c861-5bb2-4f79-8959-b2cebc28156d" (UID: "2d87c861-5bb2-4f79-8959-b2cebc28156d"). InnerVolumeSpecName "kube-api-access-tknxs". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 12:21:21 crc kubenswrapper[4689]: I1210 12:21:21.353306 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2d87c861-5bb2-4f79-8959-b2cebc28156d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2d87c861-5bb2-4f79-8959-b2cebc28156d" (UID: "2d87c861-5bb2-4f79-8959-b2cebc28156d"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 10 12:21:21 crc kubenswrapper[4689]: I1210 12:21:21.388135 4689 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2d87c861-5bb2-4f79-8959-b2cebc28156d-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 10 12:21:21 crc kubenswrapper[4689]: I1210 12:21:21.388167 4689 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2d87c861-5bb2-4f79-8959-b2cebc28156d-utilities\") on node \"crc\" DevicePath \"\"" Dec 10 12:21:21 crc kubenswrapper[4689]: I1210 12:21:21.388179 4689 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tknxs\" (UniqueName: \"kubernetes.io/projected/2d87c861-5bb2-4f79-8959-b2cebc28156d-kube-api-access-tknxs\") on node \"crc\" DevicePath \"\"" Dec 10 12:21:21 crc kubenswrapper[4689]: I1210 12:21:21.587729 4689 generic.go:334] "Generic (PLEG): container finished" podID="2d87c861-5bb2-4f79-8959-b2cebc28156d" containerID="4a29722948055d63dab5417640525cc61e022fe47c9d76cae94afadbf84565c1" exitCode=0 Dec 10 12:21:21 crc kubenswrapper[4689]: I1210 12:21:21.587779 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-85rm9" event={"ID":"2d87c861-5bb2-4f79-8959-b2cebc28156d","Type":"ContainerDied","Data":"4a29722948055d63dab5417640525cc61e022fe47c9d76cae94afadbf84565c1"} Dec 10 12:21:21 crc kubenswrapper[4689]: I1210 12:21:21.587837 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-85rm9" event={"ID":"2d87c861-5bb2-4f79-8959-b2cebc28156d","Type":"ContainerDied","Data":"2686e129fd9e5dcc29247adfd1ed0f8778af60e5a30f76cf45f97ca92fb32b25"} Dec 10 12:21:21 crc kubenswrapper[4689]: I1210 12:21:21.587845 4689 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-85rm9" Dec 10 12:21:21 crc kubenswrapper[4689]: I1210 12:21:21.587860 4689 scope.go:117] "RemoveContainer" containerID="4a29722948055d63dab5417640525cc61e022fe47c9d76cae94afadbf84565c1" Dec 10 12:21:21 crc kubenswrapper[4689]: I1210 12:21:21.608684 4689 scope.go:117] "RemoveContainer" containerID="8f9c1bb135c9d1dc186e824e391fbe8175124ebab3ca118d7f460031fae6cb3b" Dec 10 12:21:21 crc kubenswrapper[4689]: I1210 12:21:21.630443 4689 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-85rm9"] Dec 10 12:21:21 crc kubenswrapper[4689]: I1210 12:21:21.636077 4689 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-85rm9"] Dec 10 12:21:21 crc kubenswrapper[4689]: I1210 12:21:21.648101 4689 scope.go:117] "RemoveContainer" containerID="46d2e3166a55adf8a416e4d84daf987c0a52189f2858d28a329073a726b583a4" Dec 10 12:21:21 crc kubenswrapper[4689]: I1210 12:21:21.675698 4689 scope.go:117] "RemoveContainer" containerID="4a29722948055d63dab5417640525cc61e022fe47c9d76cae94afadbf84565c1" Dec 10 12:21:21 crc kubenswrapper[4689]: E1210 12:21:21.676817 4689 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4a29722948055d63dab5417640525cc61e022fe47c9d76cae94afadbf84565c1\": container with ID starting with 4a29722948055d63dab5417640525cc61e022fe47c9d76cae94afadbf84565c1 not found: ID does not exist" containerID="4a29722948055d63dab5417640525cc61e022fe47c9d76cae94afadbf84565c1" Dec 10 12:21:21 crc kubenswrapper[4689]: I1210 12:21:21.677182 4689 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4a29722948055d63dab5417640525cc61e022fe47c9d76cae94afadbf84565c1"} err="failed to get container status \"4a29722948055d63dab5417640525cc61e022fe47c9d76cae94afadbf84565c1\": rpc error: code = NotFound desc = could not find container \"4a29722948055d63dab5417640525cc61e022fe47c9d76cae94afadbf84565c1\": container with ID starting with 4a29722948055d63dab5417640525cc61e022fe47c9d76cae94afadbf84565c1 not found: ID does not exist" Dec 10 12:21:21 crc kubenswrapper[4689]: I1210 12:21:21.677357 4689 scope.go:117] "RemoveContainer" containerID="8f9c1bb135c9d1dc186e824e391fbe8175124ebab3ca118d7f460031fae6cb3b" Dec 10 12:21:21 crc kubenswrapper[4689]: E1210 12:21:21.677924 4689 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8f9c1bb135c9d1dc186e824e391fbe8175124ebab3ca118d7f460031fae6cb3b\": container with ID starting with 8f9c1bb135c9d1dc186e824e391fbe8175124ebab3ca118d7f460031fae6cb3b not found: ID does not exist" containerID="8f9c1bb135c9d1dc186e824e391fbe8175124ebab3ca118d7f460031fae6cb3b" Dec 10 12:21:21 crc kubenswrapper[4689]: I1210 12:21:21.677954 4689 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8f9c1bb135c9d1dc186e824e391fbe8175124ebab3ca118d7f460031fae6cb3b"} err="failed to get container status \"8f9c1bb135c9d1dc186e824e391fbe8175124ebab3ca118d7f460031fae6cb3b\": rpc error: code = NotFound desc = could not find container \"8f9c1bb135c9d1dc186e824e391fbe8175124ebab3ca118d7f460031fae6cb3b\": container with ID starting with 8f9c1bb135c9d1dc186e824e391fbe8175124ebab3ca118d7f460031fae6cb3b not found: ID does not exist" Dec 10 12:21:21 crc kubenswrapper[4689]: I1210 12:21:21.677996 4689 scope.go:117] "RemoveContainer" 
containerID="46d2e3166a55adf8a416e4d84daf987c0a52189f2858d28a329073a726b583a4" Dec 10 12:21:21 crc kubenswrapper[4689]: E1210 12:21:21.678435 4689 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"46d2e3166a55adf8a416e4d84daf987c0a52189f2858d28a329073a726b583a4\": container with ID starting with 46d2e3166a55adf8a416e4d84daf987c0a52189f2858d28a329073a726b583a4 not found: ID does not exist" containerID="46d2e3166a55adf8a416e4d84daf987c0a52189f2858d28a329073a726b583a4" Dec 10 12:21:21 crc kubenswrapper[4689]: I1210 12:21:21.678604 4689 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"46d2e3166a55adf8a416e4d84daf987c0a52189f2858d28a329073a726b583a4"} err="failed to get container status \"46d2e3166a55adf8a416e4d84daf987c0a52189f2858d28a329073a726b583a4\": rpc error: code = NotFound desc = could not find container \"46d2e3166a55adf8a416e4d84daf987c0a52189f2858d28a329073a726b583a4\": container with ID starting with 46d2e3166a55adf8a416e4d84daf987c0a52189f2858d28a329073a726b583a4 not found: ID does not exist" Dec 10 12:21:22 crc kubenswrapper[4689]: I1210 12:21:22.257187 4689 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7685dbf6b7-cn452"] Dec 10 12:21:22 crc kubenswrapper[4689]: I1210 12:21:22.258959 4689 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-7685dbf6b7-cn452" podUID="c0ba491f-6b7f-4f09-84ab-e1227f689d45" containerName="route-controller-manager" containerID="cri-o://8ec0dd490205a8cbdb0b53dec6600475a092025d9e69595b98fc16f3d080be21" gracePeriod=30 Dec 10 12:21:22 crc kubenswrapper[4689]: I1210 12:21:22.504690 4689 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2d87c861-5bb2-4f79-8959-b2cebc28156d" path="/var/lib/kubelet/pods/2d87c861-5bb2-4f79-8959-b2cebc28156d/volumes" Dec 10 12:21:22 crc kubenswrapper[4689]: I1210 12:21:22.560939 4689 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-727gf"] Dec 10 12:21:22 crc kubenswrapper[4689]: I1210 12:21:22.561665 4689 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-727gf" podUID="fd6edc95-9b5d-4438-b427-fd07f62090b7" containerName="registry-server" containerID="cri-o://4cf68a0dcaf92c2a219f9bbf7eeef07a8476563e9ca04275674db3a7d3e2284d" gracePeriod=2 Dec 10 12:21:22 crc kubenswrapper[4689]: I1210 12:21:22.595771 4689 generic.go:334] "Generic (PLEG): container finished" podID="c0ba491f-6b7f-4f09-84ab-e1227f689d45" containerID="8ec0dd490205a8cbdb0b53dec6600475a092025d9e69595b98fc16f3d080be21" exitCode=0 Dec 10 12:21:22 crc kubenswrapper[4689]: I1210 12:21:22.595867 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7685dbf6b7-cn452" event={"ID":"c0ba491f-6b7f-4f09-84ab-e1227f689d45","Type":"ContainerDied","Data":"8ec0dd490205a8cbdb0b53dec6600475a092025d9e69595b98fc16f3d080be21"} Dec 10 12:21:22 crc kubenswrapper[4689]: I1210 12:21:22.701757 4689 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7685dbf6b7-cn452" Dec 10 12:21:22 crc kubenswrapper[4689]: I1210 12:21:22.809539 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c0ba491f-6b7f-4f09-84ab-e1227f689d45-config\") pod \"c0ba491f-6b7f-4f09-84ab-e1227f689d45\" (UID: \"c0ba491f-6b7f-4f09-84ab-e1227f689d45\") " Dec 10 12:21:22 crc kubenswrapper[4689]: I1210 12:21:22.809645 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lhkq2\" (UniqueName: \"kubernetes.io/projected/c0ba491f-6b7f-4f09-84ab-e1227f689d45-kube-api-access-lhkq2\") pod \"c0ba491f-6b7f-4f09-84ab-e1227f689d45\" (UID: \"c0ba491f-6b7f-4f09-84ab-e1227f689d45\") " Dec 10 12:21:22 crc kubenswrapper[4689]: I1210 12:21:22.809665 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c0ba491f-6b7f-4f09-84ab-e1227f689d45-client-ca\") pod \"c0ba491f-6b7f-4f09-84ab-e1227f689d45\" (UID: \"c0ba491f-6b7f-4f09-84ab-e1227f689d45\") " Dec 10 12:21:22 crc kubenswrapper[4689]: I1210 12:21:22.809714 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c0ba491f-6b7f-4f09-84ab-e1227f689d45-serving-cert\") pod \"c0ba491f-6b7f-4f09-84ab-e1227f689d45\" (UID: \"c0ba491f-6b7f-4f09-84ab-e1227f689d45\") " Dec 10 12:21:22 crc kubenswrapper[4689]: I1210 12:21:22.810408 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c0ba491f-6b7f-4f09-84ab-e1227f689d45-client-ca" (OuterVolumeSpecName: "client-ca") pod "c0ba491f-6b7f-4f09-84ab-e1227f689d45" (UID: "c0ba491f-6b7f-4f09-84ab-e1227f689d45"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 12:21:22 crc kubenswrapper[4689]: I1210 12:21:22.810872 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c0ba491f-6b7f-4f09-84ab-e1227f689d45-config" (OuterVolumeSpecName: "config") pod "c0ba491f-6b7f-4f09-84ab-e1227f689d45" (UID: "c0ba491f-6b7f-4f09-84ab-e1227f689d45"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 12:21:22 crc kubenswrapper[4689]: I1210 12:21:22.816193 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c0ba491f-6b7f-4f09-84ab-e1227f689d45-kube-api-access-lhkq2" (OuterVolumeSpecName: "kube-api-access-lhkq2") pod "c0ba491f-6b7f-4f09-84ab-e1227f689d45" (UID: "c0ba491f-6b7f-4f09-84ab-e1227f689d45"). InnerVolumeSpecName "kube-api-access-lhkq2". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 12:21:22 crc kubenswrapper[4689]: I1210 12:21:22.823186 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c0ba491f-6b7f-4f09-84ab-e1227f689d45-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "c0ba491f-6b7f-4f09-84ab-e1227f689d45" (UID: "c0ba491f-6b7f-4f09-84ab-e1227f689d45"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 12:21:22 crc kubenswrapper[4689]: I1210 12:21:22.910858 4689 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c0ba491f-6b7f-4f09-84ab-e1227f689d45-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 10 12:21:22 crc kubenswrapper[4689]: I1210 12:21:22.910908 4689 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c0ba491f-6b7f-4f09-84ab-e1227f689d45-config\") on node \"crc\" DevicePath \"\"" Dec 10 12:21:22 crc kubenswrapper[4689]: I1210 12:21:22.910929 4689 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c0ba491f-6b7f-4f09-84ab-e1227f689d45-client-ca\") on node \"crc\" DevicePath \"\"" Dec 10 12:21:22 crc kubenswrapper[4689]: I1210 12:21:22.910948 4689 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lhkq2\" (UniqueName: \"kubernetes.io/projected/c0ba491f-6b7f-4f09-84ab-e1227f689d45-kube-api-access-lhkq2\") on node \"crc\" DevicePath \"\"" Dec 10 12:21:23 crc kubenswrapper[4689]: I1210 12:21:23.008339 4689 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-727gf" Dec 10 12:21:23 crc kubenswrapper[4689]: I1210 12:21:23.113444 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c8scv\" (UniqueName: \"kubernetes.io/projected/fd6edc95-9b5d-4438-b427-fd07f62090b7-kube-api-access-c8scv\") pod \"fd6edc95-9b5d-4438-b427-fd07f62090b7\" (UID: \"fd6edc95-9b5d-4438-b427-fd07f62090b7\") " Dec 10 12:21:23 crc kubenswrapper[4689]: I1210 12:21:23.113568 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fd6edc95-9b5d-4438-b427-fd07f62090b7-utilities\") pod \"fd6edc95-9b5d-4438-b427-fd07f62090b7\" (UID: \"fd6edc95-9b5d-4438-b427-fd07f62090b7\") " Dec 10 12:21:23 crc kubenswrapper[4689]: I1210 12:21:23.113614 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fd6edc95-9b5d-4438-b427-fd07f62090b7-catalog-content\") pod \"fd6edc95-9b5d-4438-b427-fd07f62090b7\" (UID: \"fd6edc95-9b5d-4438-b427-fd07f62090b7\") " Dec 10 12:21:23 crc kubenswrapper[4689]: I1210 12:21:23.114770 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fd6edc95-9b5d-4438-b427-fd07f62090b7-utilities" (OuterVolumeSpecName: "utilities") pod "fd6edc95-9b5d-4438-b427-fd07f62090b7" (UID: "fd6edc95-9b5d-4438-b427-fd07f62090b7"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 10 12:21:23 crc kubenswrapper[4689]: I1210 12:21:23.122288 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fd6edc95-9b5d-4438-b427-fd07f62090b7-kube-api-access-c8scv" (OuterVolumeSpecName: "kube-api-access-c8scv") pod "fd6edc95-9b5d-4438-b427-fd07f62090b7" (UID: "fd6edc95-9b5d-4438-b427-fd07f62090b7"). InnerVolumeSpecName "kube-api-access-c8scv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 12:21:23 crc kubenswrapper[4689]: I1210 12:21:23.138865 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fd6edc95-9b5d-4438-b427-fd07f62090b7-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "fd6edc95-9b5d-4438-b427-fd07f62090b7" (UID: "fd6edc95-9b5d-4438-b427-fd07f62090b7"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 10 12:21:23 crc kubenswrapper[4689]: I1210 12:21:23.164950 4689 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-28k8w"] Dec 10 12:21:23 crc kubenswrapper[4689]: I1210 12:21:23.165195 4689 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-28k8w" podUID="f0bf5778-7d94-4e68-8e6f-308482f98351" containerName="registry-server" containerID="cri-o://f0dcf96fdf98490b58c1555682c21f58b18746242f0394c9ec98be6fa10c2ee1" gracePeriod=2 Dec 10 12:21:23 crc kubenswrapper[4689]: I1210 12:21:23.214661 4689 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c8scv\" (UniqueName: \"kubernetes.io/projected/fd6edc95-9b5d-4438-b427-fd07f62090b7-kube-api-access-c8scv\") on node \"crc\" DevicePath \"\"" Dec 10 12:21:23 crc kubenswrapper[4689]: I1210 12:21:23.214694 4689 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fd6edc95-9b5d-4438-b427-fd07f62090b7-utilities\") on node \"crc\" DevicePath \"\"" Dec 10 12:21:23 crc kubenswrapper[4689]: I1210 12:21:23.214703 4689 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fd6edc95-9b5d-4438-b427-fd07f62090b7-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 10 12:21:23 crc kubenswrapper[4689]: I1210 12:21:23.521323 4689 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-28k8w" Dec 10 12:21:23 crc kubenswrapper[4689]: I1210 12:21:23.604571 4689 generic.go:334] "Generic (PLEG): container finished" podID="f0bf5778-7d94-4e68-8e6f-308482f98351" containerID="f0dcf96fdf98490b58c1555682c21f58b18746242f0394c9ec98be6fa10c2ee1" exitCode=0 Dec 10 12:21:23 crc kubenswrapper[4689]: I1210 12:21:23.604624 4689 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-28k8w" Dec 10 12:21:23 crc kubenswrapper[4689]: I1210 12:21:23.604622 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-28k8w" event={"ID":"f0bf5778-7d94-4e68-8e6f-308482f98351","Type":"ContainerDied","Data":"f0dcf96fdf98490b58c1555682c21f58b18746242f0394c9ec98be6fa10c2ee1"} Dec 10 12:21:23 crc kubenswrapper[4689]: I1210 12:21:23.604682 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-28k8w" event={"ID":"f0bf5778-7d94-4e68-8e6f-308482f98351","Type":"ContainerDied","Data":"fd1efacf0e69ab8ca75a3bdbfd33e7f0f44d97c5891445efb5ab99d0cde8ac7c"} Dec 10 12:21:23 crc kubenswrapper[4689]: I1210 12:21:23.604706 4689 scope.go:117] "RemoveContainer" containerID="f0dcf96fdf98490b58c1555682c21f58b18746242f0394c9ec98be6fa10c2ee1" Dec 10 12:21:23 crc kubenswrapper[4689]: I1210 12:21:23.606234 4689 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7685dbf6b7-cn452" Dec 10 12:21:23 crc kubenswrapper[4689]: I1210 12:21:23.607307 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7685dbf6b7-cn452" event={"ID":"c0ba491f-6b7f-4f09-84ab-e1227f689d45","Type":"ContainerDied","Data":"68b528c2075638ee1fb72a37d64b715c300a3d4d6f4c01586c8680a5511fd2ee"} Dec 10 12:21:23 crc kubenswrapper[4689]: I1210 12:21:23.609783 4689 generic.go:334] "Generic (PLEG): container finished" podID="fd6edc95-9b5d-4438-b427-fd07f62090b7" containerID="4cf68a0dcaf92c2a219f9bbf7eeef07a8476563e9ca04275674db3a7d3e2284d" exitCode=0 Dec 10 12:21:23 crc kubenswrapper[4689]: I1210 12:21:23.609813 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-727gf" event={"ID":"fd6edc95-9b5d-4438-b427-fd07f62090b7","Type":"ContainerDied","Data":"4cf68a0dcaf92c2a219f9bbf7eeef07a8476563e9ca04275674db3a7d3e2284d"} Dec 10 12:21:23 crc kubenswrapper[4689]: I1210 12:21:23.609835 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-727gf" event={"ID":"fd6edc95-9b5d-4438-b427-fd07f62090b7","Type":"ContainerDied","Data":"5ba89b7114a5390f038484104f82936b61dea0e0aa93fd26a81ad16ca06fc7aa"} Dec 10 12:21:23 crc kubenswrapper[4689]: I1210 12:21:23.609850 4689 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-727gf" Dec 10 12:21:23 crc kubenswrapper[4689]: I1210 12:21:23.618677 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f0bf5778-7d94-4e68-8e6f-308482f98351-utilities\") pod \"f0bf5778-7d94-4e68-8e6f-308482f98351\" (UID: \"f0bf5778-7d94-4e68-8e6f-308482f98351\") " Dec 10 12:21:23 crc kubenswrapper[4689]: I1210 12:21:23.618729 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k6b2s\" (UniqueName: \"kubernetes.io/projected/f0bf5778-7d94-4e68-8e6f-308482f98351-kube-api-access-k6b2s\") pod \"f0bf5778-7d94-4e68-8e6f-308482f98351\" (UID: \"f0bf5778-7d94-4e68-8e6f-308482f98351\") " Dec 10 12:21:23 crc kubenswrapper[4689]: I1210 12:21:23.618750 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f0bf5778-7d94-4e68-8e6f-308482f98351-catalog-content\") pod \"f0bf5778-7d94-4e68-8e6f-308482f98351\" (UID: \"f0bf5778-7d94-4e68-8e6f-308482f98351\") " Dec 10 12:21:23 crc kubenswrapper[4689]: I1210 12:21:23.619394 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f0bf5778-7d94-4e68-8e6f-308482f98351-utilities" (OuterVolumeSpecName: "utilities") pod "f0bf5778-7d94-4e68-8e6f-308482f98351" (UID: "f0bf5778-7d94-4e68-8e6f-308482f98351"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 10 12:21:23 crc kubenswrapper[4689]: I1210 12:21:23.621044 4689 scope.go:117] "RemoveContainer" containerID="8dab37f45edfb8a50ef1bce5b26807b179dafcdd2679b30b0e2fcecb4342d9ea" Dec 10 12:21:23 crc kubenswrapper[4689]: I1210 12:21:23.622591 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f0bf5778-7d94-4e68-8e6f-308482f98351-kube-api-access-k6b2s" (OuterVolumeSpecName: "kube-api-access-k6b2s") pod "f0bf5778-7d94-4e68-8e6f-308482f98351" (UID: "f0bf5778-7d94-4e68-8e6f-308482f98351"). InnerVolumeSpecName "kube-api-access-k6b2s". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 12:21:23 crc kubenswrapper[4689]: I1210 12:21:23.632612 4689 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7685dbf6b7-cn452"] Dec 10 12:21:23 crc kubenswrapper[4689]: I1210 12:21:23.645100 4689 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7685dbf6b7-cn452"] Dec 10 12:21:23 crc kubenswrapper[4689]: I1210 12:21:23.650933 4689 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-727gf"] Dec 10 12:21:23 crc kubenswrapper[4689]: I1210 12:21:23.656744 4689 scope.go:117] "RemoveContainer" containerID="4a7ec41f0d24401ab33f58f2bee07a5fa44879b82bb6209ca294b37a085c7748" Dec 10 12:21:23 crc kubenswrapper[4689]: I1210 12:21:23.658157 4689 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-727gf"] Dec 10 12:21:23 crc kubenswrapper[4689]: I1210 12:21:23.668761 4689 scope.go:117] "RemoveContainer" containerID="f0dcf96fdf98490b58c1555682c21f58b18746242f0394c9ec98be6fa10c2ee1" Dec 10 12:21:23 crc kubenswrapper[4689]: E1210 12:21:23.669130 4689 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f0dcf96fdf98490b58c1555682c21f58b18746242f0394c9ec98be6fa10c2ee1\": container with ID starting with f0dcf96fdf98490b58c1555682c21f58b18746242f0394c9ec98be6fa10c2ee1 not found: ID does not exist" containerID="f0dcf96fdf98490b58c1555682c21f58b18746242f0394c9ec98be6fa10c2ee1" Dec 10 12:21:23 crc kubenswrapper[4689]: I1210 12:21:23.669173 4689 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f0dcf96fdf98490b58c1555682c21f58b18746242f0394c9ec98be6fa10c2ee1"} err="failed to get container status \"f0dcf96fdf98490b58c1555682c21f58b18746242f0394c9ec98be6fa10c2ee1\": rpc error: code = NotFound desc = could not find container \"f0dcf96fdf98490b58c1555682c21f58b18746242f0394c9ec98be6fa10c2ee1\": container with ID starting with f0dcf96fdf98490b58c1555682c21f58b18746242f0394c9ec98be6fa10c2ee1 not found: ID does not exist" Dec 10 12:21:23 crc kubenswrapper[4689]: I1210 12:21:23.669204 4689 scope.go:117] "RemoveContainer" containerID="8dab37f45edfb8a50ef1bce5b26807b179dafcdd2679b30b0e2fcecb4342d9ea" Dec 10 12:21:23 crc kubenswrapper[4689]: E1210 12:21:23.669596 4689 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8dab37f45edfb8a50ef1bce5b26807b179dafcdd2679b30b0e2fcecb4342d9ea\": container with ID starting with 8dab37f45edfb8a50ef1bce5b26807b179dafcdd2679b30b0e2fcecb4342d9ea not found: ID does not exist" containerID="8dab37f45edfb8a50ef1bce5b26807b179dafcdd2679b30b0e2fcecb4342d9ea" Dec 10 12:21:23 crc kubenswrapper[4689]: I1210 
12:21:23.669625 4689 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8dab37f45edfb8a50ef1bce5b26807b179dafcdd2679b30b0e2fcecb4342d9ea"} err="failed to get container status \"8dab37f45edfb8a50ef1bce5b26807b179dafcdd2679b30b0e2fcecb4342d9ea\": rpc error: code = NotFound desc = could not find container \"8dab37f45edfb8a50ef1bce5b26807b179dafcdd2679b30b0e2fcecb4342d9ea\": container with ID starting with 8dab37f45edfb8a50ef1bce5b26807b179dafcdd2679b30b0e2fcecb4342d9ea not found: ID does not exist" Dec 10 12:21:23 crc kubenswrapper[4689]: I1210 12:21:23.669653 4689 scope.go:117] "RemoveContainer" containerID="4a7ec41f0d24401ab33f58f2bee07a5fa44879b82bb6209ca294b37a085c7748" Dec 10 12:21:23 crc kubenswrapper[4689]: E1210 12:21:23.669967 4689 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4a7ec41f0d24401ab33f58f2bee07a5fa44879b82bb6209ca294b37a085c7748\": container with ID starting with 4a7ec41f0d24401ab33f58f2bee07a5fa44879b82bb6209ca294b37a085c7748 not found: ID does not exist" containerID="4a7ec41f0d24401ab33f58f2bee07a5fa44879b82bb6209ca294b37a085c7748" Dec 10 12:21:23 crc kubenswrapper[4689]: I1210 12:21:23.670022 4689 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4a7ec41f0d24401ab33f58f2bee07a5fa44879b82bb6209ca294b37a085c7748"} err="failed to get container status \"4a7ec41f0d24401ab33f58f2bee07a5fa44879b82bb6209ca294b37a085c7748\": rpc error: code = NotFound desc = could not find container \"4a7ec41f0d24401ab33f58f2bee07a5fa44879b82bb6209ca294b37a085c7748\": container with ID starting with 4a7ec41f0d24401ab33f58f2bee07a5fa44879b82bb6209ca294b37a085c7748 not found: ID does not exist" Dec 10 12:21:23 crc kubenswrapper[4689]: I1210 12:21:23.670048 4689 scope.go:117] "RemoveContainer" containerID="8ec0dd490205a8cbdb0b53dec6600475a092025d9e69595b98fc16f3d080be21" Dec 10 12:21:23 crc kubenswrapper[4689]: I1210 12:21:23.683224 4689 scope.go:117] "RemoveContainer" containerID="4cf68a0dcaf92c2a219f9bbf7eeef07a8476563e9ca04275674db3a7d3e2284d" Dec 10 12:21:23 crc kubenswrapper[4689]: I1210 12:21:23.695955 4689 scope.go:117] "RemoveContainer" containerID="d78aea8e6fa05933eb70508746afb0a58e8832ae9c7035ce95b75c8184a32821" Dec 10 12:21:23 crc kubenswrapper[4689]: I1210 12:21:23.709297 4689 scope.go:117] "RemoveContainer" containerID="00a72fb576e629a0d1dab2421b907c3e15d5728dfd433d6fe2f2f4e3299b9955" Dec 10 12:21:23 crc kubenswrapper[4689]: I1210 12:21:23.719711 4689 scope.go:117] "RemoveContainer" containerID="4cf68a0dcaf92c2a219f9bbf7eeef07a8476563e9ca04275674db3a7d3e2284d" Dec 10 12:21:23 crc kubenswrapper[4689]: I1210 12:21:23.720275 4689 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k6b2s\" (UniqueName: \"kubernetes.io/projected/f0bf5778-7d94-4e68-8e6f-308482f98351-kube-api-access-k6b2s\") on node \"crc\" DevicePath \"\"" Dec 10 12:21:23 crc kubenswrapper[4689]: I1210 12:21:23.720299 4689 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f0bf5778-7d94-4e68-8e6f-308482f98351-utilities\") on node \"crc\" DevicePath \"\"" Dec 10 12:21:23 crc kubenswrapper[4689]: E1210 12:21:23.720336 4689 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4cf68a0dcaf92c2a219f9bbf7eeef07a8476563e9ca04275674db3a7d3e2284d\": container with ID starting with 
4cf68a0dcaf92c2a219f9bbf7eeef07a8476563e9ca04275674db3a7d3e2284d not found: ID does not exist" containerID="4cf68a0dcaf92c2a219f9bbf7eeef07a8476563e9ca04275674db3a7d3e2284d" Dec 10 12:21:23 crc kubenswrapper[4689]: I1210 12:21:23.720414 4689 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4cf68a0dcaf92c2a219f9bbf7eeef07a8476563e9ca04275674db3a7d3e2284d"} err="failed to get container status \"4cf68a0dcaf92c2a219f9bbf7eeef07a8476563e9ca04275674db3a7d3e2284d\": rpc error: code = NotFound desc = could not find container \"4cf68a0dcaf92c2a219f9bbf7eeef07a8476563e9ca04275674db3a7d3e2284d\": container with ID starting with 4cf68a0dcaf92c2a219f9bbf7eeef07a8476563e9ca04275674db3a7d3e2284d not found: ID does not exist" Dec 10 12:21:23 crc kubenswrapper[4689]: I1210 12:21:23.720443 4689 scope.go:117] "RemoveContainer" containerID="d78aea8e6fa05933eb70508746afb0a58e8832ae9c7035ce95b75c8184a32821" Dec 10 12:21:23 crc kubenswrapper[4689]: E1210 12:21:23.720769 4689 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d78aea8e6fa05933eb70508746afb0a58e8832ae9c7035ce95b75c8184a32821\": container with ID starting with d78aea8e6fa05933eb70508746afb0a58e8832ae9c7035ce95b75c8184a32821 not found: ID does not exist" containerID="d78aea8e6fa05933eb70508746afb0a58e8832ae9c7035ce95b75c8184a32821" Dec 10 12:21:23 crc kubenswrapper[4689]: I1210 12:21:23.720794 4689 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d78aea8e6fa05933eb70508746afb0a58e8832ae9c7035ce95b75c8184a32821"} err="failed to get container status \"d78aea8e6fa05933eb70508746afb0a58e8832ae9c7035ce95b75c8184a32821\": rpc error: code = NotFound desc = could not find container \"d78aea8e6fa05933eb70508746afb0a58e8832ae9c7035ce95b75c8184a32821\": container with ID starting with d78aea8e6fa05933eb70508746afb0a58e8832ae9c7035ce95b75c8184a32821 not found: ID does not exist" Dec 10 12:21:23 crc kubenswrapper[4689]: I1210 12:21:23.720808 4689 scope.go:117] "RemoveContainer" containerID="00a72fb576e629a0d1dab2421b907c3e15d5728dfd433d6fe2f2f4e3299b9955" Dec 10 12:21:23 crc kubenswrapper[4689]: E1210 12:21:23.721074 4689 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"00a72fb576e629a0d1dab2421b907c3e15d5728dfd433d6fe2f2f4e3299b9955\": container with ID starting with 00a72fb576e629a0d1dab2421b907c3e15d5728dfd433d6fe2f2f4e3299b9955 not found: ID does not exist" containerID="00a72fb576e629a0d1dab2421b907c3e15d5728dfd433d6fe2f2f4e3299b9955" Dec 10 12:21:23 crc kubenswrapper[4689]: I1210 12:21:23.721113 4689 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"00a72fb576e629a0d1dab2421b907c3e15d5728dfd433d6fe2f2f4e3299b9955"} err="failed to get container status \"00a72fb576e629a0d1dab2421b907c3e15d5728dfd433d6fe2f2f4e3299b9955\": rpc error: code = NotFound desc = could not find container \"00a72fb576e629a0d1dab2421b907c3e15d5728dfd433d6fe2f2f4e3299b9955\": container with ID starting with 00a72fb576e629a0d1dab2421b907c3e15d5728dfd433d6fe2f2f4e3299b9955 not found: ID does not exist" Dec 10 12:21:23 crc kubenswrapper[4689]: I1210 12:21:23.743050 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6c77d95457-5vszf"] Dec 10 12:21:23 crc kubenswrapper[4689]: E1210 12:21:23.743253 4689 cpu_manager.go:410] "RemoveStaleState: 
removing container" podUID="f0bf5778-7d94-4e68-8e6f-308482f98351" containerName="registry-server" Dec 10 12:21:23 crc kubenswrapper[4689]: I1210 12:21:23.743265 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="f0bf5778-7d94-4e68-8e6f-308482f98351" containerName="registry-server" Dec 10 12:21:23 crc kubenswrapper[4689]: E1210 12:21:23.743275 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fd6edc95-9b5d-4438-b427-fd07f62090b7" containerName="extract-utilities" Dec 10 12:21:23 crc kubenswrapper[4689]: I1210 12:21:23.743281 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="fd6edc95-9b5d-4438-b427-fd07f62090b7" containerName="extract-utilities" Dec 10 12:21:23 crc kubenswrapper[4689]: E1210 12:21:23.743290 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2d87c861-5bb2-4f79-8959-b2cebc28156d" containerName="registry-server" Dec 10 12:21:23 crc kubenswrapper[4689]: I1210 12:21:23.743296 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="2d87c861-5bb2-4f79-8959-b2cebc28156d" containerName="registry-server" Dec 10 12:21:23 crc kubenswrapper[4689]: E1210 12:21:23.743303 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c0ba491f-6b7f-4f09-84ab-e1227f689d45" containerName="route-controller-manager" Dec 10 12:21:23 crc kubenswrapper[4689]: I1210 12:21:23.743310 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="c0ba491f-6b7f-4f09-84ab-e1227f689d45" containerName="route-controller-manager" Dec 10 12:21:23 crc kubenswrapper[4689]: E1210 12:21:23.743318 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f0bf5778-7d94-4e68-8e6f-308482f98351" containerName="extract-utilities" Dec 10 12:21:23 crc kubenswrapper[4689]: I1210 12:21:23.743324 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="f0bf5778-7d94-4e68-8e6f-308482f98351" containerName="extract-utilities" Dec 10 12:21:23 crc kubenswrapper[4689]: E1210 12:21:23.743332 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f0bf5778-7d94-4e68-8e6f-308482f98351" containerName="extract-content" Dec 10 12:21:23 crc kubenswrapper[4689]: I1210 12:21:23.743339 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="f0bf5778-7d94-4e68-8e6f-308482f98351" containerName="extract-content" Dec 10 12:21:23 crc kubenswrapper[4689]: E1210 12:21:23.743350 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2d87c861-5bb2-4f79-8959-b2cebc28156d" containerName="extract-utilities" Dec 10 12:21:23 crc kubenswrapper[4689]: I1210 12:21:23.743356 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="2d87c861-5bb2-4f79-8959-b2cebc28156d" containerName="extract-utilities" Dec 10 12:21:23 crc kubenswrapper[4689]: E1210 12:21:23.743365 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fd6edc95-9b5d-4438-b427-fd07f62090b7" containerName="registry-server" Dec 10 12:21:23 crc kubenswrapper[4689]: I1210 12:21:23.743371 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="fd6edc95-9b5d-4438-b427-fd07f62090b7" containerName="registry-server" Dec 10 12:21:23 crc kubenswrapper[4689]: E1210 12:21:23.743379 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2d87c861-5bb2-4f79-8959-b2cebc28156d" containerName="extract-content" Dec 10 12:21:23 crc kubenswrapper[4689]: I1210 12:21:23.743386 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="2d87c861-5bb2-4f79-8959-b2cebc28156d" containerName="extract-content" Dec 10 12:21:23 crc kubenswrapper[4689]: E1210 
12:21:23.743394 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fd6edc95-9b5d-4438-b427-fd07f62090b7" containerName="extract-content" Dec 10 12:21:23 crc kubenswrapper[4689]: I1210 12:21:23.743399 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="fd6edc95-9b5d-4438-b427-fd07f62090b7" containerName="extract-content" Dec 10 12:21:23 crc kubenswrapper[4689]: I1210 12:21:23.743478 4689 memory_manager.go:354] "RemoveStaleState removing state" podUID="2d87c861-5bb2-4f79-8959-b2cebc28156d" containerName="registry-server" Dec 10 12:21:23 crc kubenswrapper[4689]: I1210 12:21:23.743487 4689 memory_manager.go:354] "RemoveStaleState removing state" podUID="fd6edc95-9b5d-4438-b427-fd07f62090b7" containerName="registry-server" Dec 10 12:21:23 crc kubenswrapper[4689]: I1210 12:21:23.743496 4689 memory_manager.go:354] "RemoveStaleState removing state" podUID="f0bf5778-7d94-4e68-8e6f-308482f98351" containerName="registry-server" Dec 10 12:21:23 crc kubenswrapper[4689]: I1210 12:21:23.743509 4689 memory_manager.go:354] "RemoveStaleState removing state" podUID="c0ba491f-6b7f-4f09-84ab-e1227f689d45" containerName="route-controller-manager" Dec 10 12:21:23 crc kubenswrapper[4689]: I1210 12:21:23.743837 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6c77d95457-5vszf" Dec 10 12:21:23 crc kubenswrapper[4689]: I1210 12:21:23.745729 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Dec 10 12:21:23 crc kubenswrapper[4689]: I1210 12:21:23.745941 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Dec 10 12:21:23 crc kubenswrapper[4689]: I1210 12:21:23.746050 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Dec 10 12:21:23 crc kubenswrapper[4689]: I1210 12:21:23.746131 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Dec 10 12:21:23 crc kubenswrapper[4689]: I1210 12:21:23.746508 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Dec 10 12:21:23 crc kubenswrapper[4689]: I1210 12:21:23.746777 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Dec 10 12:21:23 crc kubenswrapper[4689]: I1210 12:21:23.753524 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6c77d95457-5vszf"] Dec 10 12:21:23 crc kubenswrapper[4689]: I1210 12:21:23.753951 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f0bf5778-7d94-4e68-8e6f-308482f98351-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f0bf5778-7d94-4e68-8e6f-308482f98351" (UID: "f0bf5778-7d94-4e68-8e6f-308482f98351"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 10 12:21:23 crc kubenswrapper[4689]: I1210 12:21:23.821669 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7k9qm\" (UniqueName: \"kubernetes.io/projected/fcc54977-d0a4-4d54-b0dd-f03c54f1065d-kube-api-access-7k9qm\") pod \"route-controller-manager-6c77d95457-5vszf\" (UID: \"fcc54977-d0a4-4d54-b0dd-f03c54f1065d\") " pod="openshift-route-controller-manager/route-controller-manager-6c77d95457-5vszf" Dec 10 12:21:23 crc kubenswrapper[4689]: I1210 12:21:23.821733 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fcc54977-d0a4-4d54-b0dd-f03c54f1065d-serving-cert\") pod \"route-controller-manager-6c77d95457-5vszf\" (UID: \"fcc54977-d0a4-4d54-b0dd-f03c54f1065d\") " pod="openshift-route-controller-manager/route-controller-manager-6c77d95457-5vszf" Dec 10 12:21:23 crc kubenswrapper[4689]: I1210 12:21:23.821802 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/fcc54977-d0a4-4d54-b0dd-f03c54f1065d-client-ca\") pod \"route-controller-manager-6c77d95457-5vszf\" (UID: \"fcc54977-d0a4-4d54-b0dd-f03c54f1065d\") " pod="openshift-route-controller-manager/route-controller-manager-6c77d95457-5vszf" Dec 10 12:21:23 crc kubenswrapper[4689]: I1210 12:21:23.821837 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fcc54977-d0a4-4d54-b0dd-f03c54f1065d-config\") pod \"route-controller-manager-6c77d95457-5vszf\" (UID: \"fcc54977-d0a4-4d54-b0dd-f03c54f1065d\") " pod="openshift-route-controller-manager/route-controller-manager-6c77d95457-5vszf" Dec 10 12:21:23 crc kubenswrapper[4689]: I1210 12:21:23.821869 4689 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f0bf5778-7d94-4e68-8e6f-308482f98351-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 10 12:21:23 crc kubenswrapper[4689]: I1210 12:21:23.923413 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fcc54977-d0a4-4d54-b0dd-f03c54f1065d-serving-cert\") pod \"route-controller-manager-6c77d95457-5vszf\" (UID: \"fcc54977-d0a4-4d54-b0dd-f03c54f1065d\") " pod="openshift-route-controller-manager/route-controller-manager-6c77d95457-5vszf" Dec 10 12:21:23 crc kubenswrapper[4689]: I1210 12:21:23.923744 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/fcc54977-d0a4-4d54-b0dd-f03c54f1065d-client-ca\") pod \"route-controller-manager-6c77d95457-5vszf\" (UID: \"fcc54977-d0a4-4d54-b0dd-f03c54f1065d\") " pod="openshift-route-controller-manager/route-controller-manager-6c77d95457-5vszf" Dec 10 12:21:23 crc kubenswrapper[4689]: I1210 12:21:23.923789 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fcc54977-d0a4-4d54-b0dd-f03c54f1065d-config\") pod \"route-controller-manager-6c77d95457-5vszf\" (UID: \"fcc54977-d0a4-4d54-b0dd-f03c54f1065d\") " pod="openshift-route-controller-manager/route-controller-manager-6c77d95457-5vszf" Dec 10 12:21:23 crc kubenswrapper[4689]: I1210 12:21:23.923814 4689 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"kube-api-access-7k9qm\" (UniqueName: \"kubernetes.io/projected/fcc54977-d0a4-4d54-b0dd-f03c54f1065d-kube-api-access-7k9qm\") pod \"route-controller-manager-6c77d95457-5vszf\" (UID: \"fcc54977-d0a4-4d54-b0dd-f03c54f1065d\") " pod="openshift-route-controller-manager/route-controller-manager-6c77d95457-5vszf" Dec 10 12:21:23 crc kubenswrapper[4689]: I1210 12:21:23.928168 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fcc54977-d0a4-4d54-b0dd-f03c54f1065d-config\") pod \"route-controller-manager-6c77d95457-5vszf\" (UID: \"fcc54977-d0a4-4d54-b0dd-f03c54f1065d\") " pod="openshift-route-controller-manager/route-controller-manager-6c77d95457-5vszf" Dec 10 12:21:23 crc kubenswrapper[4689]: I1210 12:21:23.930860 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/fcc54977-d0a4-4d54-b0dd-f03c54f1065d-client-ca\") pod \"route-controller-manager-6c77d95457-5vszf\" (UID: \"fcc54977-d0a4-4d54-b0dd-f03c54f1065d\") " pod="openshift-route-controller-manager/route-controller-manager-6c77d95457-5vszf" Dec 10 12:21:23 crc kubenswrapper[4689]: I1210 12:21:23.934113 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fcc54977-d0a4-4d54-b0dd-f03c54f1065d-serving-cert\") pod \"route-controller-manager-6c77d95457-5vszf\" (UID: \"fcc54977-d0a4-4d54-b0dd-f03c54f1065d\") " pod="openshift-route-controller-manager/route-controller-manager-6c77d95457-5vszf" Dec 10 12:21:23 crc kubenswrapper[4689]: I1210 12:21:23.935007 4689 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-28k8w"] Dec 10 12:21:23 crc kubenswrapper[4689]: I1210 12:21:23.956674 4689 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-28k8w"] Dec 10 12:21:23 crc kubenswrapper[4689]: I1210 12:21:23.957377 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7k9qm\" (UniqueName: \"kubernetes.io/projected/fcc54977-d0a4-4d54-b0dd-f03c54f1065d-kube-api-access-7k9qm\") pod \"route-controller-manager-6c77d95457-5vszf\" (UID: \"fcc54977-d0a4-4d54-b0dd-f03c54f1065d\") " pod="openshift-route-controller-manager/route-controller-manager-6c77d95457-5vszf" Dec 10 12:21:24 crc kubenswrapper[4689]: I1210 12:21:24.071623 4689 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6c77d95457-5vszf" Dec 10 12:21:24 crc kubenswrapper[4689]: I1210 12:21:24.488040 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6c77d95457-5vszf"] Dec 10 12:21:24 crc kubenswrapper[4689]: W1210 12:21:24.492913 4689 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfcc54977_d0a4_4d54_b0dd_f03c54f1065d.slice/crio-8e9ae2bdc6bca42af43a53c1b29dac2b0f099c1ac77ba75538075378e5d17661 WatchSource:0}: Error finding container 8e9ae2bdc6bca42af43a53c1b29dac2b0f099c1ac77ba75538075378e5d17661: Status 404 returned error can't find the container with id 8e9ae2bdc6bca42af43a53c1b29dac2b0f099c1ac77ba75538075378e5d17661 Dec 10 12:21:24 crc kubenswrapper[4689]: I1210 12:21:24.506067 4689 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c0ba491f-6b7f-4f09-84ab-e1227f689d45" path="/var/lib/kubelet/pods/c0ba491f-6b7f-4f09-84ab-e1227f689d45/volumes" Dec 10 12:21:24 crc kubenswrapper[4689]: I1210 12:21:24.506514 4689 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f0bf5778-7d94-4e68-8e6f-308482f98351" path="/var/lib/kubelet/pods/f0bf5778-7d94-4e68-8e6f-308482f98351/volumes" Dec 10 12:21:24 crc kubenswrapper[4689]: I1210 12:21:24.507234 4689 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fd6edc95-9b5d-4438-b427-fd07f62090b7" path="/var/lib/kubelet/pods/fd6edc95-9b5d-4438-b427-fd07f62090b7/volumes" Dec 10 12:21:24 crc kubenswrapper[4689]: I1210 12:21:24.619074 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6c77d95457-5vszf" event={"ID":"fcc54977-d0a4-4d54-b0dd-f03c54f1065d","Type":"ContainerStarted","Data":"8e9ae2bdc6bca42af43a53c1b29dac2b0f099c1ac77ba75538075378e5d17661"} Dec 10 12:21:25 crc kubenswrapper[4689]: I1210 12:21:25.627782 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6c77d95457-5vszf" event={"ID":"fcc54977-d0a4-4d54-b0dd-f03c54f1065d","Type":"ContainerStarted","Data":"699a53ab09715ba0331321cd4b00a8005d994e157b4bcf5de570efcf0f5510ff"} Dec 10 12:21:25 crc kubenswrapper[4689]: I1210 12:21:25.629014 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6c77d95457-5vszf" Dec 10 12:21:25 crc kubenswrapper[4689]: I1210 12:21:25.633750 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6c77d95457-5vszf" Dec 10 12:21:25 crc kubenswrapper[4689]: I1210 12:21:25.654492 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6c77d95457-5vszf" podStartSLOduration=3.65446329 podStartE2EDuration="3.65446329s" podCreationTimestamp="2025-12-10 12:21:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 12:21:25.649940113 +0000 UTC m=+353.438021261" watchObservedRunningTime="2025-12-10 12:21:25.65446329 +0000 UTC m=+353.442544448" Dec 10 12:21:33 crc kubenswrapper[4689]: I1210 12:21:33.760389 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66df7c8f76-8v6mm" Dec 10 
12:21:33 crc kubenswrapper[4689]: I1210 12:21:33.816706 4689 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-jw2qz"] Dec 10 12:21:36 crc kubenswrapper[4689]: I1210 12:21:36.553869 4689 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-ks26t"] Dec 10 12:21:36 crc kubenswrapper[4689]: I1210 12:21:36.554783 4689 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-ks26t" podUID="c8d409aa-8f6d-4ed5-816c-e572e371d425" containerName="registry-server" containerID="cri-o://a946b96c40a3419ede8d391181b1b722ccbf17d45a9c8928c218593dd99b6e6a" gracePeriod=30 Dec 10 12:21:36 crc kubenswrapper[4689]: I1210 12:21:36.564090 4689 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-rkh5c"] Dec 10 12:21:36 crc kubenswrapper[4689]: I1210 12:21:36.564413 4689 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-rkh5c" podUID="d51b889b-7485-4c32-84de-3ddfd7ce23e9" containerName="registry-server" containerID="cri-o://238a193f555fb45659447c913c44b694040adb4977f7082de9a50e6f441f97fd" gracePeriod=30 Dec 10 12:21:36 crc kubenswrapper[4689]: I1210 12:21:36.586192 4689 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-9wlmb"] Dec 10 12:21:36 crc kubenswrapper[4689]: I1210 12:21:36.586429 4689 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-79b997595-9wlmb" podUID="88d80bbe-a9ab-4c91-b0b2-485e106dd150" containerName="marketplace-operator" containerID="cri-o://5f2b487045cdea0bc6cfe8786a69403e961b1587cd97296161234fa4626b14ff" gracePeriod=30 Dec 10 12:21:36 crc kubenswrapper[4689]: I1210 12:21:36.610091 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-ckfqs"] Dec 10 12:21:36 crc kubenswrapper[4689]: I1210 12:21:36.616695 4689 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-ckfqs" Dec 10 12:21:36 crc kubenswrapper[4689]: I1210 12:21:36.637541 4689 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-6whbg"] Dec 10 12:21:36 crc kubenswrapper[4689]: I1210 12:21:36.637767 4689 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-6whbg" podUID="23db100b-85ac-48e2-834b-741c9d94cf8f" containerName="registry-server" containerID="cri-o://c9b8b0ffe112ba6eacb1e12926d0088d7152568c29cdbf6319b0ab22a15bd941" gracePeriod=30 Dec 10 12:21:36 crc kubenswrapper[4689]: I1210 12:21:36.645210 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-ckfqs"] Dec 10 12:21:36 crc kubenswrapper[4689]: I1210 12:21:36.649371 4689 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-9hpt8"] Dec 10 12:21:36 crc kubenswrapper[4689]: I1210 12:21:36.649607 4689 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-9hpt8" podUID="2ebd1829-eb7a-4128-ab9d-79a1f75fa39e" containerName="registry-server" containerID="cri-o://64ed0b2ce9390d1b998e0fa118980bd26e4b95e7b75f6568e4e4b85d982863c8" gracePeriod=30 Dec 10 12:21:36 crc kubenswrapper[4689]: I1210 12:21:36.810304 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/2fe57bc1-cf21-44d7-b6ca-54319a03a415-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-ckfqs\" (UID: \"2fe57bc1-cf21-44d7-b6ca-54319a03a415\") " pod="openshift-marketplace/marketplace-operator-79b997595-ckfqs" Dec 10 12:21:36 crc kubenswrapper[4689]: I1210 12:21:36.810384 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2npp6\" (UniqueName: \"kubernetes.io/projected/2fe57bc1-cf21-44d7-b6ca-54319a03a415-kube-api-access-2npp6\") pod \"marketplace-operator-79b997595-ckfqs\" (UID: \"2fe57bc1-cf21-44d7-b6ca-54319a03a415\") " pod="openshift-marketplace/marketplace-operator-79b997595-ckfqs" Dec 10 12:21:36 crc kubenswrapper[4689]: I1210 12:21:36.810465 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2fe57bc1-cf21-44d7-b6ca-54319a03a415-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-ckfqs\" (UID: \"2fe57bc1-cf21-44d7-b6ca-54319a03a415\") " pod="openshift-marketplace/marketplace-operator-79b997595-ckfqs" Dec 10 12:21:36 crc kubenswrapper[4689]: I1210 12:21:36.911768 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/2fe57bc1-cf21-44d7-b6ca-54319a03a415-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-ckfqs\" (UID: \"2fe57bc1-cf21-44d7-b6ca-54319a03a415\") " pod="openshift-marketplace/marketplace-operator-79b997595-ckfqs" Dec 10 12:21:36 crc kubenswrapper[4689]: I1210 12:21:36.911854 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2npp6\" (UniqueName: \"kubernetes.io/projected/2fe57bc1-cf21-44d7-b6ca-54319a03a415-kube-api-access-2npp6\") pod \"marketplace-operator-79b997595-ckfqs\" (UID: \"2fe57bc1-cf21-44d7-b6ca-54319a03a415\") " 
pod="openshift-marketplace/marketplace-operator-79b997595-ckfqs" Dec 10 12:21:36 crc kubenswrapper[4689]: I1210 12:21:36.911903 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2fe57bc1-cf21-44d7-b6ca-54319a03a415-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-ckfqs\" (UID: \"2fe57bc1-cf21-44d7-b6ca-54319a03a415\") " pod="openshift-marketplace/marketplace-operator-79b997595-ckfqs" Dec 10 12:21:36 crc kubenswrapper[4689]: I1210 12:21:36.913858 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2fe57bc1-cf21-44d7-b6ca-54319a03a415-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-ckfqs\" (UID: \"2fe57bc1-cf21-44d7-b6ca-54319a03a415\") " pod="openshift-marketplace/marketplace-operator-79b997595-ckfqs" Dec 10 12:21:36 crc kubenswrapper[4689]: I1210 12:21:36.918153 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/2fe57bc1-cf21-44d7-b6ca-54319a03a415-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-ckfqs\" (UID: \"2fe57bc1-cf21-44d7-b6ca-54319a03a415\") " pod="openshift-marketplace/marketplace-operator-79b997595-ckfqs" Dec 10 12:21:36 crc kubenswrapper[4689]: I1210 12:21:36.941224 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2npp6\" (UniqueName: \"kubernetes.io/projected/2fe57bc1-cf21-44d7-b6ca-54319a03a415-kube-api-access-2npp6\") pod \"marketplace-operator-79b997595-ckfqs\" (UID: \"2fe57bc1-cf21-44d7-b6ca-54319a03a415\") " pod="openshift-marketplace/marketplace-operator-79b997595-ckfqs" Dec 10 12:21:36 crc kubenswrapper[4689]: I1210 12:21:36.941473 4689 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-ckfqs" Dec 10 12:21:37 crc kubenswrapper[4689]: I1210 12:21:37.166655 4689 patch_prober.go:28] interesting pod/machine-config-daemon-db6zk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 10 12:21:37 crc kubenswrapper[4689]: I1210 12:21:37.166742 4689 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-db6zk" podUID="a41ebdcd-910f-4669-992d-296e1a92162b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 10 12:21:37 crc kubenswrapper[4689]: I1210 12:21:37.392215 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-ckfqs"] Dec 10 12:21:37 crc kubenswrapper[4689]: W1210 12:21:37.402387 4689 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2fe57bc1_cf21_44d7_b6ca_54319a03a415.slice/crio-4e3fcca147a6ef46a5147ee23a23f1320123001374bf375a35e6ed268d20f4b0 WatchSource:0}: Error finding container 4e3fcca147a6ef46a5147ee23a23f1320123001374bf375a35e6ed268d20f4b0: Status 404 returned error can't find the container with id 4e3fcca147a6ef46a5147ee23a23f1320123001374bf375a35e6ed268d20f4b0 Dec 10 12:21:37 crc kubenswrapper[4689]: I1210 12:21:37.695590 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-ckfqs" event={"ID":"2fe57bc1-cf21-44d7-b6ca-54319a03a415","Type":"ContainerStarted","Data":"4e3fcca147a6ef46a5147ee23a23f1320123001374bf375a35e6ed268d20f4b0"} Dec 10 12:21:37 crc kubenswrapper[4689]: I1210 12:21:37.698486 4689 generic.go:334] "Generic (PLEG): container finished" podID="c8d409aa-8f6d-4ed5-816c-e572e371d425" containerID="a946b96c40a3419ede8d391181b1b722ccbf17d45a9c8928c218593dd99b6e6a" exitCode=0 Dec 10 12:21:37 crc kubenswrapper[4689]: I1210 12:21:37.698527 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ks26t" event={"ID":"c8d409aa-8f6d-4ed5-816c-e572e371d425","Type":"ContainerDied","Data":"a946b96c40a3419ede8d391181b1b722ccbf17d45a9c8928c218593dd99b6e6a"} Dec 10 12:21:37 crc kubenswrapper[4689]: I1210 12:21:37.824349 4689 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-9wlmb container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.29:8080/healthz\": dial tcp 10.217.0.29:8080: connect: connection refused" start-of-body= Dec 10 12:21:37 crc kubenswrapper[4689]: I1210 12:21:37.824419 4689 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-9wlmb" podUID="88d80bbe-a9ab-4c91-b0b2-485e106dd150" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.29:8080/healthz\": dial tcp 10.217.0.29:8080: connect: connection refused" Dec 10 12:21:38 crc kubenswrapper[4689]: I1210 12:21:38.580353 4689 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-ks26t" Dec 10 12:21:38 crc kubenswrapper[4689]: I1210 12:21:38.708749 4689 generic.go:334] "Generic (PLEG): container finished" podID="88d80bbe-a9ab-4c91-b0b2-485e106dd150" containerID="5f2b487045cdea0bc6cfe8786a69403e961b1587cd97296161234fa4626b14ff" exitCode=0 Dec 10 12:21:38 crc kubenswrapper[4689]: I1210 12:21:38.708801 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-9wlmb" event={"ID":"88d80bbe-a9ab-4c91-b0b2-485e106dd150","Type":"ContainerDied","Data":"5f2b487045cdea0bc6cfe8786a69403e961b1587cd97296161234fa4626b14ff"} Dec 10 12:21:38 crc kubenswrapper[4689]: I1210 12:21:38.708849 4689 scope.go:117] "RemoveContainer" containerID="959b0b1179d522d266cd5353b9313a9bb5dd0f748cb8f893bc5802f46f7896b8" Dec 10 12:21:38 crc kubenswrapper[4689]: I1210 12:21:38.716260 4689 generic.go:334] "Generic (PLEG): container finished" podID="23db100b-85ac-48e2-834b-741c9d94cf8f" containerID="c9b8b0ffe112ba6eacb1e12926d0088d7152568c29cdbf6319b0ab22a15bd941" exitCode=0 Dec 10 12:21:38 crc kubenswrapper[4689]: I1210 12:21:38.716360 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6whbg" event={"ID":"23db100b-85ac-48e2-834b-741c9d94cf8f","Type":"ContainerDied","Data":"c9b8b0ffe112ba6eacb1e12926d0088d7152568c29cdbf6319b0ab22a15bd941"} Dec 10 12:21:38 crc kubenswrapper[4689]: I1210 12:21:38.725456 4689 generic.go:334] "Generic (PLEG): container finished" podID="d51b889b-7485-4c32-84de-3ddfd7ce23e9" containerID="238a193f555fb45659447c913c44b694040adb4977f7082de9a50e6f441f97fd" exitCode=0 Dec 10 12:21:38 crc kubenswrapper[4689]: I1210 12:21:38.725596 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rkh5c" event={"ID":"d51b889b-7485-4c32-84de-3ddfd7ce23e9","Type":"ContainerDied","Data":"238a193f555fb45659447c913c44b694040adb4977f7082de9a50e6f441f97fd"} Dec 10 12:21:38 crc kubenswrapper[4689]: I1210 12:21:38.738001 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c8d409aa-8f6d-4ed5-816c-e572e371d425-catalog-content\") pod \"c8d409aa-8f6d-4ed5-816c-e572e371d425\" (UID: \"c8d409aa-8f6d-4ed5-816c-e572e371d425\") " Dec 10 12:21:38 crc kubenswrapper[4689]: I1210 12:21:38.738066 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c8d409aa-8f6d-4ed5-816c-e572e371d425-utilities\") pod \"c8d409aa-8f6d-4ed5-816c-e572e371d425\" (UID: \"c8d409aa-8f6d-4ed5-816c-e572e371d425\") " Dec 10 12:21:38 crc kubenswrapper[4689]: I1210 12:21:38.738143 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9bccn\" (UniqueName: \"kubernetes.io/projected/c8d409aa-8f6d-4ed5-816c-e572e371d425-kube-api-access-9bccn\") pod \"c8d409aa-8f6d-4ed5-816c-e572e371d425\" (UID: \"c8d409aa-8f6d-4ed5-816c-e572e371d425\") " Dec 10 12:21:38 crc kubenswrapper[4689]: I1210 12:21:38.740175 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-ckfqs" event={"ID":"2fe57bc1-cf21-44d7-b6ca-54319a03a415","Type":"ContainerStarted","Data":"a03ae78a2e7868f230488a0c358f2b8eae3ee6215fc910e6641583f407d501cb"} Dec 10 12:21:38 crc kubenswrapper[4689]: I1210 12:21:38.740488 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/c8d409aa-8f6d-4ed5-816c-e572e371d425-utilities" (OuterVolumeSpecName: "utilities") pod "c8d409aa-8f6d-4ed5-816c-e572e371d425" (UID: "c8d409aa-8f6d-4ed5-816c-e572e371d425"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 10 12:21:38 crc kubenswrapper[4689]: I1210 12:21:38.743062 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ks26t" event={"ID":"c8d409aa-8f6d-4ed5-816c-e572e371d425","Type":"ContainerDied","Data":"f5e09495901baaa3f79dc6dadbb3de39ba949a1d04659b9e4b8f36b6e25185bf"} Dec 10 12:21:38 crc kubenswrapper[4689]: I1210 12:21:38.743083 4689 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-ks26t" Dec 10 12:21:38 crc kubenswrapper[4689]: I1210 12:21:38.746170 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c8d409aa-8f6d-4ed5-816c-e572e371d425-kube-api-access-9bccn" (OuterVolumeSpecName: "kube-api-access-9bccn") pod "c8d409aa-8f6d-4ed5-816c-e572e371d425" (UID: "c8d409aa-8f6d-4ed5-816c-e572e371d425"). InnerVolumeSpecName "kube-api-access-9bccn". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 12:21:38 crc kubenswrapper[4689]: I1210 12:21:38.748772 4689 generic.go:334] "Generic (PLEG): container finished" podID="2ebd1829-eb7a-4128-ab9d-79a1f75fa39e" containerID="64ed0b2ce9390d1b998e0fa118980bd26e4b95e7b75f6568e4e4b85d982863c8" exitCode=0 Dec 10 12:21:38 crc kubenswrapper[4689]: I1210 12:21:38.748802 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9hpt8" event={"ID":"2ebd1829-eb7a-4128-ab9d-79a1f75fa39e","Type":"ContainerDied","Data":"64ed0b2ce9390d1b998e0fa118980bd26e4b95e7b75f6568e4e4b85d982863c8"} Dec 10 12:21:38 crc kubenswrapper[4689]: I1210 12:21:38.787904 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c8d409aa-8f6d-4ed5-816c-e572e371d425-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c8d409aa-8f6d-4ed5-816c-e572e371d425" (UID: "c8d409aa-8f6d-4ed5-816c-e572e371d425"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 10 12:21:38 crc kubenswrapper[4689]: I1210 12:21:38.840613 4689 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c8d409aa-8f6d-4ed5-816c-e572e371d425-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 10 12:21:38 crc kubenswrapper[4689]: I1210 12:21:38.840653 4689 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c8d409aa-8f6d-4ed5-816c-e572e371d425-utilities\") on node \"crc\" DevicePath \"\"" Dec 10 12:21:38 crc kubenswrapper[4689]: I1210 12:21:38.840677 4689 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9bccn\" (UniqueName: \"kubernetes.io/projected/c8d409aa-8f6d-4ed5-816c-e572e371d425-kube-api-access-9bccn\") on node \"crc\" DevicePath \"\"" Dec 10 12:21:39 crc kubenswrapper[4689]: I1210 12:21:39.085890 4689 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-ks26t"] Dec 10 12:21:39 crc kubenswrapper[4689]: I1210 12:21:39.091764 4689 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-ks26t"] Dec 10 12:21:39 crc kubenswrapper[4689]: I1210 12:21:39.351379 4689 scope.go:117] "RemoveContainer" containerID="a946b96c40a3419ede8d391181b1b722ccbf17d45a9c8928c218593dd99b6e6a" Dec 10 12:21:39 crc kubenswrapper[4689]: I1210 12:21:39.372694 4689 scope.go:117] "RemoveContainer" containerID="059e8153783f027bbbe033a8a38ef043e0a427a24c4350ab8f8331086abbfe69" Dec 10 12:21:39 crc kubenswrapper[4689]: I1210 12:21:39.393711 4689 scope.go:117] "RemoveContainer" containerID="b0919d0932abaa3581310de974beb14f08c4f90cfd4dce78ac7beb7d5ee1ecfb" Dec 10 12:21:39 crc kubenswrapper[4689]: I1210 12:21:39.634346 4689 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-rkh5c" Dec 10 12:21:39 crc kubenswrapper[4689]: I1210 12:21:39.690013 4689 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6whbg" Dec 10 12:21:39 crc kubenswrapper[4689]: I1210 12:21:39.691054 4689 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-9wlmb" Dec 10 12:21:39 crc kubenswrapper[4689]: I1210 12:21:39.700456 4689 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-9hpt8" Dec 10 12:21:39 crc kubenswrapper[4689]: I1210 12:21:39.755122 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d51b889b-7485-4c32-84de-3ddfd7ce23e9-catalog-content\") pod \"d51b889b-7485-4c32-84de-3ddfd7ce23e9\" (UID: \"d51b889b-7485-4c32-84de-3ddfd7ce23e9\") " Dec 10 12:21:39 crc kubenswrapper[4689]: I1210 12:21:39.755229 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5sjfd\" (UniqueName: \"kubernetes.io/projected/d51b889b-7485-4c32-84de-3ddfd7ce23e9-kube-api-access-5sjfd\") pod \"d51b889b-7485-4c32-84de-3ddfd7ce23e9\" (UID: \"d51b889b-7485-4c32-84de-3ddfd7ce23e9\") " Dec 10 12:21:39 crc kubenswrapper[4689]: I1210 12:21:39.755293 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d51b889b-7485-4c32-84de-3ddfd7ce23e9-utilities\") pod \"d51b889b-7485-4c32-84de-3ddfd7ce23e9\" (UID: \"d51b889b-7485-4c32-84de-3ddfd7ce23e9\") " Dec 10 12:21:39 crc kubenswrapper[4689]: I1210 12:21:39.756367 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d51b889b-7485-4c32-84de-3ddfd7ce23e9-utilities" (OuterVolumeSpecName: "utilities") pod "d51b889b-7485-4c32-84de-3ddfd7ce23e9" (UID: "d51b889b-7485-4c32-84de-3ddfd7ce23e9"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 10 12:21:39 crc kubenswrapper[4689]: I1210 12:21:39.762345 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d51b889b-7485-4c32-84de-3ddfd7ce23e9-kube-api-access-5sjfd" (OuterVolumeSpecName: "kube-api-access-5sjfd") pod "d51b889b-7485-4c32-84de-3ddfd7ce23e9" (UID: "d51b889b-7485-4c32-84de-3ddfd7ce23e9"). InnerVolumeSpecName "kube-api-access-5sjfd". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 12:21:39 crc kubenswrapper[4689]: I1210 12:21:39.764471 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9hpt8" event={"ID":"2ebd1829-eb7a-4128-ab9d-79a1f75fa39e","Type":"ContainerDied","Data":"8e634e111acae47c25577d040359103870e2f4d9c00bbe06c93c2732170af46d"} Dec 10 12:21:39 crc kubenswrapper[4689]: I1210 12:21:39.764498 4689 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-9hpt8" Dec 10 12:21:39 crc kubenswrapper[4689]: I1210 12:21:39.764526 4689 scope.go:117] "RemoveContainer" containerID="64ed0b2ce9390d1b998e0fa118980bd26e4b95e7b75f6568e4e4b85d982863c8" Dec 10 12:21:39 crc kubenswrapper[4689]: I1210 12:21:39.766482 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-9wlmb" event={"ID":"88d80bbe-a9ab-4c91-b0b2-485e106dd150","Type":"ContainerDied","Data":"819a8a68360e50fdd9602a6f72bb09c32c287cc688df27c76a6cea175d88e616"} Dec 10 12:21:39 crc kubenswrapper[4689]: I1210 12:21:39.766532 4689 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-9wlmb" Dec 10 12:21:39 crc kubenswrapper[4689]: I1210 12:21:39.768603 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6whbg" event={"ID":"23db100b-85ac-48e2-834b-741c9d94cf8f","Type":"ContainerDied","Data":"76e5943bb26a60c123c65fd4d3aeb7b6a224343282216711b5c7ae184abd9906"} Dec 10 12:21:39 crc kubenswrapper[4689]: I1210 12:21:39.768651 4689 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6whbg" Dec 10 12:21:39 crc kubenswrapper[4689]: I1210 12:21:39.771936 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rkh5c" event={"ID":"d51b889b-7485-4c32-84de-3ddfd7ce23e9","Type":"ContainerDied","Data":"8608b9c25eb95a7262721ca9581607fdc9cf66d8fc9da0d98031f73b537f5f10"} Dec 10 12:21:39 crc kubenswrapper[4689]: I1210 12:21:39.772101 4689 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-rkh5c" Dec 10 12:21:39 crc kubenswrapper[4689]: I1210 12:21:39.772118 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-ckfqs" Dec 10 12:21:39 crc kubenswrapper[4689]: I1210 12:21:39.774669 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-ckfqs" Dec 10 12:21:39 crc kubenswrapper[4689]: I1210 12:21:39.785721 4689 scope.go:117] "RemoveContainer" containerID="c5d4f47d0e6bd36cf20e82d2a1314d2a24cf791894f3d04b5a6e9ae0ee9ef414" Dec 10 12:21:39 crc kubenswrapper[4689]: I1210 12:21:39.788228 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-ckfqs" podStartSLOduration=3.7882069080000003 podStartE2EDuration="3.788206908s" podCreationTimestamp="2025-12-10 12:21:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 12:21:39.785399385 +0000 UTC m=+367.573480543" watchObservedRunningTime="2025-12-10 12:21:39.788206908 +0000 UTC m=+367.576288046" Dec 10 12:21:39 crc kubenswrapper[4689]: I1210 12:21:39.815572 4689 scope.go:117] "RemoveContainer" containerID="12d0bde844504c3fe38d57afd29b65a25f56fed6cc2e2a01594b03339a1713dc" Dec 10 12:21:39 crc kubenswrapper[4689]: I1210 12:21:39.823638 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d51b889b-7485-4c32-84de-3ddfd7ce23e9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d51b889b-7485-4c32-84de-3ddfd7ce23e9" (UID: "d51b889b-7485-4c32-84de-3ddfd7ce23e9"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 10 12:21:39 crc kubenswrapper[4689]: I1210 12:21:39.832921 4689 scope.go:117] "RemoveContainer" containerID="5f2b487045cdea0bc6cfe8786a69403e961b1587cd97296161234fa4626b14ff" Dec 10 12:21:39 crc kubenswrapper[4689]: I1210 12:21:39.850408 4689 scope.go:117] "RemoveContainer" containerID="c9b8b0ffe112ba6eacb1e12926d0088d7152568c29cdbf6319b0ab22a15bd941" Dec 10 12:21:39 crc kubenswrapper[4689]: I1210 12:21:39.856318 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/23db100b-85ac-48e2-834b-741c9d94cf8f-utilities\") pod \"23db100b-85ac-48e2-834b-741c9d94cf8f\" (UID: \"23db100b-85ac-48e2-834b-741c9d94cf8f\") " Dec 10 12:21:39 crc kubenswrapper[4689]: I1210 12:21:39.856367 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zlgk4\" (UniqueName: \"kubernetes.io/projected/2ebd1829-eb7a-4128-ab9d-79a1f75fa39e-kube-api-access-zlgk4\") pod \"2ebd1829-eb7a-4128-ab9d-79a1f75fa39e\" (UID: \"2ebd1829-eb7a-4128-ab9d-79a1f75fa39e\") " Dec 10 12:21:39 crc kubenswrapper[4689]: I1210 12:21:39.856409 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/88d80bbe-a9ab-4c91-b0b2-485e106dd150-marketplace-operator-metrics\") pod \"88d80bbe-a9ab-4c91-b0b2-485e106dd150\" (UID: \"88d80bbe-a9ab-4c91-b0b2-485e106dd150\") " Dec 10 12:21:39 crc kubenswrapper[4689]: I1210 12:21:39.856432 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2ebd1829-eb7a-4128-ab9d-79a1f75fa39e-utilities\") pod \"2ebd1829-eb7a-4128-ab9d-79a1f75fa39e\" (UID: \"2ebd1829-eb7a-4128-ab9d-79a1f75fa39e\") " Dec 10 12:21:39 crc kubenswrapper[4689]: I1210 12:21:39.856450 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/88d80bbe-a9ab-4c91-b0b2-485e106dd150-marketplace-trusted-ca\") pod \"88d80bbe-a9ab-4c91-b0b2-485e106dd150\" (UID: \"88d80bbe-a9ab-4c91-b0b2-485e106dd150\") " Dec 10 12:21:39 crc kubenswrapper[4689]: I1210 12:21:39.856481 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tdscs\" (UniqueName: \"kubernetes.io/projected/23db100b-85ac-48e2-834b-741c9d94cf8f-kube-api-access-tdscs\") pod \"23db100b-85ac-48e2-834b-741c9d94cf8f\" (UID: \"23db100b-85ac-48e2-834b-741c9d94cf8f\") " Dec 10 12:21:39 crc kubenswrapper[4689]: I1210 12:21:39.856505 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b77tc\" (UniqueName: \"kubernetes.io/projected/88d80bbe-a9ab-4c91-b0b2-485e106dd150-kube-api-access-b77tc\") pod \"88d80bbe-a9ab-4c91-b0b2-485e106dd150\" (UID: \"88d80bbe-a9ab-4c91-b0b2-485e106dd150\") " Dec 10 12:21:39 crc kubenswrapper[4689]: I1210 12:21:39.856527 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/23db100b-85ac-48e2-834b-741c9d94cf8f-catalog-content\") pod \"23db100b-85ac-48e2-834b-741c9d94cf8f\" (UID: \"23db100b-85ac-48e2-834b-741c9d94cf8f\") " Dec 10 12:21:39 crc kubenswrapper[4689]: I1210 12:21:39.856572 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/2ebd1829-eb7a-4128-ab9d-79a1f75fa39e-catalog-content\") pod \"2ebd1829-eb7a-4128-ab9d-79a1f75fa39e\" (UID: \"2ebd1829-eb7a-4128-ab9d-79a1f75fa39e\") " Dec 10 12:21:39 crc kubenswrapper[4689]: I1210 12:21:39.856773 4689 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5sjfd\" (UniqueName: \"kubernetes.io/projected/d51b889b-7485-4c32-84de-3ddfd7ce23e9-kube-api-access-5sjfd\") on node \"crc\" DevicePath \"\"" Dec 10 12:21:39 crc kubenswrapper[4689]: I1210 12:21:39.856790 4689 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d51b889b-7485-4c32-84de-3ddfd7ce23e9-utilities\") on node \"crc\" DevicePath \"\"" Dec 10 12:21:39 crc kubenswrapper[4689]: I1210 12:21:39.856799 4689 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d51b889b-7485-4c32-84de-3ddfd7ce23e9-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 10 12:21:39 crc kubenswrapper[4689]: I1210 12:21:39.857495 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/23db100b-85ac-48e2-834b-741c9d94cf8f-utilities" (OuterVolumeSpecName: "utilities") pod "23db100b-85ac-48e2-834b-741c9d94cf8f" (UID: "23db100b-85ac-48e2-834b-741c9d94cf8f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 10 12:21:39 crc kubenswrapper[4689]: I1210 12:21:39.857645 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2ebd1829-eb7a-4128-ab9d-79a1f75fa39e-utilities" (OuterVolumeSpecName: "utilities") pod "2ebd1829-eb7a-4128-ab9d-79a1f75fa39e" (UID: "2ebd1829-eb7a-4128-ab9d-79a1f75fa39e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 10 12:21:39 crc kubenswrapper[4689]: I1210 12:21:39.858033 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/88d80bbe-a9ab-4c91-b0b2-485e106dd150-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "88d80bbe-a9ab-4c91-b0b2-485e106dd150" (UID: "88d80bbe-a9ab-4c91-b0b2-485e106dd150"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 12:21:39 crc kubenswrapper[4689]: I1210 12:21:39.860815 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/88d80bbe-a9ab-4c91-b0b2-485e106dd150-kube-api-access-b77tc" (OuterVolumeSpecName: "kube-api-access-b77tc") pod "88d80bbe-a9ab-4c91-b0b2-485e106dd150" (UID: "88d80bbe-a9ab-4c91-b0b2-485e106dd150"). InnerVolumeSpecName "kube-api-access-b77tc". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 12:21:39 crc kubenswrapper[4689]: I1210 12:21:39.860989 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/88d80bbe-a9ab-4c91-b0b2-485e106dd150-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "88d80bbe-a9ab-4c91-b0b2-485e106dd150" (UID: "88d80bbe-a9ab-4c91-b0b2-485e106dd150"). InnerVolumeSpecName "marketplace-operator-metrics". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 12:21:39 crc kubenswrapper[4689]: I1210 12:21:39.861664 4689 scope.go:117] "RemoveContainer" containerID="6b271ff39a1b946fd98129c09345cee53c3d1126e3ba3c50af35922721535a36" Dec 10 12:21:39 crc kubenswrapper[4689]: I1210 12:21:39.865070 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2ebd1829-eb7a-4128-ab9d-79a1f75fa39e-kube-api-access-zlgk4" (OuterVolumeSpecName: "kube-api-access-zlgk4") pod "2ebd1829-eb7a-4128-ab9d-79a1f75fa39e" (UID: "2ebd1829-eb7a-4128-ab9d-79a1f75fa39e"). InnerVolumeSpecName "kube-api-access-zlgk4". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 12:21:39 crc kubenswrapper[4689]: I1210 12:21:39.865850 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/23db100b-85ac-48e2-834b-741c9d94cf8f-kube-api-access-tdscs" (OuterVolumeSpecName: "kube-api-access-tdscs") pod "23db100b-85ac-48e2-834b-741c9d94cf8f" (UID: "23db100b-85ac-48e2-834b-741c9d94cf8f"). InnerVolumeSpecName "kube-api-access-tdscs". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 12:21:39 crc kubenswrapper[4689]: I1210 12:21:39.882386 4689 scope.go:117] "RemoveContainer" containerID="ebc9f213009dd45ec7e001774119fe6377e608b7b3e57e4c41451c4f74ecf141" Dec 10 12:21:39 crc kubenswrapper[4689]: I1210 12:21:39.892763 4689 scope.go:117] "RemoveContainer" containerID="238a193f555fb45659447c913c44b694040adb4977f7082de9a50e6f441f97fd" Dec 10 12:21:39 crc kubenswrapper[4689]: I1210 12:21:39.894381 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/23db100b-85ac-48e2-834b-741c9d94cf8f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "23db100b-85ac-48e2-834b-741c9d94cf8f" (UID: "23db100b-85ac-48e2-834b-741c9d94cf8f"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 10 12:21:39 crc kubenswrapper[4689]: I1210 12:21:39.905260 4689 scope.go:117] "RemoveContainer" containerID="0dcfa0584d0a814175c312f2a9bfb894ef7e593663a4490743fca644d604dd0a" Dec 10 12:21:39 crc kubenswrapper[4689]: I1210 12:21:39.922070 4689 scope.go:117] "RemoveContainer" containerID="7434631ed32ff2547826b0b814e6ecf84c9617541576bbd66c3d9f6f5f5f9a80" Dec 10 12:21:39 crc kubenswrapper[4689]: I1210 12:21:39.958216 4689 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/23db100b-85ac-48e2-834b-741c9d94cf8f-utilities\") on node \"crc\" DevicePath \"\"" Dec 10 12:21:39 crc kubenswrapper[4689]: I1210 12:21:39.958248 4689 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zlgk4\" (UniqueName: \"kubernetes.io/projected/2ebd1829-eb7a-4128-ab9d-79a1f75fa39e-kube-api-access-zlgk4\") on node \"crc\" DevicePath \"\"" Dec 10 12:21:39 crc kubenswrapper[4689]: I1210 12:21:39.958263 4689 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/88d80bbe-a9ab-4c91-b0b2-485e106dd150-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Dec 10 12:21:39 crc kubenswrapper[4689]: I1210 12:21:39.958276 4689 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2ebd1829-eb7a-4128-ab9d-79a1f75fa39e-utilities\") on node \"crc\" DevicePath \"\"" Dec 10 12:21:39 crc kubenswrapper[4689]: I1210 12:21:39.958288 4689 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/88d80bbe-a9ab-4c91-b0b2-485e106dd150-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Dec 10 12:21:39 crc kubenswrapper[4689]: I1210 12:21:39.958300 4689 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tdscs\" (UniqueName: \"kubernetes.io/projected/23db100b-85ac-48e2-834b-741c9d94cf8f-kube-api-access-tdscs\") on node \"crc\" DevicePath \"\"" Dec 10 12:21:39 crc kubenswrapper[4689]: I1210 12:21:39.958311 4689 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b77tc\" (UniqueName: \"kubernetes.io/projected/88d80bbe-a9ab-4c91-b0b2-485e106dd150-kube-api-access-b77tc\") on node \"crc\" DevicePath \"\"" Dec 10 12:21:39 crc kubenswrapper[4689]: I1210 12:21:39.958322 4689 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/23db100b-85ac-48e2-834b-741c9d94cf8f-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 10 12:21:39 crc kubenswrapper[4689]: I1210 12:21:39.975782 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2ebd1829-eb7a-4128-ab9d-79a1f75fa39e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2ebd1829-eb7a-4128-ab9d-79a1f75fa39e" (UID: "2ebd1829-eb7a-4128-ab9d-79a1f75fa39e"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 10 12:21:40 crc kubenswrapper[4689]: I1210 12:21:40.059730 4689 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2ebd1829-eb7a-4128-ab9d-79a1f75fa39e-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 10 12:21:40 crc kubenswrapper[4689]: I1210 12:21:40.102631 4689 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-9wlmb"] Dec 10 12:21:40 crc kubenswrapper[4689]: I1210 12:21:40.116454 4689 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-9wlmb"] Dec 10 12:21:40 crc kubenswrapper[4689]: I1210 12:21:40.122173 4689 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-9hpt8"] Dec 10 12:21:40 crc kubenswrapper[4689]: I1210 12:21:40.125021 4689 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-9hpt8"] Dec 10 12:21:40 crc kubenswrapper[4689]: I1210 12:21:40.140201 4689 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-6whbg"] Dec 10 12:21:40 crc kubenswrapper[4689]: I1210 12:21:40.149740 4689 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-6whbg"] Dec 10 12:21:40 crc kubenswrapper[4689]: I1210 12:21:40.157198 4689 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-rkh5c"] Dec 10 12:21:40 crc kubenswrapper[4689]: I1210 12:21:40.161274 4689 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-rkh5c"] Dec 10 12:21:40 crc kubenswrapper[4689]: I1210 12:21:40.505818 4689 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="23db100b-85ac-48e2-834b-741c9d94cf8f" path="/var/lib/kubelet/pods/23db100b-85ac-48e2-834b-741c9d94cf8f/volumes" Dec 10 12:21:40 crc kubenswrapper[4689]: I1210 12:21:40.506625 4689 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2ebd1829-eb7a-4128-ab9d-79a1f75fa39e" path="/var/lib/kubelet/pods/2ebd1829-eb7a-4128-ab9d-79a1f75fa39e/volumes" Dec 10 12:21:40 crc kubenswrapper[4689]: I1210 12:21:40.507428 4689 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="88d80bbe-a9ab-4c91-b0b2-485e106dd150" path="/var/lib/kubelet/pods/88d80bbe-a9ab-4c91-b0b2-485e106dd150/volumes" Dec 10 12:21:40 crc kubenswrapper[4689]: I1210 12:21:40.508594 4689 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c8d409aa-8f6d-4ed5-816c-e572e371d425" path="/var/lib/kubelet/pods/c8d409aa-8f6d-4ed5-816c-e572e371d425/volumes" Dec 10 12:21:40 crc kubenswrapper[4689]: I1210 12:21:40.509396 4689 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d51b889b-7485-4c32-84de-3ddfd7ce23e9" path="/var/lib/kubelet/pods/d51b889b-7485-4c32-84de-3ddfd7ce23e9/volumes" Dec 10 12:21:40 crc kubenswrapper[4689]: I1210 12:21:40.674679 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-jqvbb"] Dec 10 12:21:40 crc kubenswrapper[4689]: E1210 12:21:40.674914 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2ebd1829-eb7a-4128-ab9d-79a1f75fa39e" containerName="registry-server" Dec 10 12:21:40 crc kubenswrapper[4689]: I1210 12:21:40.674931 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="2ebd1829-eb7a-4128-ab9d-79a1f75fa39e" containerName="registry-server" Dec 10 
12:21:40 crc kubenswrapper[4689]: E1210 12:21:40.674945 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2ebd1829-eb7a-4128-ab9d-79a1f75fa39e" containerName="extract-utilities" Dec 10 12:21:40 crc kubenswrapper[4689]: I1210 12:21:40.674956 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="2ebd1829-eb7a-4128-ab9d-79a1f75fa39e" containerName="extract-utilities" Dec 10 12:21:40 crc kubenswrapper[4689]: E1210 12:21:40.674985 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c8d409aa-8f6d-4ed5-816c-e572e371d425" containerName="extract-content" Dec 10 12:21:40 crc kubenswrapper[4689]: I1210 12:21:40.674996 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="c8d409aa-8f6d-4ed5-816c-e572e371d425" containerName="extract-content" Dec 10 12:21:40 crc kubenswrapper[4689]: E1210 12:21:40.675009 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c8d409aa-8f6d-4ed5-816c-e572e371d425" containerName="registry-server" Dec 10 12:21:40 crc kubenswrapper[4689]: I1210 12:21:40.675017 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="c8d409aa-8f6d-4ed5-816c-e572e371d425" containerName="registry-server" Dec 10 12:21:40 crc kubenswrapper[4689]: E1210 12:21:40.675027 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="23db100b-85ac-48e2-834b-741c9d94cf8f" containerName="extract-content" Dec 10 12:21:40 crc kubenswrapper[4689]: I1210 12:21:40.675035 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="23db100b-85ac-48e2-834b-741c9d94cf8f" containerName="extract-content" Dec 10 12:21:40 crc kubenswrapper[4689]: E1210 12:21:40.675046 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="88d80bbe-a9ab-4c91-b0b2-485e106dd150" containerName="marketplace-operator" Dec 10 12:21:40 crc kubenswrapper[4689]: I1210 12:21:40.675055 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="88d80bbe-a9ab-4c91-b0b2-485e106dd150" containerName="marketplace-operator" Dec 10 12:21:40 crc kubenswrapper[4689]: E1210 12:21:40.675065 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c8d409aa-8f6d-4ed5-816c-e572e371d425" containerName="extract-utilities" Dec 10 12:21:40 crc kubenswrapper[4689]: I1210 12:21:40.675072 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="c8d409aa-8f6d-4ed5-816c-e572e371d425" containerName="extract-utilities" Dec 10 12:21:40 crc kubenswrapper[4689]: E1210 12:21:40.675085 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="88d80bbe-a9ab-4c91-b0b2-485e106dd150" containerName="marketplace-operator" Dec 10 12:21:40 crc kubenswrapper[4689]: I1210 12:21:40.675093 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="88d80bbe-a9ab-4c91-b0b2-485e106dd150" containerName="marketplace-operator" Dec 10 12:21:40 crc kubenswrapper[4689]: E1210 12:21:40.675103 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="23db100b-85ac-48e2-834b-741c9d94cf8f" containerName="extract-utilities" Dec 10 12:21:40 crc kubenswrapper[4689]: I1210 12:21:40.675112 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="23db100b-85ac-48e2-834b-741c9d94cf8f" containerName="extract-utilities" Dec 10 12:21:40 crc kubenswrapper[4689]: E1210 12:21:40.675122 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2ebd1829-eb7a-4128-ab9d-79a1f75fa39e" containerName="extract-content" Dec 10 12:21:40 crc kubenswrapper[4689]: I1210 12:21:40.675149 4689 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="2ebd1829-eb7a-4128-ab9d-79a1f75fa39e" containerName="extract-content" Dec 10 12:21:40 crc kubenswrapper[4689]: E1210 12:21:40.675160 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="23db100b-85ac-48e2-834b-741c9d94cf8f" containerName="registry-server" Dec 10 12:21:40 crc kubenswrapper[4689]: I1210 12:21:40.675168 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="23db100b-85ac-48e2-834b-741c9d94cf8f" containerName="registry-server" Dec 10 12:21:40 crc kubenswrapper[4689]: E1210 12:21:40.675177 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d51b889b-7485-4c32-84de-3ddfd7ce23e9" containerName="extract-content" Dec 10 12:21:40 crc kubenswrapper[4689]: I1210 12:21:40.675186 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="d51b889b-7485-4c32-84de-3ddfd7ce23e9" containerName="extract-content" Dec 10 12:21:40 crc kubenswrapper[4689]: E1210 12:21:40.675200 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d51b889b-7485-4c32-84de-3ddfd7ce23e9" containerName="extract-utilities" Dec 10 12:21:40 crc kubenswrapper[4689]: I1210 12:21:40.675209 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="d51b889b-7485-4c32-84de-3ddfd7ce23e9" containerName="extract-utilities" Dec 10 12:21:40 crc kubenswrapper[4689]: E1210 12:21:40.675219 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d51b889b-7485-4c32-84de-3ddfd7ce23e9" containerName="registry-server" Dec 10 12:21:40 crc kubenswrapper[4689]: I1210 12:21:40.675227 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="d51b889b-7485-4c32-84de-3ddfd7ce23e9" containerName="registry-server" Dec 10 12:21:40 crc kubenswrapper[4689]: I1210 12:21:40.675348 4689 memory_manager.go:354] "RemoveStaleState removing state" podUID="2ebd1829-eb7a-4128-ab9d-79a1f75fa39e" containerName="registry-server" Dec 10 12:21:40 crc kubenswrapper[4689]: I1210 12:21:40.675366 4689 memory_manager.go:354] "RemoveStaleState removing state" podUID="c8d409aa-8f6d-4ed5-816c-e572e371d425" containerName="registry-server" Dec 10 12:21:40 crc kubenswrapper[4689]: I1210 12:21:40.675374 4689 memory_manager.go:354] "RemoveStaleState removing state" podUID="88d80bbe-a9ab-4c91-b0b2-485e106dd150" containerName="marketplace-operator" Dec 10 12:21:40 crc kubenswrapper[4689]: I1210 12:21:40.675390 4689 memory_manager.go:354] "RemoveStaleState removing state" podUID="23db100b-85ac-48e2-834b-741c9d94cf8f" containerName="registry-server" Dec 10 12:21:40 crc kubenswrapper[4689]: I1210 12:21:40.675398 4689 memory_manager.go:354] "RemoveStaleState removing state" podUID="d51b889b-7485-4c32-84de-3ddfd7ce23e9" containerName="registry-server" Dec 10 12:21:40 crc kubenswrapper[4689]: I1210 12:21:40.675631 4689 memory_manager.go:354] "RemoveStaleState removing state" podUID="88d80bbe-a9ab-4c91-b0b2-485e106dd150" containerName="marketplace-operator" Dec 10 12:21:40 crc kubenswrapper[4689]: I1210 12:21:40.676269 4689 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-jqvbb" Dec 10 12:21:40 crc kubenswrapper[4689]: I1210 12:21:40.678928 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Dec 10 12:21:40 crc kubenswrapper[4689]: I1210 12:21:40.683005 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-jqvbb"] Dec 10 12:21:40 crc kubenswrapper[4689]: I1210 12:21:40.768422 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-497jk\" (UniqueName: \"kubernetes.io/projected/4cace366-d916-470c-9cb6-090b7ed04bcb-kube-api-access-497jk\") pod \"certified-operators-jqvbb\" (UID: \"4cace366-d916-470c-9cb6-090b7ed04bcb\") " pod="openshift-marketplace/certified-operators-jqvbb" Dec 10 12:21:40 crc kubenswrapper[4689]: I1210 12:21:40.769175 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4cace366-d916-470c-9cb6-090b7ed04bcb-catalog-content\") pod \"certified-operators-jqvbb\" (UID: \"4cace366-d916-470c-9cb6-090b7ed04bcb\") " pod="openshift-marketplace/certified-operators-jqvbb" Dec 10 12:21:40 crc kubenswrapper[4689]: I1210 12:21:40.769294 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4cace366-d916-470c-9cb6-090b7ed04bcb-utilities\") pod \"certified-operators-jqvbb\" (UID: \"4cace366-d916-470c-9cb6-090b7ed04bcb\") " pod="openshift-marketplace/certified-operators-jqvbb" Dec 10 12:21:40 crc kubenswrapper[4689]: I1210 12:21:40.871044 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-497jk\" (UniqueName: \"kubernetes.io/projected/4cace366-d916-470c-9cb6-090b7ed04bcb-kube-api-access-497jk\") pod \"certified-operators-jqvbb\" (UID: \"4cace366-d916-470c-9cb6-090b7ed04bcb\") " pod="openshift-marketplace/certified-operators-jqvbb" Dec 10 12:21:40 crc kubenswrapper[4689]: I1210 12:21:40.871134 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4cace366-d916-470c-9cb6-090b7ed04bcb-catalog-content\") pod \"certified-operators-jqvbb\" (UID: \"4cace366-d916-470c-9cb6-090b7ed04bcb\") " pod="openshift-marketplace/certified-operators-jqvbb" Dec 10 12:21:40 crc kubenswrapper[4689]: I1210 12:21:40.871303 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4cace366-d916-470c-9cb6-090b7ed04bcb-utilities\") pod \"certified-operators-jqvbb\" (UID: \"4cace366-d916-470c-9cb6-090b7ed04bcb\") " pod="openshift-marketplace/certified-operators-jqvbb" Dec 10 12:21:40 crc kubenswrapper[4689]: I1210 12:21:40.872397 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4cace366-d916-470c-9cb6-090b7ed04bcb-utilities\") pod \"certified-operators-jqvbb\" (UID: \"4cace366-d916-470c-9cb6-090b7ed04bcb\") " pod="openshift-marketplace/certified-operators-jqvbb" Dec 10 12:21:40 crc kubenswrapper[4689]: I1210 12:21:40.873438 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4cace366-d916-470c-9cb6-090b7ed04bcb-catalog-content\") pod \"certified-operators-jqvbb\" (UID: 
\"4cace366-d916-470c-9cb6-090b7ed04bcb\") " pod="openshift-marketplace/certified-operators-jqvbb" Dec 10 12:21:40 crc kubenswrapper[4689]: I1210 12:21:40.901789 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-497jk\" (UniqueName: \"kubernetes.io/projected/4cace366-d916-470c-9cb6-090b7ed04bcb-kube-api-access-497jk\") pod \"certified-operators-jqvbb\" (UID: \"4cace366-d916-470c-9cb6-090b7ed04bcb\") " pod="openshift-marketplace/certified-operators-jqvbb" Dec 10 12:21:40 crc kubenswrapper[4689]: I1210 12:21:40.994087 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-jqvbb" Dec 10 12:21:41 crc kubenswrapper[4689]: I1210 12:21:41.225248 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-jqvbb"] Dec 10 12:21:41 crc kubenswrapper[4689]: W1210 12:21:41.231219 4689 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4cace366_d916_470c_9cb6_090b7ed04bcb.slice/crio-4110cfef44e51074ee72471a99e80ec6cd59441b4ffff7f0bd73a403dc056bff WatchSource:0}: Error finding container 4110cfef44e51074ee72471a99e80ec6cd59441b4ffff7f0bd73a403dc056bff: Status 404 returned error can't find the container with id 4110cfef44e51074ee72471a99e80ec6cd59441b4ffff7f0bd73a403dc056bff Dec 10 12:21:41 crc kubenswrapper[4689]: I1210 12:21:41.658819 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-vj842"] Dec 10 12:21:41 crc kubenswrapper[4689]: I1210 12:21:41.660583 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-vj842" Dec 10 12:21:41 crc kubenswrapper[4689]: I1210 12:21:41.662365 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Dec 10 12:21:41 crc kubenswrapper[4689]: I1210 12:21:41.676172 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-vj842"] Dec 10 12:21:41 crc kubenswrapper[4689]: I1210 12:21:41.782363 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f8b91596-d292-4ef0-bb0d-92cb3224c20c-catalog-content\") pod \"community-operators-vj842\" (UID: \"f8b91596-d292-4ef0-bb0d-92cb3224c20c\") " pod="openshift-marketplace/community-operators-vj842" Dec 10 12:21:41 crc kubenswrapper[4689]: I1210 12:21:41.782410 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m9lvh\" (UniqueName: \"kubernetes.io/projected/f8b91596-d292-4ef0-bb0d-92cb3224c20c-kube-api-access-m9lvh\") pod \"community-operators-vj842\" (UID: \"f8b91596-d292-4ef0-bb0d-92cb3224c20c\") " pod="openshift-marketplace/community-operators-vj842" Dec 10 12:21:41 crc kubenswrapper[4689]: I1210 12:21:41.782472 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f8b91596-d292-4ef0-bb0d-92cb3224c20c-utilities\") pod \"community-operators-vj842\" (UID: \"f8b91596-d292-4ef0-bb0d-92cb3224c20c\") " pod="openshift-marketplace/community-operators-vj842" Dec 10 12:21:41 crc kubenswrapper[4689]: I1210 12:21:41.790509 4689 generic.go:334] "Generic (PLEG): container finished" podID="4cace366-d916-470c-9cb6-090b7ed04bcb" 
containerID="9c6ffe09a52b9615b577d2c63de91920909dfc492833e4b6dcdb82761508dc82" exitCode=0 Dec 10 12:21:41 crc kubenswrapper[4689]: I1210 12:21:41.791082 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jqvbb" event={"ID":"4cace366-d916-470c-9cb6-090b7ed04bcb","Type":"ContainerDied","Data":"9c6ffe09a52b9615b577d2c63de91920909dfc492833e4b6dcdb82761508dc82"} Dec 10 12:21:41 crc kubenswrapper[4689]: I1210 12:21:41.791190 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jqvbb" event={"ID":"4cace366-d916-470c-9cb6-090b7ed04bcb","Type":"ContainerStarted","Data":"4110cfef44e51074ee72471a99e80ec6cd59441b4ffff7f0bd73a403dc056bff"} Dec 10 12:21:41 crc kubenswrapper[4689]: I1210 12:21:41.884120 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f8b91596-d292-4ef0-bb0d-92cb3224c20c-utilities\") pod \"community-operators-vj842\" (UID: \"f8b91596-d292-4ef0-bb0d-92cb3224c20c\") " pod="openshift-marketplace/community-operators-vj842" Dec 10 12:21:41 crc kubenswrapper[4689]: I1210 12:21:41.884222 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f8b91596-d292-4ef0-bb0d-92cb3224c20c-catalog-content\") pod \"community-operators-vj842\" (UID: \"f8b91596-d292-4ef0-bb0d-92cb3224c20c\") " pod="openshift-marketplace/community-operators-vj842" Dec 10 12:21:41 crc kubenswrapper[4689]: I1210 12:21:41.884257 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m9lvh\" (UniqueName: \"kubernetes.io/projected/f8b91596-d292-4ef0-bb0d-92cb3224c20c-kube-api-access-m9lvh\") pod \"community-operators-vj842\" (UID: \"f8b91596-d292-4ef0-bb0d-92cb3224c20c\") " pod="openshift-marketplace/community-operators-vj842" Dec 10 12:21:41 crc kubenswrapper[4689]: I1210 12:21:41.884531 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f8b91596-d292-4ef0-bb0d-92cb3224c20c-utilities\") pod \"community-operators-vj842\" (UID: \"f8b91596-d292-4ef0-bb0d-92cb3224c20c\") " pod="openshift-marketplace/community-operators-vj842" Dec 10 12:21:41 crc kubenswrapper[4689]: I1210 12:21:41.885437 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f8b91596-d292-4ef0-bb0d-92cb3224c20c-catalog-content\") pod \"community-operators-vj842\" (UID: \"f8b91596-d292-4ef0-bb0d-92cb3224c20c\") " pod="openshift-marketplace/community-operators-vj842" Dec 10 12:21:41 crc kubenswrapper[4689]: I1210 12:21:41.917444 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m9lvh\" (UniqueName: \"kubernetes.io/projected/f8b91596-d292-4ef0-bb0d-92cb3224c20c-kube-api-access-m9lvh\") pod \"community-operators-vj842\" (UID: \"f8b91596-d292-4ef0-bb0d-92cb3224c20c\") " pod="openshift-marketplace/community-operators-vj842" Dec 10 12:21:41 crc kubenswrapper[4689]: I1210 12:21:41.972749 4689 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-vj842" Dec 10 12:21:42 crc kubenswrapper[4689]: I1210 12:21:42.261321 4689 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-755cf4cbd7-tmq8q"] Dec 10 12:21:42 crc kubenswrapper[4689]: I1210 12:21:42.261823 4689 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-755cf4cbd7-tmq8q" podUID="9878739d-e4ed-446c-82e9-3bf95dee5f97" containerName="controller-manager" containerID="cri-o://35b93cecd19f61b8e87c3654028b144e9091df235a1c6c407fdf7223051bc828" gracePeriod=30 Dec 10 12:21:42 crc kubenswrapper[4689]: I1210 12:21:42.271732 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-vj842"] Dec 10 12:21:42 crc kubenswrapper[4689]: I1210 12:21:42.600729 4689 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-755cf4cbd7-tmq8q" Dec 10 12:21:42 crc kubenswrapper[4689]: I1210 12:21:42.692543 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9878739d-e4ed-446c-82e9-3bf95dee5f97-config\") pod \"9878739d-e4ed-446c-82e9-3bf95dee5f97\" (UID: \"9878739d-e4ed-446c-82e9-3bf95dee5f97\") " Dec 10 12:21:42 crc kubenswrapper[4689]: I1210 12:21:42.692590 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/9878739d-e4ed-446c-82e9-3bf95dee5f97-proxy-ca-bundles\") pod \"9878739d-e4ed-446c-82e9-3bf95dee5f97\" (UID: \"9878739d-e4ed-446c-82e9-3bf95dee5f97\") " Dec 10 12:21:42 crc kubenswrapper[4689]: I1210 12:21:42.692632 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9878739d-e4ed-446c-82e9-3bf95dee5f97-serving-cert\") pod \"9878739d-e4ed-446c-82e9-3bf95dee5f97\" (UID: \"9878739d-e4ed-446c-82e9-3bf95dee5f97\") " Dec 10 12:21:42 crc kubenswrapper[4689]: I1210 12:21:42.692666 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t9jxd\" (UniqueName: \"kubernetes.io/projected/9878739d-e4ed-446c-82e9-3bf95dee5f97-kube-api-access-t9jxd\") pod \"9878739d-e4ed-446c-82e9-3bf95dee5f97\" (UID: \"9878739d-e4ed-446c-82e9-3bf95dee5f97\") " Dec 10 12:21:42 crc kubenswrapper[4689]: I1210 12:21:42.692704 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9878739d-e4ed-446c-82e9-3bf95dee5f97-client-ca\") pod \"9878739d-e4ed-446c-82e9-3bf95dee5f97\" (UID: \"9878739d-e4ed-446c-82e9-3bf95dee5f97\") " Dec 10 12:21:42 crc kubenswrapper[4689]: I1210 12:21:42.693451 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9878739d-e4ed-446c-82e9-3bf95dee5f97-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "9878739d-e4ed-446c-82e9-3bf95dee5f97" (UID: "9878739d-e4ed-446c-82e9-3bf95dee5f97"). InnerVolumeSpecName "proxy-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 12:21:42 crc kubenswrapper[4689]: I1210 12:21:42.693473 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9878739d-e4ed-446c-82e9-3bf95dee5f97-client-ca" (OuterVolumeSpecName: "client-ca") pod "9878739d-e4ed-446c-82e9-3bf95dee5f97" (UID: "9878739d-e4ed-446c-82e9-3bf95dee5f97"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 12:21:42 crc kubenswrapper[4689]: I1210 12:21:42.693505 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9878739d-e4ed-446c-82e9-3bf95dee5f97-config" (OuterVolumeSpecName: "config") pod "9878739d-e4ed-446c-82e9-3bf95dee5f97" (UID: "9878739d-e4ed-446c-82e9-3bf95dee5f97"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 12:21:42 crc kubenswrapper[4689]: I1210 12:21:42.697598 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9878739d-e4ed-446c-82e9-3bf95dee5f97-kube-api-access-t9jxd" (OuterVolumeSpecName: "kube-api-access-t9jxd") pod "9878739d-e4ed-446c-82e9-3bf95dee5f97" (UID: "9878739d-e4ed-446c-82e9-3bf95dee5f97"). InnerVolumeSpecName "kube-api-access-t9jxd". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 12:21:42 crc kubenswrapper[4689]: I1210 12:21:42.697647 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9878739d-e4ed-446c-82e9-3bf95dee5f97-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9878739d-e4ed-446c-82e9-3bf95dee5f97" (UID: "9878739d-e4ed-446c-82e9-3bf95dee5f97"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 12:21:42 crc kubenswrapper[4689]: I1210 12:21:42.794239 4689 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9878739d-e4ed-446c-82e9-3bf95dee5f97-client-ca\") on node \"crc\" DevicePath \"\"" Dec 10 12:21:42 crc kubenswrapper[4689]: I1210 12:21:42.794285 4689 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9878739d-e4ed-446c-82e9-3bf95dee5f97-config\") on node \"crc\" DevicePath \"\"" Dec 10 12:21:42 crc kubenswrapper[4689]: I1210 12:21:42.794301 4689 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/9878739d-e4ed-446c-82e9-3bf95dee5f97-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Dec 10 12:21:42 crc kubenswrapper[4689]: I1210 12:21:42.794339 4689 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9878739d-e4ed-446c-82e9-3bf95dee5f97-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 10 12:21:42 crc kubenswrapper[4689]: I1210 12:21:42.794358 4689 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t9jxd\" (UniqueName: \"kubernetes.io/projected/9878739d-e4ed-446c-82e9-3bf95dee5f97-kube-api-access-t9jxd\") on node \"crc\" DevicePath \"\"" Dec 10 12:21:42 crc kubenswrapper[4689]: I1210 12:21:42.803256 4689 generic.go:334] "Generic (PLEG): container finished" podID="9878739d-e4ed-446c-82e9-3bf95dee5f97" containerID="35b93cecd19f61b8e87c3654028b144e9091df235a1c6c407fdf7223051bc828" exitCode=0 Dec 10 12:21:42 crc kubenswrapper[4689]: I1210 12:21:42.803361 4689 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-755cf4cbd7-tmq8q" Dec 10 12:21:42 crc kubenswrapper[4689]: I1210 12:21:42.803401 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-755cf4cbd7-tmq8q" event={"ID":"9878739d-e4ed-446c-82e9-3bf95dee5f97","Type":"ContainerDied","Data":"35b93cecd19f61b8e87c3654028b144e9091df235a1c6c407fdf7223051bc828"} Dec 10 12:21:42 crc kubenswrapper[4689]: I1210 12:21:42.803882 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-755cf4cbd7-tmq8q" event={"ID":"9878739d-e4ed-446c-82e9-3bf95dee5f97","Type":"ContainerDied","Data":"5bca3b5a69087e2624449bcff3c5f95f7b3ab3661dcaf0eb488e4341ffc6dea4"} Dec 10 12:21:42 crc kubenswrapper[4689]: I1210 12:21:42.803906 4689 scope.go:117] "RemoveContainer" containerID="35b93cecd19f61b8e87c3654028b144e9091df235a1c6c407fdf7223051bc828" Dec 10 12:21:42 crc kubenswrapper[4689]: I1210 12:21:42.805710 4689 generic.go:334] "Generic (PLEG): container finished" podID="f8b91596-d292-4ef0-bb0d-92cb3224c20c" containerID="cb7303a09c051c88587b1a6250dd96a42347a625efba22489a66fb5bec949d7a" exitCode=0 Dec 10 12:21:42 crc kubenswrapper[4689]: I1210 12:21:42.805869 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vj842" event={"ID":"f8b91596-d292-4ef0-bb0d-92cb3224c20c","Type":"ContainerDied","Data":"cb7303a09c051c88587b1a6250dd96a42347a625efba22489a66fb5bec949d7a"} Dec 10 12:21:42 crc kubenswrapper[4689]: I1210 12:21:42.805961 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vj842" event={"ID":"f8b91596-d292-4ef0-bb0d-92cb3224c20c","Type":"ContainerStarted","Data":"99c40ebed7ee8bfabc190ae69fa319a3822ad3970e6a5d0c6cd9c8232a645022"} Dec 10 12:21:42 crc kubenswrapper[4689]: I1210 12:21:42.809051 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jqvbb" event={"ID":"4cace366-d916-470c-9cb6-090b7ed04bcb","Type":"ContainerStarted","Data":"4da0853146910c44f836c80492bc5322baa545fb65c3d3f67d859877e3bc9cc4"} Dec 10 12:21:42 crc kubenswrapper[4689]: I1210 12:21:42.857235 4689 scope.go:117] "RemoveContainer" containerID="35b93cecd19f61b8e87c3654028b144e9091df235a1c6c407fdf7223051bc828" Dec 10 12:21:42 crc kubenswrapper[4689]: E1210 12:21:42.857753 4689 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"35b93cecd19f61b8e87c3654028b144e9091df235a1c6c407fdf7223051bc828\": container with ID starting with 35b93cecd19f61b8e87c3654028b144e9091df235a1c6c407fdf7223051bc828 not found: ID does not exist" containerID="35b93cecd19f61b8e87c3654028b144e9091df235a1c6c407fdf7223051bc828" Dec 10 12:21:42 crc kubenswrapper[4689]: I1210 12:21:42.857788 4689 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"35b93cecd19f61b8e87c3654028b144e9091df235a1c6c407fdf7223051bc828"} err="failed to get container status \"35b93cecd19f61b8e87c3654028b144e9091df235a1c6c407fdf7223051bc828\": rpc error: code = NotFound desc = could not find container \"35b93cecd19f61b8e87c3654028b144e9091df235a1c6c407fdf7223051bc828\": container with ID starting with 35b93cecd19f61b8e87c3654028b144e9091df235a1c6c407fdf7223051bc828 not found: ID does not exist" Dec 10 12:21:42 crc kubenswrapper[4689]: I1210 12:21:42.875117 4689 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-controller-manager/controller-manager-755cf4cbd7-tmq8q"] Dec 10 12:21:42 crc kubenswrapper[4689]: I1210 12:21:42.878392 4689 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-755cf4cbd7-tmq8q"] Dec 10 12:21:43 crc kubenswrapper[4689]: I1210 12:21:43.465386 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-9w8ct"] Dec 10 12:21:43 crc kubenswrapper[4689]: E1210 12:21:43.465702 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9878739d-e4ed-446c-82e9-3bf95dee5f97" containerName="controller-manager" Dec 10 12:21:43 crc kubenswrapper[4689]: I1210 12:21:43.465721 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="9878739d-e4ed-446c-82e9-3bf95dee5f97" containerName="controller-manager" Dec 10 12:21:43 crc kubenswrapper[4689]: I1210 12:21:43.465881 4689 memory_manager.go:354] "RemoveStaleState removing state" podUID="9878739d-e4ed-446c-82e9-3bf95dee5f97" containerName="controller-manager" Dec 10 12:21:43 crc kubenswrapper[4689]: I1210 12:21:43.467083 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9w8ct" Dec 10 12:21:43 crc kubenswrapper[4689]: I1210 12:21:43.471440 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Dec 10 12:21:43 crc kubenswrapper[4689]: I1210 12:21:43.475057 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-9w8ct"] Dec 10 12:21:43 crc kubenswrapper[4689]: I1210 12:21:43.605216 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2063bbc8-0509-4956-8dc9-84e8469be8f9-utilities\") pod \"redhat-marketplace-9w8ct\" (UID: \"2063bbc8-0509-4956-8dc9-84e8469be8f9\") " pod="openshift-marketplace/redhat-marketplace-9w8ct" Dec 10 12:21:43 crc kubenswrapper[4689]: I1210 12:21:43.605337 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2063bbc8-0509-4956-8dc9-84e8469be8f9-catalog-content\") pod \"redhat-marketplace-9w8ct\" (UID: \"2063bbc8-0509-4956-8dc9-84e8469be8f9\") " pod="openshift-marketplace/redhat-marketplace-9w8ct" Dec 10 12:21:43 crc kubenswrapper[4689]: I1210 12:21:43.605404 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gtbnb\" (UniqueName: \"kubernetes.io/projected/2063bbc8-0509-4956-8dc9-84e8469be8f9-kube-api-access-gtbnb\") pod \"redhat-marketplace-9w8ct\" (UID: \"2063bbc8-0509-4956-8dc9-84e8469be8f9\") " pod="openshift-marketplace/redhat-marketplace-9w8ct" Dec 10 12:21:43 crc kubenswrapper[4689]: I1210 12:21:43.706886 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gtbnb\" (UniqueName: \"kubernetes.io/projected/2063bbc8-0509-4956-8dc9-84e8469be8f9-kube-api-access-gtbnb\") pod \"redhat-marketplace-9w8ct\" (UID: \"2063bbc8-0509-4956-8dc9-84e8469be8f9\") " pod="openshift-marketplace/redhat-marketplace-9w8ct" Dec 10 12:21:43 crc kubenswrapper[4689]: I1210 12:21:43.707067 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2063bbc8-0509-4956-8dc9-84e8469be8f9-utilities\") pod \"redhat-marketplace-9w8ct\" (UID: 
\"2063bbc8-0509-4956-8dc9-84e8469be8f9\") " pod="openshift-marketplace/redhat-marketplace-9w8ct" Dec 10 12:21:43 crc kubenswrapper[4689]: I1210 12:21:43.707178 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2063bbc8-0509-4956-8dc9-84e8469be8f9-catalog-content\") pod \"redhat-marketplace-9w8ct\" (UID: \"2063bbc8-0509-4956-8dc9-84e8469be8f9\") " pod="openshift-marketplace/redhat-marketplace-9w8ct" Dec 10 12:21:43 crc kubenswrapper[4689]: I1210 12:21:43.707524 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2063bbc8-0509-4956-8dc9-84e8469be8f9-utilities\") pod \"redhat-marketplace-9w8ct\" (UID: \"2063bbc8-0509-4956-8dc9-84e8469be8f9\") " pod="openshift-marketplace/redhat-marketplace-9w8ct" Dec 10 12:21:43 crc kubenswrapper[4689]: I1210 12:21:43.707903 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2063bbc8-0509-4956-8dc9-84e8469be8f9-catalog-content\") pod \"redhat-marketplace-9w8ct\" (UID: \"2063bbc8-0509-4956-8dc9-84e8469be8f9\") " pod="openshift-marketplace/redhat-marketplace-9w8ct" Dec 10 12:21:43 crc kubenswrapper[4689]: I1210 12:21:43.738524 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gtbnb\" (UniqueName: \"kubernetes.io/projected/2063bbc8-0509-4956-8dc9-84e8469be8f9-kube-api-access-gtbnb\") pod \"redhat-marketplace-9w8ct\" (UID: \"2063bbc8-0509-4956-8dc9-84e8469be8f9\") " pod="openshift-marketplace/redhat-marketplace-9w8ct" Dec 10 12:21:43 crc kubenswrapper[4689]: I1210 12:21:43.759683 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-bfc4b4dfc-n4lz8"] Dec 10 12:21:43 crc kubenswrapper[4689]: I1210 12:21:43.760669 4689 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-bfc4b4dfc-n4lz8" Dec 10 12:21:43 crc kubenswrapper[4689]: I1210 12:21:43.762819 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Dec 10 12:21:43 crc kubenswrapper[4689]: I1210 12:21:43.762937 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Dec 10 12:21:43 crc kubenswrapper[4689]: I1210 12:21:43.762888 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Dec 10 12:21:43 crc kubenswrapper[4689]: I1210 12:21:43.763008 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Dec 10 12:21:43 crc kubenswrapper[4689]: I1210 12:21:43.769624 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-bfc4b4dfc-n4lz8"] Dec 10 12:21:43 crc kubenswrapper[4689]: I1210 12:21:43.770314 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Dec 10 12:21:43 crc kubenswrapper[4689]: I1210 12:21:43.771559 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Dec 10 12:21:43 crc kubenswrapper[4689]: I1210 12:21:43.776995 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Dec 10 12:21:43 crc kubenswrapper[4689]: I1210 12:21:43.785719 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9w8ct" Dec 10 12:21:43 crc kubenswrapper[4689]: I1210 12:21:43.815727 4689 generic.go:334] "Generic (PLEG): container finished" podID="4cace366-d916-470c-9cb6-090b7ed04bcb" containerID="4da0853146910c44f836c80492bc5322baa545fb65c3d3f67d859877e3bc9cc4" exitCode=0 Dec 10 12:21:43 crc kubenswrapper[4689]: I1210 12:21:43.815960 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jqvbb" event={"ID":"4cace366-d916-470c-9cb6-090b7ed04bcb","Type":"ContainerDied","Data":"4da0853146910c44f836c80492bc5322baa545fb65c3d3f67d859877e3bc9cc4"} Dec 10 12:21:43 crc kubenswrapper[4689]: I1210 12:21:43.909247 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/ec7f0f4f-328d-45f3-857a-8cb2267933a8-proxy-ca-bundles\") pod \"controller-manager-bfc4b4dfc-n4lz8\" (UID: \"ec7f0f4f-328d-45f3-857a-8cb2267933a8\") " pod="openshift-controller-manager/controller-manager-bfc4b4dfc-n4lz8" Dec 10 12:21:43 crc kubenswrapper[4689]: I1210 12:21:43.909300 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ec7f0f4f-328d-45f3-857a-8cb2267933a8-client-ca\") pod \"controller-manager-bfc4b4dfc-n4lz8\" (UID: \"ec7f0f4f-328d-45f3-857a-8cb2267933a8\") " pod="openshift-controller-manager/controller-manager-bfc4b4dfc-n4lz8" Dec 10 12:21:43 crc kubenswrapper[4689]: I1210 12:21:43.909319 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ec7f0f4f-328d-45f3-857a-8cb2267933a8-serving-cert\") pod \"controller-manager-bfc4b4dfc-n4lz8\" (UID: 
\"ec7f0f4f-328d-45f3-857a-8cb2267933a8\") " pod="openshift-controller-manager/controller-manager-bfc4b4dfc-n4lz8" Dec 10 12:21:43 crc kubenswrapper[4689]: I1210 12:21:43.909346 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ec7f0f4f-328d-45f3-857a-8cb2267933a8-config\") pod \"controller-manager-bfc4b4dfc-n4lz8\" (UID: \"ec7f0f4f-328d-45f3-857a-8cb2267933a8\") " pod="openshift-controller-manager/controller-manager-bfc4b4dfc-n4lz8" Dec 10 12:21:43 crc kubenswrapper[4689]: I1210 12:21:43.909380 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dblhg\" (UniqueName: \"kubernetes.io/projected/ec7f0f4f-328d-45f3-857a-8cb2267933a8-kube-api-access-dblhg\") pod \"controller-manager-bfc4b4dfc-n4lz8\" (UID: \"ec7f0f4f-328d-45f3-857a-8cb2267933a8\") " pod="openshift-controller-manager/controller-manager-bfc4b4dfc-n4lz8" Dec 10 12:21:44 crc kubenswrapper[4689]: I1210 12:21:44.010559 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/ec7f0f4f-328d-45f3-857a-8cb2267933a8-proxy-ca-bundles\") pod \"controller-manager-bfc4b4dfc-n4lz8\" (UID: \"ec7f0f4f-328d-45f3-857a-8cb2267933a8\") " pod="openshift-controller-manager/controller-manager-bfc4b4dfc-n4lz8" Dec 10 12:21:44 crc kubenswrapper[4689]: I1210 12:21:44.010622 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ec7f0f4f-328d-45f3-857a-8cb2267933a8-client-ca\") pod \"controller-manager-bfc4b4dfc-n4lz8\" (UID: \"ec7f0f4f-328d-45f3-857a-8cb2267933a8\") " pod="openshift-controller-manager/controller-manager-bfc4b4dfc-n4lz8" Dec 10 12:21:44 crc kubenswrapper[4689]: I1210 12:21:44.010643 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ec7f0f4f-328d-45f3-857a-8cb2267933a8-serving-cert\") pod \"controller-manager-bfc4b4dfc-n4lz8\" (UID: \"ec7f0f4f-328d-45f3-857a-8cb2267933a8\") " pod="openshift-controller-manager/controller-manager-bfc4b4dfc-n4lz8" Dec 10 12:21:44 crc kubenswrapper[4689]: I1210 12:21:44.010680 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ec7f0f4f-328d-45f3-857a-8cb2267933a8-config\") pod \"controller-manager-bfc4b4dfc-n4lz8\" (UID: \"ec7f0f4f-328d-45f3-857a-8cb2267933a8\") " pod="openshift-controller-manager/controller-manager-bfc4b4dfc-n4lz8" Dec 10 12:21:44 crc kubenswrapper[4689]: I1210 12:21:44.010729 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dblhg\" (UniqueName: \"kubernetes.io/projected/ec7f0f4f-328d-45f3-857a-8cb2267933a8-kube-api-access-dblhg\") pod \"controller-manager-bfc4b4dfc-n4lz8\" (UID: \"ec7f0f4f-328d-45f3-857a-8cb2267933a8\") " pod="openshift-controller-manager/controller-manager-bfc4b4dfc-n4lz8" Dec 10 12:21:44 crc kubenswrapper[4689]: I1210 12:21:44.011771 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/ec7f0f4f-328d-45f3-857a-8cb2267933a8-proxy-ca-bundles\") pod \"controller-manager-bfc4b4dfc-n4lz8\" (UID: \"ec7f0f4f-328d-45f3-857a-8cb2267933a8\") " pod="openshift-controller-manager/controller-manager-bfc4b4dfc-n4lz8" Dec 10 12:21:44 crc kubenswrapper[4689]: I1210 12:21:44.011829 
4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ec7f0f4f-328d-45f3-857a-8cb2267933a8-client-ca\") pod \"controller-manager-bfc4b4dfc-n4lz8\" (UID: \"ec7f0f4f-328d-45f3-857a-8cb2267933a8\") " pod="openshift-controller-manager/controller-manager-bfc4b4dfc-n4lz8" Dec 10 12:21:44 crc kubenswrapper[4689]: I1210 12:21:44.011939 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ec7f0f4f-328d-45f3-857a-8cb2267933a8-config\") pod \"controller-manager-bfc4b4dfc-n4lz8\" (UID: \"ec7f0f4f-328d-45f3-857a-8cb2267933a8\") " pod="openshift-controller-manager/controller-manager-bfc4b4dfc-n4lz8" Dec 10 12:21:44 crc kubenswrapper[4689]: I1210 12:21:44.018121 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ec7f0f4f-328d-45f3-857a-8cb2267933a8-serving-cert\") pod \"controller-manager-bfc4b4dfc-n4lz8\" (UID: \"ec7f0f4f-328d-45f3-857a-8cb2267933a8\") " pod="openshift-controller-manager/controller-manager-bfc4b4dfc-n4lz8" Dec 10 12:21:44 crc kubenswrapper[4689]: I1210 12:21:44.039612 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dblhg\" (UniqueName: \"kubernetes.io/projected/ec7f0f4f-328d-45f3-857a-8cb2267933a8-kube-api-access-dblhg\") pod \"controller-manager-bfc4b4dfc-n4lz8\" (UID: \"ec7f0f4f-328d-45f3-857a-8cb2267933a8\") " pod="openshift-controller-manager/controller-manager-bfc4b4dfc-n4lz8" Dec 10 12:21:44 crc kubenswrapper[4689]: I1210 12:21:44.062354 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-8844c"] Dec 10 12:21:44 crc kubenswrapper[4689]: I1210 12:21:44.065569 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-8844c" Dec 10 12:21:44 crc kubenswrapper[4689]: I1210 12:21:44.068436 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Dec 10 12:21:44 crc kubenswrapper[4689]: I1210 12:21:44.075454 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-8844c"] Dec 10 12:21:44 crc kubenswrapper[4689]: I1210 12:21:44.088993 4689 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-bfc4b4dfc-n4lz8" Dec 10 12:21:44 crc kubenswrapper[4689]: I1210 12:21:44.170367 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-9w8ct"] Dec 10 12:21:44 crc kubenswrapper[4689]: I1210 12:21:44.212730 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6c80fa47-76de-4730-aa6b-85bab40be273-catalog-content\") pod \"redhat-operators-8844c\" (UID: \"6c80fa47-76de-4730-aa6b-85bab40be273\") " pod="openshift-marketplace/redhat-operators-8844c" Dec 10 12:21:44 crc kubenswrapper[4689]: I1210 12:21:44.212806 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rzmtq\" (UniqueName: \"kubernetes.io/projected/6c80fa47-76de-4730-aa6b-85bab40be273-kube-api-access-rzmtq\") pod \"redhat-operators-8844c\" (UID: \"6c80fa47-76de-4730-aa6b-85bab40be273\") " pod="openshift-marketplace/redhat-operators-8844c" Dec 10 12:21:44 crc kubenswrapper[4689]: I1210 12:21:44.212848 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6c80fa47-76de-4730-aa6b-85bab40be273-utilities\") pod \"redhat-operators-8844c\" (UID: \"6c80fa47-76de-4730-aa6b-85bab40be273\") " pod="openshift-marketplace/redhat-operators-8844c" Dec 10 12:21:44 crc kubenswrapper[4689]: I1210 12:21:44.293185 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-bfc4b4dfc-n4lz8"] Dec 10 12:21:44 crc kubenswrapper[4689]: W1210 12:21:44.301747 4689 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podec7f0f4f_328d_45f3_857a_8cb2267933a8.slice/crio-b29975f7e1a9751a3fdd7f5cb7dc638717ee606caf6ee1255596e3997757f100 WatchSource:0}: Error finding container b29975f7e1a9751a3fdd7f5cb7dc638717ee606caf6ee1255596e3997757f100: Status 404 returned error can't find the container with id b29975f7e1a9751a3fdd7f5cb7dc638717ee606caf6ee1255596e3997757f100 Dec 10 12:21:44 crc kubenswrapper[4689]: I1210 12:21:44.314348 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6c80fa47-76de-4730-aa6b-85bab40be273-catalog-content\") pod \"redhat-operators-8844c\" (UID: \"6c80fa47-76de-4730-aa6b-85bab40be273\") " pod="openshift-marketplace/redhat-operators-8844c" Dec 10 12:21:44 crc kubenswrapper[4689]: I1210 12:21:44.314387 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rzmtq\" (UniqueName: \"kubernetes.io/projected/6c80fa47-76de-4730-aa6b-85bab40be273-kube-api-access-rzmtq\") pod \"redhat-operators-8844c\" (UID: \"6c80fa47-76de-4730-aa6b-85bab40be273\") " pod="openshift-marketplace/redhat-operators-8844c" Dec 10 12:21:44 crc kubenswrapper[4689]: I1210 12:21:44.314422 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6c80fa47-76de-4730-aa6b-85bab40be273-utilities\") pod \"redhat-operators-8844c\" (UID: \"6c80fa47-76de-4730-aa6b-85bab40be273\") " pod="openshift-marketplace/redhat-operators-8844c" Dec 10 12:21:44 crc kubenswrapper[4689]: I1210 12:21:44.314800 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" 
(UniqueName: \"kubernetes.io/empty-dir/6c80fa47-76de-4730-aa6b-85bab40be273-utilities\") pod \"redhat-operators-8844c\" (UID: \"6c80fa47-76de-4730-aa6b-85bab40be273\") " pod="openshift-marketplace/redhat-operators-8844c" Dec 10 12:21:44 crc kubenswrapper[4689]: I1210 12:21:44.314950 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6c80fa47-76de-4730-aa6b-85bab40be273-catalog-content\") pod \"redhat-operators-8844c\" (UID: \"6c80fa47-76de-4730-aa6b-85bab40be273\") " pod="openshift-marketplace/redhat-operators-8844c" Dec 10 12:21:44 crc kubenswrapper[4689]: I1210 12:21:44.337425 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rzmtq\" (UniqueName: \"kubernetes.io/projected/6c80fa47-76de-4730-aa6b-85bab40be273-kube-api-access-rzmtq\") pod \"redhat-operators-8844c\" (UID: \"6c80fa47-76de-4730-aa6b-85bab40be273\") " pod="openshift-marketplace/redhat-operators-8844c" Dec 10 12:21:44 crc kubenswrapper[4689]: I1210 12:21:44.382185 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-8844c" Dec 10 12:21:44 crc kubenswrapper[4689]: I1210 12:21:44.504861 4689 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9878739d-e4ed-446c-82e9-3bf95dee5f97" path="/var/lib/kubelet/pods/9878739d-e4ed-446c-82e9-3bf95dee5f97/volumes" Dec 10 12:21:44 crc kubenswrapper[4689]: I1210 12:21:44.597230 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-8844c"] Dec 10 12:21:44 crc kubenswrapper[4689]: W1210 12:21:44.607940 4689 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6c80fa47_76de_4730_aa6b_85bab40be273.slice/crio-5c43a4270f589a237cfee370c0f59e5dc92677dec4a713bb62b89809c175fa0d WatchSource:0}: Error finding container 5c43a4270f589a237cfee370c0f59e5dc92677dec4a713bb62b89809c175fa0d: Status 404 returned error can't find the container with id 5c43a4270f589a237cfee370c0f59e5dc92677dec4a713bb62b89809c175fa0d Dec 10 12:21:44 crc kubenswrapper[4689]: I1210 12:21:44.822379 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-bfc4b4dfc-n4lz8" event={"ID":"ec7f0f4f-328d-45f3-857a-8cb2267933a8","Type":"ContainerStarted","Data":"b29975f7e1a9751a3fdd7f5cb7dc638717ee606caf6ee1255596e3997757f100"} Dec 10 12:21:44 crc kubenswrapper[4689]: I1210 12:21:44.824496 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9w8ct" event={"ID":"2063bbc8-0509-4956-8dc9-84e8469be8f9","Type":"ContainerStarted","Data":"c5aa417122001a51fa69062113fdfdeefe2196e56487dae89d23d02a3f520587"} Dec 10 12:21:44 crc kubenswrapper[4689]: I1210 12:21:44.826011 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8844c" event={"ID":"6c80fa47-76de-4730-aa6b-85bab40be273","Type":"ContainerStarted","Data":"5c43a4270f589a237cfee370c0f59e5dc92677dec4a713bb62b89809c175fa0d"} Dec 10 12:21:45 crc kubenswrapper[4689]: I1210 12:21:45.833243 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-bfc4b4dfc-n4lz8" event={"ID":"ec7f0f4f-328d-45f3-857a-8cb2267933a8","Type":"ContainerStarted","Data":"215775d07e80986d36c59b5353d2c990202ed7f913baf2f6f9e4dd98218a5b74"} Dec 10 12:21:45 crc kubenswrapper[4689]: I1210 12:21:45.836723 4689 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-bfc4b4dfc-n4lz8" Dec 10 12:21:45 crc kubenswrapper[4689]: I1210 12:21:45.837376 4689 generic.go:334] "Generic (PLEG): container finished" podID="f8b91596-d292-4ef0-bb0d-92cb3224c20c" containerID="5ea7ec0bfc8e1fedf72c70c71d215253a8c6a067b5a1f503f6a2c294ad5f071a" exitCode=0 Dec 10 12:21:45 crc kubenswrapper[4689]: I1210 12:21:45.837447 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vj842" event={"ID":"f8b91596-d292-4ef0-bb0d-92cb3224c20c","Type":"ContainerDied","Data":"5ea7ec0bfc8e1fedf72c70c71d215253a8c6a067b5a1f503f6a2c294ad5f071a"} Dec 10 12:21:45 crc kubenswrapper[4689]: I1210 12:21:45.839695 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jqvbb" event={"ID":"4cace366-d916-470c-9cb6-090b7ed04bcb","Type":"ContainerStarted","Data":"b638ce37035f162fa5247f0ab3b11723ac902f0eeb4af097c04096fec936801a"} Dec 10 12:21:45 crc kubenswrapper[4689]: I1210 12:21:45.841382 4689 generic.go:334] "Generic (PLEG): container finished" podID="2063bbc8-0509-4956-8dc9-84e8469be8f9" containerID="7aade3522d4e3627502e34585489d2bab94d64ed85297e42443ed9e0ab483899" exitCode=0 Dec 10 12:21:45 crc kubenswrapper[4689]: I1210 12:21:45.841433 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9w8ct" event={"ID":"2063bbc8-0509-4956-8dc9-84e8469be8f9","Type":"ContainerDied","Data":"7aade3522d4e3627502e34585489d2bab94d64ed85297e42443ed9e0ab483899"} Dec 10 12:21:45 crc kubenswrapper[4689]: I1210 12:21:45.843135 4689 generic.go:334] "Generic (PLEG): container finished" podID="6c80fa47-76de-4730-aa6b-85bab40be273" containerID="80f2507a6a36f3ff90114c39765b0f941783f95e9e3a706956e0c686ebd0e49b" exitCode=0 Dec 10 12:21:45 crc kubenswrapper[4689]: I1210 12:21:45.843173 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8844c" event={"ID":"6c80fa47-76de-4730-aa6b-85bab40be273","Type":"ContainerDied","Data":"80f2507a6a36f3ff90114c39765b0f941783f95e9e3a706956e0c686ebd0e49b"} Dec 10 12:21:45 crc kubenswrapper[4689]: I1210 12:21:45.845383 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-bfc4b4dfc-n4lz8" Dec 10 12:21:45 crc kubenswrapper[4689]: I1210 12:21:45.861849 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-bfc4b4dfc-n4lz8" podStartSLOduration=3.861826139 podStartE2EDuration="3.861826139s" podCreationTimestamp="2025-12-10 12:21:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 12:21:45.860488504 +0000 UTC m=+373.648569642" watchObservedRunningTime="2025-12-10 12:21:45.861826139 +0000 UTC m=+373.649907317" Dec 10 12:21:45 crc kubenswrapper[4689]: I1210 12:21:45.950061 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-jqvbb" podStartSLOduration=2.727148399 podStartE2EDuration="5.950041543s" podCreationTimestamp="2025-12-10 12:21:40 +0000 UTC" firstStartedPulling="2025-12-10 12:21:41.792413102 +0000 UTC m=+369.580494240" lastFinishedPulling="2025-12-10 12:21:45.015306246 +0000 UTC m=+372.803387384" observedRunningTime="2025-12-10 12:21:45.913069536 +0000 UTC m=+373.701150684" 
watchObservedRunningTime="2025-12-10 12:21:45.950041543 +0000 UTC m=+373.738122681" Dec 10 12:21:46 crc kubenswrapper[4689]: I1210 12:21:46.851737 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9w8ct" event={"ID":"2063bbc8-0509-4956-8dc9-84e8469be8f9","Type":"ContainerStarted","Data":"82ec5a278be3643038759eb75443e5c6ed9a2766e4ce5f5e94dbdcf8a4deb5b0"} Dec 10 12:21:46 crc kubenswrapper[4689]: I1210 12:21:46.853924 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8844c" event={"ID":"6c80fa47-76de-4730-aa6b-85bab40be273","Type":"ContainerStarted","Data":"7a7b7e6d9689e96a7537ce818f57bbe99235afddea8d737dc51d8cacebdff22b"} Dec 10 12:21:46 crc kubenswrapper[4689]: I1210 12:21:46.855632 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vj842" event={"ID":"f8b91596-d292-4ef0-bb0d-92cb3224c20c","Type":"ContainerStarted","Data":"21a0fdda967da9a3cf15082efdbad262b13b846d896afd19262c435ca7c05f48"} Dec 10 12:21:46 crc kubenswrapper[4689]: I1210 12:21:46.889158 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-vj842" podStartSLOduration=2.191867144 podStartE2EDuration="5.889141934s" podCreationTimestamp="2025-12-10 12:21:41 +0000 UTC" firstStartedPulling="2025-12-10 12:21:42.808744722 +0000 UTC m=+370.596825860" lastFinishedPulling="2025-12-10 12:21:46.506019512 +0000 UTC m=+374.294100650" observedRunningTime="2025-12-10 12:21:46.887267395 +0000 UTC m=+374.675348533" watchObservedRunningTime="2025-12-10 12:21:46.889141934 +0000 UTC m=+374.677223072" Dec 10 12:21:47 crc kubenswrapper[4689]: I1210 12:21:47.862593 4689 generic.go:334] "Generic (PLEG): container finished" podID="2063bbc8-0509-4956-8dc9-84e8469be8f9" containerID="82ec5a278be3643038759eb75443e5c6ed9a2766e4ce5f5e94dbdcf8a4deb5b0" exitCode=0 Dec 10 12:21:47 crc kubenswrapper[4689]: I1210 12:21:47.862963 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9w8ct" event={"ID":"2063bbc8-0509-4956-8dc9-84e8469be8f9","Type":"ContainerDied","Data":"82ec5a278be3643038759eb75443e5c6ed9a2766e4ce5f5e94dbdcf8a4deb5b0"} Dec 10 12:21:47 crc kubenswrapper[4689]: I1210 12:21:47.866309 4689 generic.go:334] "Generic (PLEG): container finished" podID="6c80fa47-76de-4730-aa6b-85bab40be273" containerID="7a7b7e6d9689e96a7537ce818f57bbe99235afddea8d737dc51d8cacebdff22b" exitCode=0 Dec 10 12:21:47 crc kubenswrapper[4689]: I1210 12:21:47.867103 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8844c" event={"ID":"6c80fa47-76de-4730-aa6b-85bab40be273","Type":"ContainerDied","Data":"7a7b7e6d9689e96a7537ce818f57bbe99235afddea8d737dc51d8cacebdff22b"} Dec 10 12:21:49 crc kubenswrapper[4689]: I1210 12:21:49.880679 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9w8ct" event={"ID":"2063bbc8-0509-4956-8dc9-84e8469be8f9","Type":"ContainerStarted","Data":"522f204e4f470aef3a61cb14c95007a400f406ddbef1cb1ee3f36fbee0155fb1"} Dec 10 12:21:49 crc kubenswrapper[4689]: I1210 12:21:49.883491 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8844c" event={"ID":"6c80fa47-76de-4730-aa6b-85bab40be273","Type":"ContainerStarted","Data":"4b411cdb94f7621277171d34f513ce6c081acd5f891bc041ed5cb3127b61cee2"} Dec 10 12:21:49 crc kubenswrapper[4689]: I1210 12:21:49.904294 4689 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-9w8ct" podStartSLOduration=3.734763776 podStartE2EDuration="6.904275467s" podCreationTimestamp="2025-12-10 12:21:43 +0000 UTC" firstStartedPulling="2025-12-10 12:21:45.84294297 +0000 UTC m=+373.631024118" lastFinishedPulling="2025-12-10 12:21:49.012454681 +0000 UTC m=+376.800535809" observedRunningTime="2025-12-10 12:21:49.901509966 +0000 UTC m=+377.689591114" watchObservedRunningTime="2025-12-10 12:21:49.904275467 +0000 UTC m=+377.692356605" Dec 10 12:21:49 crc kubenswrapper[4689]: I1210 12:21:49.926175 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-8844c" podStartSLOduration=2.870897641 podStartE2EDuration="5.926152524s" podCreationTimestamp="2025-12-10 12:21:44 +0000 UTC" firstStartedPulling="2025-12-10 12:21:45.844697575 +0000 UTC m=+373.632778713" lastFinishedPulling="2025-12-10 12:21:48.899952458 +0000 UTC m=+376.688033596" observedRunningTime="2025-12-10 12:21:49.921681239 +0000 UTC m=+377.709762377" watchObservedRunningTime="2025-12-10 12:21:49.926152524 +0000 UTC m=+377.714233692" Dec 10 12:21:50 crc kubenswrapper[4689]: I1210 12:21:50.994780 4689 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-jqvbb" Dec 10 12:21:50 crc kubenswrapper[4689]: I1210 12:21:50.994821 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-jqvbb" Dec 10 12:21:51 crc kubenswrapper[4689]: I1210 12:21:51.052437 4689 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-jqvbb" Dec 10 12:21:51 crc kubenswrapper[4689]: I1210 12:21:51.930687 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-jqvbb" Dec 10 12:21:51 crc kubenswrapper[4689]: I1210 12:21:51.973744 4689 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-vj842" Dec 10 12:21:51 crc kubenswrapper[4689]: I1210 12:21:51.973828 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-vj842" Dec 10 12:21:52 crc kubenswrapper[4689]: I1210 12:21:52.016789 4689 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-vj842" Dec 10 12:21:52 crc kubenswrapper[4689]: I1210 12:21:52.949162 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-vj842" Dec 10 12:21:53 crc kubenswrapper[4689]: I1210 12:21:53.786142 4689 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-9w8ct" Dec 10 12:21:53 crc kubenswrapper[4689]: I1210 12:21:53.786189 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-9w8ct" Dec 10 12:21:53 crc kubenswrapper[4689]: I1210 12:21:53.829230 4689 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-9w8ct" Dec 10 12:21:54 crc kubenswrapper[4689]: I1210 12:21:54.383368 4689 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-8844c" Dec 10 12:21:54 crc kubenswrapper[4689]: I1210 12:21:54.383420 4689 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-8844c" Dec 10 12:21:55 crc kubenswrapper[4689]: I1210 12:21:55.423706 4689 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-8844c" podUID="6c80fa47-76de-4730-aa6b-85bab40be273" containerName="registry-server" probeResult="failure" output=< Dec 10 12:21:55 crc kubenswrapper[4689]: timeout: failed to connect service ":50051" within 1s Dec 10 12:21:55 crc kubenswrapper[4689]: > Dec 10 12:21:58 crc kubenswrapper[4689]: I1210 12:21:58.857724 4689 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-image-registry/image-registry-697d97f7c8-jw2qz" podUID="dab0bc5b-ec14-46fa-9304-ab1b3b556eeb" containerName="registry" containerID="cri-o://4ddc971fc29a4fcac417581f06e1c93c14abd78908b22ecb1d9eae3ff9d0478b" gracePeriod=30 Dec 10 12:22:01 crc kubenswrapper[4689]: I1210 12:22:01.947997 4689 generic.go:334] "Generic (PLEG): container finished" podID="dab0bc5b-ec14-46fa-9304-ab1b3b556eeb" containerID="4ddc971fc29a4fcac417581f06e1c93c14abd78908b22ecb1d9eae3ff9d0478b" exitCode=0 Dec 10 12:22:01 crc kubenswrapper[4689]: I1210 12:22:01.948084 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-jw2qz" event={"ID":"dab0bc5b-ec14-46fa-9304-ab1b3b556eeb","Type":"ContainerDied","Data":"4ddc971fc29a4fcac417581f06e1c93c14abd78908b22ecb1d9eae3ff9d0478b"} Dec 10 12:22:02 crc kubenswrapper[4689]: I1210 12:22:02.191993 4689 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-jw2qz" Dec 10 12:22:02 crc kubenswrapper[4689]: I1210 12:22:02.350761 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/dab0bc5b-ec14-46fa-9304-ab1b3b556eeb-installation-pull-secrets\") pod \"dab0bc5b-ec14-46fa-9304-ab1b3b556eeb\" (UID: \"dab0bc5b-ec14-46fa-9304-ab1b3b556eeb\") " Dec 10 12:22:02 crc kubenswrapper[4689]: I1210 12:22:02.350870 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/dab0bc5b-ec14-46fa-9304-ab1b3b556eeb-registry-certificates\") pod \"dab0bc5b-ec14-46fa-9304-ab1b3b556eeb\" (UID: \"dab0bc5b-ec14-46fa-9304-ab1b3b556eeb\") " Dec 10 12:22:02 crc kubenswrapper[4689]: I1210 12:22:02.350911 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/dab0bc5b-ec14-46fa-9304-ab1b3b556eeb-registry-tls\") pod \"dab0bc5b-ec14-46fa-9304-ab1b3b556eeb\" (UID: \"dab0bc5b-ec14-46fa-9304-ab1b3b556eeb\") " Dec 10 12:22:02 crc kubenswrapper[4689]: I1210 12:22:02.350943 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/dab0bc5b-ec14-46fa-9304-ab1b3b556eeb-trusted-ca\") pod \"dab0bc5b-ec14-46fa-9304-ab1b3b556eeb\" (UID: \"dab0bc5b-ec14-46fa-9304-ab1b3b556eeb\") " Dec 10 12:22:02 crc kubenswrapper[4689]: I1210 12:22:02.350997 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/dab0bc5b-ec14-46fa-9304-ab1b3b556eeb-ca-trust-extracted\") pod \"dab0bc5b-ec14-46fa-9304-ab1b3b556eeb\" (UID: \"dab0bc5b-ec14-46fa-9304-ab1b3b556eeb\") " Dec 10 12:22:02 crc kubenswrapper[4689]: I1210 12:22:02.351028 4689 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pxtnw\" (UniqueName: \"kubernetes.io/projected/dab0bc5b-ec14-46fa-9304-ab1b3b556eeb-kube-api-access-pxtnw\") pod \"dab0bc5b-ec14-46fa-9304-ab1b3b556eeb\" (UID: \"dab0bc5b-ec14-46fa-9304-ab1b3b556eeb\") " Dec 10 12:22:02 crc kubenswrapper[4689]: I1210 12:22:02.351206 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"dab0bc5b-ec14-46fa-9304-ab1b3b556eeb\" (UID: \"dab0bc5b-ec14-46fa-9304-ab1b3b556eeb\") " Dec 10 12:22:02 crc kubenswrapper[4689]: I1210 12:22:02.351573 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/dab0bc5b-ec14-46fa-9304-ab1b3b556eeb-bound-sa-token\") pod \"dab0bc5b-ec14-46fa-9304-ab1b3b556eeb\" (UID: \"dab0bc5b-ec14-46fa-9304-ab1b3b556eeb\") " Dec 10 12:22:02 crc kubenswrapper[4689]: I1210 12:22:02.352036 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dab0bc5b-ec14-46fa-9304-ab1b3b556eeb-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "dab0bc5b-ec14-46fa-9304-ab1b3b556eeb" (UID: "dab0bc5b-ec14-46fa-9304-ab1b3b556eeb"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 12:22:02 crc kubenswrapper[4689]: I1210 12:22:02.352768 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dab0bc5b-ec14-46fa-9304-ab1b3b556eeb-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "dab0bc5b-ec14-46fa-9304-ab1b3b556eeb" (UID: "dab0bc5b-ec14-46fa-9304-ab1b3b556eeb"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 12:22:02 crc kubenswrapper[4689]: I1210 12:22:02.356581 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dab0bc5b-ec14-46fa-9304-ab1b3b556eeb-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "dab0bc5b-ec14-46fa-9304-ab1b3b556eeb" (UID: "dab0bc5b-ec14-46fa-9304-ab1b3b556eeb"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 12:22:02 crc kubenswrapper[4689]: I1210 12:22:02.357145 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dab0bc5b-ec14-46fa-9304-ab1b3b556eeb-kube-api-access-pxtnw" (OuterVolumeSpecName: "kube-api-access-pxtnw") pod "dab0bc5b-ec14-46fa-9304-ab1b3b556eeb" (UID: "dab0bc5b-ec14-46fa-9304-ab1b3b556eeb"). InnerVolumeSpecName "kube-api-access-pxtnw". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 12:22:02 crc kubenswrapper[4689]: I1210 12:22:02.357489 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dab0bc5b-ec14-46fa-9304-ab1b3b556eeb-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "dab0bc5b-ec14-46fa-9304-ab1b3b556eeb" (UID: "dab0bc5b-ec14-46fa-9304-ab1b3b556eeb"). InnerVolumeSpecName "registry-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 12:22:02 crc kubenswrapper[4689]: I1210 12:22:02.358803 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dab0bc5b-ec14-46fa-9304-ab1b3b556eeb-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "dab0bc5b-ec14-46fa-9304-ab1b3b556eeb" (UID: "dab0bc5b-ec14-46fa-9304-ab1b3b556eeb"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 12:22:02 crc kubenswrapper[4689]: I1210 12:22:02.363229 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "registry-storage") pod "dab0bc5b-ec14-46fa-9304-ab1b3b556eeb" (UID: "dab0bc5b-ec14-46fa-9304-ab1b3b556eeb"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Dec 10 12:22:02 crc kubenswrapper[4689]: I1210 12:22:02.376754 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dab0bc5b-ec14-46fa-9304-ab1b3b556eeb-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "dab0bc5b-ec14-46fa-9304-ab1b3b556eeb" (UID: "dab0bc5b-ec14-46fa-9304-ab1b3b556eeb"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 10 12:22:02 crc kubenswrapper[4689]: I1210 12:22:02.453009 4689 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/dab0bc5b-ec14-46fa-9304-ab1b3b556eeb-registry-certificates\") on node \"crc\" DevicePath \"\"" Dec 10 12:22:02 crc kubenswrapper[4689]: I1210 12:22:02.453048 4689 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/dab0bc5b-ec14-46fa-9304-ab1b3b556eeb-registry-tls\") on node \"crc\" DevicePath \"\"" Dec 10 12:22:02 crc kubenswrapper[4689]: I1210 12:22:02.453059 4689 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/dab0bc5b-ec14-46fa-9304-ab1b3b556eeb-trusted-ca\") on node \"crc\" DevicePath \"\"" Dec 10 12:22:02 crc kubenswrapper[4689]: I1210 12:22:02.453067 4689 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/dab0bc5b-ec14-46fa-9304-ab1b3b556eeb-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Dec 10 12:22:02 crc kubenswrapper[4689]: I1210 12:22:02.453078 4689 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pxtnw\" (UniqueName: \"kubernetes.io/projected/dab0bc5b-ec14-46fa-9304-ab1b3b556eeb-kube-api-access-pxtnw\") on node \"crc\" DevicePath \"\"" Dec 10 12:22:02 crc kubenswrapper[4689]: I1210 12:22:02.453088 4689 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/dab0bc5b-ec14-46fa-9304-ab1b3b556eeb-bound-sa-token\") on node \"crc\" DevicePath \"\"" Dec 10 12:22:02 crc kubenswrapper[4689]: I1210 12:22:02.453096 4689 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/dab0bc5b-ec14-46fa-9304-ab1b3b556eeb-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Dec 10 12:22:02 crc kubenswrapper[4689]: I1210 12:22:02.956742 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-jw2qz" 
event={"ID":"dab0bc5b-ec14-46fa-9304-ab1b3b556eeb","Type":"ContainerDied","Data":"713cc4566af8a5d40c449ebfea8b7f99a6a9b661c21b5f9f4f2d120e8ad45e33"} Dec 10 12:22:02 crc kubenswrapper[4689]: I1210 12:22:02.956825 4689 scope.go:117] "RemoveContainer" containerID="4ddc971fc29a4fcac417581f06e1c93c14abd78908b22ecb1d9eae3ff9d0478b" Dec 10 12:22:02 crc kubenswrapper[4689]: I1210 12:22:02.957058 4689 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-jw2qz" Dec 10 12:22:02 crc kubenswrapper[4689]: I1210 12:22:02.983737 4689 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-jw2qz"] Dec 10 12:22:02 crc kubenswrapper[4689]: I1210 12:22:02.990795 4689 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-jw2qz"] Dec 10 12:22:03 crc kubenswrapper[4689]: I1210 12:22:03.855188 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-9w8ct" Dec 10 12:22:04 crc kubenswrapper[4689]: I1210 12:22:04.436936 4689 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-8844c" Dec 10 12:22:04 crc kubenswrapper[4689]: I1210 12:22:04.490546 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-8844c" Dec 10 12:22:04 crc kubenswrapper[4689]: I1210 12:22:04.508365 4689 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dab0bc5b-ec14-46fa-9304-ab1b3b556eeb" path="/var/lib/kubelet/pods/dab0bc5b-ec14-46fa-9304-ab1b3b556eeb/volumes" Dec 10 12:22:07 crc kubenswrapper[4689]: I1210 12:22:07.167183 4689 patch_prober.go:28] interesting pod/machine-config-daemon-db6zk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 10 12:22:07 crc kubenswrapper[4689]: I1210 12:22:07.167519 4689 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-db6zk" podUID="a41ebdcd-910f-4669-992d-296e1a92162b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 10 12:22:37 crc kubenswrapper[4689]: I1210 12:22:37.167490 4689 patch_prober.go:28] interesting pod/machine-config-daemon-db6zk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 10 12:22:37 crc kubenswrapper[4689]: I1210 12:22:37.168308 4689 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-db6zk" podUID="a41ebdcd-910f-4669-992d-296e1a92162b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 10 12:22:37 crc kubenswrapper[4689]: I1210 12:22:37.168389 4689 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-db6zk" Dec 10 12:22:37 crc kubenswrapper[4689]: I1210 12:22:37.169345 4689 kuberuntime_manager.go:1027] "Message for Container of pod" 
containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"8aa9c40c0f7115c60e594bf12aea8548002c497e01622aaaefef974497c74a95"} pod="openshift-machine-config-operator/machine-config-daemon-db6zk" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 10 12:22:37 crc kubenswrapper[4689]: I1210 12:22:37.169454 4689 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-db6zk" podUID="a41ebdcd-910f-4669-992d-296e1a92162b" containerName="machine-config-daemon" containerID="cri-o://8aa9c40c0f7115c60e594bf12aea8548002c497e01622aaaefef974497c74a95" gracePeriod=600 Dec 10 12:22:38 crc kubenswrapper[4689]: I1210 12:22:38.201081 4689 generic.go:334] "Generic (PLEG): container finished" podID="a41ebdcd-910f-4669-992d-296e1a92162b" containerID="8aa9c40c0f7115c60e594bf12aea8548002c497e01622aaaefef974497c74a95" exitCode=0 Dec 10 12:22:38 crc kubenswrapper[4689]: I1210 12:22:38.201173 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-db6zk" event={"ID":"a41ebdcd-910f-4669-992d-296e1a92162b","Type":"ContainerDied","Data":"8aa9c40c0f7115c60e594bf12aea8548002c497e01622aaaefef974497c74a95"} Dec 10 12:22:38 crc kubenswrapper[4689]: I1210 12:22:38.201528 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-db6zk" event={"ID":"a41ebdcd-910f-4669-992d-296e1a92162b","Type":"ContainerStarted","Data":"b330b5eb64e249789ebd7ac5911e2ea7484fa0071beacdd7f1062b6742924ad6"} Dec 10 12:22:38 crc kubenswrapper[4689]: I1210 12:22:38.201560 4689 scope.go:117] "RemoveContainer" containerID="5ffc5e2e1cb81aa530273fc5c44b4eb25b760dfffa203e312280e7a1c0150505" Dec 10 12:24:37 crc kubenswrapper[4689]: I1210 12:24:37.167293 4689 patch_prober.go:28] interesting pod/machine-config-daemon-db6zk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 10 12:24:37 crc kubenswrapper[4689]: I1210 12:24:37.168058 4689 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-db6zk" podUID="a41ebdcd-910f-4669-992d-296e1a92162b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 10 12:25:07 crc kubenswrapper[4689]: I1210 12:25:07.167171 4689 patch_prober.go:28] interesting pod/machine-config-daemon-db6zk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 10 12:25:07 crc kubenswrapper[4689]: I1210 12:25:07.167810 4689 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-db6zk" podUID="a41ebdcd-910f-4669-992d-296e1a92162b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 10 12:25:37 crc kubenswrapper[4689]: I1210 12:25:37.167329 4689 patch_prober.go:28] interesting pod/machine-config-daemon-db6zk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 10 12:25:37 crc kubenswrapper[4689]: I1210 12:25:37.168319 4689 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-db6zk" podUID="a41ebdcd-910f-4669-992d-296e1a92162b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 10 12:25:37 crc kubenswrapper[4689]: I1210 12:25:37.168397 4689 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-db6zk" Dec 10 12:25:37 crc kubenswrapper[4689]: I1210 12:25:37.169306 4689 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"b330b5eb64e249789ebd7ac5911e2ea7484fa0071beacdd7f1062b6742924ad6"} pod="openshift-machine-config-operator/machine-config-daemon-db6zk" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 10 12:25:37 crc kubenswrapper[4689]: I1210 12:25:37.169377 4689 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-db6zk" podUID="a41ebdcd-910f-4669-992d-296e1a92162b" containerName="machine-config-daemon" containerID="cri-o://b330b5eb64e249789ebd7ac5911e2ea7484fa0071beacdd7f1062b6742924ad6" gracePeriod=600 Dec 10 12:25:37 crc kubenswrapper[4689]: I1210 12:25:37.458473 4689 generic.go:334] "Generic (PLEG): container finished" podID="a41ebdcd-910f-4669-992d-296e1a92162b" containerID="b330b5eb64e249789ebd7ac5911e2ea7484fa0071beacdd7f1062b6742924ad6" exitCode=0 Dec 10 12:25:37 crc kubenswrapper[4689]: I1210 12:25:37.458598 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-db6zk" event={"ID":"a41ebdcd-910f-4669-992d-296e1a92162b","Type":"ContainerDied","Data":"b330b5eb64e249789ebd7ac5911e2ea7484fa0071beacdd7f1062b6742924ad6"} Dec 10 12:25:37 crc kubenswrapper[4689]: I1210 12:25:37.459007 4689 scope.go:117] "RemoveContainer" containerID="8aa9c40c0f7115c60e594bf12aea8548002c497e01622aaaefef974497c74a95" Dec 10 12:25:38 crc kubenswrapper[4689]: I1210 12:25:38.470663 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-db6zk" event={"ID":"a41ebdcd-910f-4669-992d-296e1a92162b","Type":"ContainerStarted","Data":"1d20271d773850f4d5b2fbccca8a9391a64d881b36edb8636961a3fdb4367ab8"} Dec 10 12:27:37 crc kubenswrapper[4689]: I1210 12:27:37.166603 4689 patch_prober.go:28] interesting pod/machine-config-daemon-db6zk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 10 12:27:37 crc kubenswrapper[4689]: I1210 12:27:37.167252 4689 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-db6zk" podUID="a41ebdcd-910f-4669-992d-296e1a92162b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 10 12:27:47 crc kubenswrapper[4689]: I1210 12:27:47.279603 4689 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["cert-manager/cert-manager-cainjector-7f985d654d-bwjp8"] Dec 10 12:27:47 crc kubenswrapper[4689]: E1210 12:27:47.280250 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dab0bc5b-ec14-46fa-9304-ab1b3b556eeb" containerName="registry" Dec 10 12:27:47 crc kubenswrapper[4689]: I1210 12:27:47.280262 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="dab0bc5b-ec14-46fa-9304-ab1b3b556eeb" containerName="registry" Dec 10 12:27:47 crc kubenswrapper[4689]: I1210 12:27:47.280355 4689 memory_manager.go:354] "RemoveStaleState removing state" podUID="dab0bc5b-ec14-46fa-9304-ab1b3b556eeb" containerName="registry" Dec 10 12:27:47 crc kubenswrapper[4689]: I1210 12:27:47.280695 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-7f985d654d-bwjp8" Dec 10 12:27:47 crc kubenswrapper[4689]: I1210 12:27:47.282138 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt" Dec 10 12:27:47 crc kubenswrapper[4689]: I1210 12:27:47.282569 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt" Dec 10 12:27:47 crc kubenswrapper[4689]: I1210 12:27:47.282573 4689 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-cainjector-dockercfg-rcdtn" Dec 10 12:27:47 crc kubenswrapper[4689]: I1210 12:27:47.294668 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-5b446d88c5-w9qvl"] Dec 10 12:27:47 crc kubenswrapper[4689]: I1210 12:27:47.295245 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-5b446d88c5-w9qvl" Dec 10 12:27:47 crc kubenswrapper[4689]: I1210 12:27:47.297100 4689 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-dockercfg-24bzq" Dec 10 12:27:47 crc kubenswrapper[4689]: I1210 12:27:47.321944 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-5b446d88c5-w9qvl"] Dec 10 12:27:47 crc kubenswrapper[4689]: I1210 12:27:47.328509 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-5655c58dd6-v7z9w"] Dec 10 12:27:47 crc kubenswrapper[4689]: I1210 12:27:47.329337 4689 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-webhook-5655c58dd6-v7z9w" Dec 10 12:27:47 crc kubenswrapper[4689]: I1210 12:27:47.333319 4689 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-webhook-dockercfg-shrsd" Dec 10 12:27:47 crc kubenswrapper[4689]: I1210 12:27:47.337148 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-7f985d654d-bwjp8"] Dec 10 12:27:47 crc kubenswrapper[4689]: I1210 12:27:47.345407 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-5655c58dd6-v7z9w"] Dec 10 12:27:47 crc kubenswrapper[4689]: I1210 12:27:47.394815 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wlbj8\" (UniqueName: \"kubernetes.io/projected/66bcc49d-7116-4b58-b150-65dc5f00678a-kube-api-access-wlbj8\") pod \"cert-manager-webhook-5655c58dd6-v7z9w\" (UID: \"66bcc49d-7116-4b58-b150-65dc5f00678a\") " pod="cert-manager/cert-manager-webhook-5655c58dd6-v7z9w" Dec 10 12:27:47 crc kubenswrapper[4689]: I1210 12:27:47.394914 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dlpvt\" (UniqueName: \"kubernetes.io/projected/9eb2376e-1c8b-4a82-b570-f2f7d8faa957-kube-api-access-dlpvt\") pod \"cert-manager-cainjector-7f985d654d-bwjp8\" (UID: \"9eb2376e-1c8b-4a82-b570-f2f7d8faa957\") " pod="cert-manager/cert-manager-cainjector-7f985d654d-bwjp8" Dec 10 12:27:47 crc kubenswrapper[4689]: I1210 12:27:47.394960 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4467q\" (UniqueName: \"kubernetes.io/projected/d40a3a14-dc06-4eb8-91fb-3d624202b9bb-kube-api-access-4467q\") pod \"cert-manager-5b446d88c5-w9qvl\" (UID: \"d40a3a14-dc06-4eb8-91fb-3d624202b9bb\") " pod="cert-manager/cert-manager-5b446d88c5-w9qvl" Dec 10 12:27:47 crc kubenswrapper[4689]: I1210 12:27:47.496266 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wlbj8\" (UniqueName: \"kubernetes.io/projected/66bcc49d-7116-4b58-b150-65dc5f00678a-kube-api-access-wlbj8\") pod \"cert-manager-webhook-5655c58dd6-v7z9w\" (UID: \"66bcc49d-7116-4b58-b150-65dc5f00678a\") " pod="cert-manager/cert-manager-webhook-5655c58dd6-v7z9w" Dec 10 12:27:47 crc kubenswrapper[4689]: I1210 12:27:47.496333 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dlpvt\" (UniqueName: \"kubernetes.io/projected/9eb2376e-1c8b-4a82-b570-f2f7d8faa957-kube-api-access-dlpvt\") pod \"cert-manager-cainjector-7f985d654d-bwjp8\" (UID: \"9eb2376e-1c8b-4a82-b570-f2f7d8faa957\") " pod="cert-manager/cert-manager-cainjector-7f985d654d-bwjp8" Dec 10 12:27:47 crc kubenswrapper[4689]: I1210 12:27:47.496377 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4467q\" (UniqueName: \"kubernetes.io/projected/d40a3a14-dc06-4eb8-91fb-3d624202b9bb-kube-api-access-4467q\") pod \"cert-manager-5b446d88c5-w9qvl\" (UID: \"d40a3a14-dc06-4eb8-91fb-3d624202b9bb\") " pod="cert-manager/cert-manager-5b446d88c5-w9qvl" Dec 10 12:27:47 crc kubenswrapper[4689]: I1210 12:27:47.518878 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4467q\" (UniqueName: \"kubernetes.io/projected/d40a3a14-dc06-4eb8-91fb-3d624202b9bb-kube-api-access-4467q\") pod \"cert-manager-5b446d88c5-w9qvl\" (UID: \"d40a3a14-dc06-4eb8-91fb-3d624202b9bb\") " 
pod="cert-manager/cert-manager-5b446d88c5-w9qvl" Dec 10 12:27:47 crc kubenswrapper[4689]: I1210 12:27:47.519489 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wlbj8\" (UniqueName: \"kubernetes.io/projected/66bcc49d-7116-4b58-b150-65dc5f00678a-kube-api-access-wlbj8\") pod \"cert-manager-webhook-5655c58dd6-v7z9w\" (UID: \"66bcc49d-7116-4b58-b150-65dc5f00678a\") " pod="cert-manager/cert-manager-webhook-5655c58dd6-v7z9w" Dec 10 12:27:47 crc kubenswrapper[4689]: I1210 12:27:47.526574 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dlpvt\" (UniqueName: \"kubernetes.io/projected/9eb2376e-1c8b-4a82-b570-f2f7d8faa957-kube-api-access-dlpvt\") pod \"cert-manager-cainjector-7f985d654d-bwjp8\" (UID: \"9eb2376e-1c8b-4a82-b570-f2f7d8faa957\") " pod="cert-manager/cert-manager-cainjector-7f985d654d-bwjp8" Dec 10 12:27:47 crc kubenswrapper[4689]: I1210 12:27:47.597276 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-7f985d654d-bwjp8" Dec 10 12:27:47 crc kubenswrapper[4689]: I1210 12:27:47.608168 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-5b446d88c5-w9qvl" Dec 10 12:27:47 crc kubenswrapper[4689]: I1210 12:27:47.644232 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-5655c58dd6-v7z9w" Dec 10 12:27:47 crc kubenswrapper[4689]: I1210 12:27:47.885551 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-5b446d88c5-w9qvl"] Dec 10 12:27:47 crc kubenswrapper[4689]: I1210 12:27:47.895623 4689 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 10 12:27:47 crc kubenswrapper[4689]: I1210 12:27:47.922075 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-5655c58dd6-v7z9w"] Dec 10 12:27:47 crc kubenswrapper[4689]: W1210 12:27:47.926925 4689 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod66bcc49d_7116_4b58_b150_65dc5f00678a.slice/crio-d908cd3450cabdf667b34c9d8d2abd8bfad518636bf993c465cd5b0ac60eb30f WatchSource:0}: Error finding container d908cd3450cabdf667b34c9d8d2abd8bfad518636bf993c465cd5b0ac60eb30f: Status 404 returned error can't find the container with id d908cd3450cabdf667b34c9d8d2abd8bfad518636bf993c465cd5b0ac60eb30f Dec 10 12:27:48 crc kubenswrapper[4689]: I1210 12:27:48.025252 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-7f985d654d-bwjp8"] Dec 10 12:27:48 crc kubenswrapper[4689]: W1210 12:27:48.027628 4689 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9eb2376e_1c8b_4a82_b570_f2f7d8faa957.slice/crio-59a1e0685f3d9e3f8822f4f309afeddb2440de3f86254687f1abf15c8b84e53e WatchSource:0}: Error finding container 59a1e0685f3d9e3f8822f4f309afeddb2440de3f86254687f1abf15c8b84e53e: Status 404 returned error can't find the container with id 59a1e0685f3d9e3f8822f4f309afeddb2440de3f86254687f1abf15c8b84e53e Dec 10 12:27:48 crc kubenswrapper[4689]: I1210 12:27:48.371104 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-7f985d654d-bwjp8" 
event={"ID":"9eb2376e-1c8b-4a82-b570-f2f7d8faa957","Type":"ContainerStarted","Data":"59a1e0685f3d9e3f8822f4f309afeddb2440de3f86254687f1abf15c8b84e53e"} Dec 10 12:27:48 crc kubenswrapper[4689]: I1210 12:27:48.373595 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-5655c58dd6-v7z9w" event={"ID":"66bcc49d-7116-4b58-b150-65dc5f00678a","Type":"ContainerStarted","Data":"d908cd3450cabdf667b34c9d8d2abd8bfad518636bf993c465cd5b0ac60eb30f"} Dec 10 12:27:48 crc kubenswrapper[4689]: I1210 12:27:48.375086 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-5b446d88c5-w9qvl" event={"ID":"d40a3a14-dc06-4eb8-91fb-3d624202b9bb","Type":"ContainerStarted","Data":"1964efb5cb92792fef93d2844f9566db265cc5e4b0d16acb09027638b33fe4b9"} Dec 10 12:27:52 crc kubenswrapper[4689]: I1210 12:27:52.408681 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-5655c58dd6-v7z9w" event={"ID":"66bcc49d-7116-4b58-b150-65dc5f00678a","Type":"ContainerStarted","Data":"302774eeb0634a804932a3bd1429539edbf732b179fa11ddebda6b5d589e3122"} Dec 10 12:27:52 crc kubenswrapper[4689]: I1210 12:27:52.409042 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cert-manager/cert-manager-webhook-5655c58dd6-v7z9w" Dec 10 12:27:52 crc kubenswrapper[4689]: I1210 12:27:52.413748 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-5b446d88c5-w9qvl" event={"ID":"d40a3a14-dc06-4eb8-91fb-3d624202b9bb","Type":"ContainerStarted","Data":"ffe5847eef40a10a56ddf471932ef846b90fc23128a45bfee39e8b2d48881613"} Dec 10 12:27:52 crc kubenswrapper[4689]: I1210 12:27:52.416887 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-7f985d654d-bwjp8" event={"ID":"9eb2376e-1c8b-4a82-b570-f2f7d8faa957","Type":"ContainerStarted","Data":"e684fa23b03a760100b0278ff7e77d8a18e3b9b9fc8918c251f8d164c6917422"} Dec 10 12:27:52 crc kubenswrapper[4689]: I1210 12:27:52.437358 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-5655c58dd6-v7z9w" podStartSLOduration=1.708097829 podStartE2EDuration="5.437326544s" podCreationTimestamp="2025-12-10 12:27:47 +0000 UTC" firstStartedPulling="2025-12-10 12:27:47.929167548 +0000 UTC m=+735.717248686" lastFinishedPulling="2025-12-10 12:27:51.658396213 +0000 UTC m=+739.446477401" observedRunningTime="2025-12-10 12:27:52.428199496 +0000 UTC m=+740.216280674" watchObservedRunningTime="2025-12-10 12:27:52.437326544 +0000 UTC m=+740.225407722" Dec 10 12:27:52 crc kubenswrapper[4689]: I1210 12:27:52.456576 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-5b446d88c5-w9qvl" podStartSLOduration=1.6783230040000001 podStartE2EDuration="5.456538355s" podCreationTimestamp="2025-12-10 12:27:47 +0000 UTC" firstStartedPulling="2025-12-10 12:27:47.895437894 +0000 UTC m=+735.683519032" lastFinishedPulling="2025-12-10 12:27:51.673653245 +0000 UTC m=+739.461734383" observedRunningTime="2025-12-10 12:27:52.450174165 +0000 UTC m=+740.238255333" watchObservedRunningTime="2025-12-10 12:27:52.456538355 +0000 UTC m=+740.244619523" Dec 10 12:27:52 crc kubenswrapper[4689]: I1210 12:27:52.503442 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-7f985d654d-bwjp8" podStartSLOduration=1.8772866320000001 podStartE2EDuration="5.503410167s" podCreationTimestamp="2025-12-10 12:27:47 +0000 UTC" 
firstStartedPulling="2025-12-10 12:27:48.02995827 +0000 UTC m=+735.818039408" lastFinishedPulling="2025-12-10 12:27:51.656081805 +0000 UTC m=+739.444162943" observedRunningTime="2025-12-10 12:27:52.502856643 +0000 UTC m=+740.290937811" watchObservedRunningTime="2025-12-10 12:27:52.503410167 +0000 UTC m=+740.291491355" Dec 10 12:27:57 crc kubenswrapper[4689]: I1210 12:27:57.648202 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-5655c58dd6-v7z9w" Dec 10 12:27:57 crc kubenswrapper[4689]: I1210 12:27:57.734162 4689 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-d5s24"] Dec 10 12:27:57 crc kubenswrapper[4689]: I1210 12:27:57.734494 4689 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-d5s24" podUID="b434fe8e-c4c2-4979-a9b6-8561523c2d9d" containerName="ovn-controller" containerID="cri-o://17d173f3388ef7471022163cbdf3e0a2454e42b1ed621b15512414504b2d9723" gracePeriod=30 Dec 10 12:27:57 crc kubenswrapper[4689]: I1210 12:27:57.734608 4689 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-d5s24" podUID="b434fe8e-c4c2-4979-a9b6-8561523c2d9d" containerName="ovn-acl-logging" containerID="cri-o://eebe28cf16767ec011f1e571cdb5f6babf931f4444e92fccf2f1bb5aa5deeec9" gracePeriod=30 Dec 10 12:27:57 crc kubenswrapper[4689]: I1210 12:27:57.734580 4689 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-d5s24" podUID="b434fe8e-c4c2-4979-a9b6-8561523c2d9d" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://69f1e8960af0a51f61c14bd54dc3d3be7189911242699061b6a5e265bf23da21" gracePeriod=30 Dec 10 12:27:57 crc kubenswrapper[4689]: I1210 12:27:57.734600 4689 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-d5s24" podUID="b434fe8e-c4c2-4979-a9b6-8561523c2d9d" containerName="nbdb" containerID="cri-o://6523d85b2b13f0025f328e66fc54d988a262d66575656c50d792bebc6a422301" gracePeriod=30 Dec 10 12:27:57 crc kubenswrapper[4689]: I1210 12:27:57.734766 4689 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-d5s24" podUID="b434fe8e-c4c2-4979-a9b6-8561523c2d9d" containerName="northd" containerID="cri-o://4faf578e1c2dcab798cd856befbd2dfc061f7e003ead2304d437041d5ee1dd07" gracePeriod=30 Dec 10 12:27:57 crc kubenswrapper[4689]: I1210 12:27:57.734825 4689 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-d5s24" podUID="b434fe8e-c4c2-4979-a9b6-8561523c2d9d" containerName="sbdb" containerID="cri-o://e28b779b8a8693084ab0997decc53b3a8eba61bef24ac90ff251d915f086568c" gracePeriod=30 Dec 10 12:27:57 crc kubenswrapper[4689]: I1210 12:27:57.734612 4689 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-d5s24" podUID="b434fe8e-c4c2-4979-a9b6-8561523c2d9d" containerName="kube-rbac-proxy-node" containerID="cri-o://f4dd21d5fefe31117dd9f13896400256a11840488886f8455dc177c0389e942a" gracePeriod=30 Dec 10 12:27:57 crc kubenswrapper[4689]: I1210 12:27:57.785601 4689 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-d5s24" podUID="b434fe8e-c4c2-4979-a9b6-8561523c2d9d" containerName="ovnkube-controller" 
containerID="cri-o://6c431d5019498721f833fd36391e4a59baa27e8b5605feec6ae956436a5391a2" gracePeriod=30 Dec 10 12:27:58 crc kubenswrapper[4689]: I1210 12:27:58.026156 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-d5s24_b434fe8e-c4c2-4979-a9b6-8561523c2d9d/ovnkube-controller/3.log" Dec 10 12:27:58 crc kubenswrapper[4689]: I1210 12:27:58.028961 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-d5s24_b434fe8e-c4c2-4979-a9b6-8561523c2d9d/ovn-acl-logging/0.log" Dec 10 12:27:58 crc kubenswrapper[4689]: I1210 12:27:58.029751 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-d5s24_b434fe8e-c4c2-4979-a9b6-8561523c2d9d/ovn-controller/0.log" Dec 10 12:27:58 crc kubenswrapper[4689]: I1210 12:27:58.030515 4689 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-d5s24" Dec 10 12:27:58 crc kubenswrapper[4689]: I1210 12:27:58.099750 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-5cmb5"] Dec 10 12:27:58 crc kubenswrapper[4689]: E1210 12:27:58.100092 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b434fe8e-c4c2-4979-a9b6-8561523c2d9d" containerName="kube-rbac-proxy-node" Dec 10 12:27:58 crc kubenswrapper[4689]: I1210 12:27:58.100124 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="b434fe8e-c4c2-4979-a9b6-8561523c2d9d" containerName="kube-rbac-proxy-node" Dec 10 12:27:58 crc kubenswrapper[4689]: E1210 12:27:58.100144 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b434fe8e-c4c2-4979-a9b6-8561523c2d9d" containerName="ovnkube-controller" Dec 10 12:27:58 crc kubenswrapper[4689]: I1210 12:27:58.100156 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="b434fe8e-c4c2-4979-a9b6-8561523c2d9d" containerName="ovnkube-controller" Dec 10 12:27:58 crc kubenswrapper[4689]: E1210 12:27:58.100171 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b434fe8e-c4c2-4979-a9b6-8561523c2d9d" containerName="nbdb" Dec 10 12:27:58 crc kubenswrapper[4689]: I1210 12:27:58.100186 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="b434fe8e-c4c2-4979-a9b6-8561523c2d9d" containerName="nbdb" Dec 10 12:27:58 crc kubenswrapper[4689]: E1210 12:27:58.100208 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b434fe8e-c4c2-4979-a9b6-8561523c2d9d" containerName="ovnkube-controller" Dec 10 12:27:58 crc kubenswrapper[4689]: I1210 12:27:58.100221 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="b434fe8e-c4c2-4979-a9b6-8561523c2d9d" containerName="ovnkube-controller" Dec 10 12:27:58 crc kubenswrapper[4689]: E1210 12:27:58.100240 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b434fe8e-c4c2-4979-a9b6-8561523c2d9d" containerName="ovn-controller" Dec 10 12:27:58 crc kubenswrapper[4689]: I1210 12:27:58.100251 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="b434fe8e-c4c2-4979-a9b6-8561523c2d9d" containerName="ovn-controller" Dec 10 12:27:58 crc kubenswrapper[4689]: E1210 12:27:58.100267 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b434fe8e-c4c2-4979-a9b6-8561523c2d9d" containerName="ovnkube-controller" Dec 10 12:27:58 crc kubenswrapper[4689]: I1210 12:27:58.100279 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="b434fe8e-c4c2-4979-a9b6-8561523c2d9d" containerName="ovnkube-controller" Dec 10 
12:27:58 crc kubenswrapper[4689]: E1210 12:27:58.100291 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b434fe8e-c4c2-4979-a9b6-8561523c2d9d" containerName="northd" Dec 10 12:27:58 crc kubenswrapper[4689]: I1210 12:27:58.100302 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="b434fe8e-c4c2-4979-a9b6-8561523c2d9d" containerName="northd" Dec 10 12:27:58 crc kubenswrapper[4689]: E1210 12:27:58.100317 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b434fe8e-c4c2-4979-a9b6-8561523c2d9d" containerName="sbdb" Dec 10 12:27:58 crc kubenswrapper[4689]: I1210 12:27:58.100328 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="b434fe8e-c4c2-4979-a9b6-8561523c2d9d" containerName="sbdb" Dec 10 12:27:58 crc kubenswrapper[4689]: E1210 12:27:58.100349 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b434fe8e-c4c2-4979-a9b6-8561523c2d9d" containerName="ovn-acl-logging" Dec 10 12:27:58 crc kubenswrapper[4689]: I1210 12:27:58.100360 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="b434fe8e-c4c2-4979-a9b6-8561523c2d9d" containerName="ovn-acl-logging" Dec 10 12:27:58 crc kubenswrapper[4689]: E1210 12:27:58.100376 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b434fe8e-c4c2-4979-a9b6-8561523c2d9d" containerName="kube-rbac-proxy-ovn-metrics" Dec 10 12:27:58 crc kubenswrapper[4689]: I1210 12:27:58.100389 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="b434fe8e-c4c2-4979-a9b6-8561523c2d9d" containerName="kube-rbac-proxy-ovn-metrics" Dec 10 12:27:58 crc kubenswrapper[4689]: E1210 12:27:58.100403 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b434fe8e-c4c2-4979-a9b6-8561523c2d9d" containerName="kubecfg-setup" Dec 10 12:27:58 crc kubenswrapper[4689]: I1210 12:27:58.100414 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="b434fe8e-c4c2-4979-a9b6-8561523c2d9d" containerName="kubecfg-setup" Dec 10 12:27:58 crc kubenswrapper[4689]: I1210 12:27:58.100576 4689 memory_manager.go:354] "RemoveStaleState removing state" podUID="b434fe8e-c4c2-4979-a9b6-8561523c2d9d" containerName="kube-rbac-proxy-ovn-metrics" Dec 10 12:27:58 crc kubenswrapper[4689]: I1210 12:27:58.100599 4689 memory_manager.go:354] "RemoveStaleState removing state" podUID="b434fe8e-c4c2-4979-a9b6-8561523c2d9d" containerName="ovn-controller" Dec 10 12:27:58 crc kubenswrapper[4689]: I1210 12:27:58.100619 4689 memory_manager.go:354] "RemoveStaleState removing state" podUID="b434fe8e-c4c2-4979-a9b6-8561523c2d9d" containerName="nbdb" Dec 10 12:27:58 crc kubenswrapper[4689]: I1210 12:27:58.100632 4689 memory_manager.go:354] "RemoveStaleState removing state" podUID="b434fe8e-c4c2-4979-a9b6-8561523c2d9d" containerName="ovnkube-controller" Dec 10 12:27:58 crc kubenswrapper[4689]: I1210 12:27:58.100648 4689 memory_manager.go:354] "RemoveStaleState removing state" podUID="b434fe8e-c4c2-4979-a9b6-8561523c2d9d" containerName="ovnkube-controller" Dec 10 12:27:58 crc kubenswrapper[4689]: I1210 12:27:58.100659 4689 memory_manager.go:354] "RemoveStaleState removing state" podUID="b434fe8e-c4c2-4979-a9b6-8561523c2d9d" containerName="northd" Dec 10 12:27:58 crc kubenswrapper[4689]: I1210 12:27:58.100681 4689 memory_manager.go:354] "RemoveStaleState removing state" podUID="b434fe8e-c4c2-4979-a9b6-8561523c2d9d" containerName="sbdb" Dec 10 12:27:58 crc kubenswrapper[4689]: I1210 12:27:58.100698 4689 memory_manager.go:354] "RemoveStaleState removing state" podUID="b434fe8e-c4c2-4979-a9b6-8561523c2d9d" 
containerName="ovn-acl-logging" Dec 10 12:27:58 crc kubenswrapper[4689]: I1210 12:27:58.100714 4689 memory_manager.go:354] "RemoveStaleState removing state" podUID="b434fe8e-c4c2-4979-a9b6-8561523c2d9d" containerName="ovnkube-controller" Dec 10 12:27:58 crc kubenswrapper[4689]: I1210 12:27:58.100729 4689 memory_manager.go:354] "RemoveStaleState removing state" podUID="b434fe8e-c4c2-4979-a9b6-8561523c2d9d" containerName="kube-rbac-proxy-node" Dec 10 12:27:58 crc kubenswrapper[4689]: E1210 12:27:58.100933 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b434fe8e-c4c2-4979-a9b6-8561523c2d9d" containerName="ovnkube-controller" Dec 10 12:27:58 crc kubenswrapper[4689]: I1210 12:27:58.100950 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="b434fe8e-c4c2-4979-a9b6-8561523c2d9d" containerName="ovnkube-controller" Dec 10 12:27:58 crc kubenswrapper[4689]: I1210 12:27:58.101172 4689 memory_manager.go:354] "RemoveStaleState removing state" podUID="b434fe8e-c4c2-4979-a9b6-8561523c2d9d" containerName="ovnkube-controller" Dec 10 12:27:58 crc kubenswrapper[4689]: E1210 12:27:58.101378 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b434fe8e-c4c2-4979-a9b6-8561523c2d9d" containerName="ovnkube-controller" Dec 10 12:27:58 crc kubenswrapper[4689]: I1210 12:27:58.101398 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="b434fe8e-c4c2-4979-a9b6-8561523c2d9d" containerName="ovnkube-controller" Dec 10 12:27:58 crc kubenswrapper[4689]: I1210 12:27:58.101582 4689 memory_manager.go:354] "RemoveStaleState removing state" podUID="b434fe8e-c4c2-4979-a9b6-8561523c2d9d" containerName="ovnkube-controller" Dec 10 12:27:58 crc kubenswrapper[4689]: I1210 12:27:58.104379 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-5cmb5" Dec 10 12:27:58 crc kubenswrapper[4689]: I1210 12:27:58.177477 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/b434fe8e-c4c2-4979-a9b6-8561523c2d9d-ovnkube-config\") pod \"b434fe8e-c4c2-4979-a9b6-8561523c2d9d\" (UID: \"b434fe8e-c4c2-4979-a9b6-8561523c2d9d\") " Dec 10 12:27:58 crc kubenswrapper[4689]: I1210 12:27:58.177527 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/b434fe8e-c4c2-4979-a9b6-8561523c2d9d-log-socket\") pod \"b434fe8e-c4c2-4979-a9b6-8561523c2d9d\" (UID: \"b434fe8e-c4c2-4979-a9b6-8561523c2d9d\") " Dec 10 12:27:58 crc kubenswrapper[4689]: I1210 12:27:58.177565 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b434fe8e-c4c2-4979-a9b6-8561523c2d9d-host-run-ovn-kubernetes\") pod \"b434fe8e-c4c2-4979-a9b6-8561523c2d9d\" (UID: \"b434fe8e-c4c2-4979-a9b6-8561523c2d9d\") " Dec 10 12:27:58 crc kubenswrapper[4689]: I1210 12:27:58.177601 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/b434fe8e-c4c2-4979-a9b6-8561523c2d9d-node-log\") pod \"b434fe8e-c4c2-4979-a9b6-8561523c2d9d\" (UID: \"b434fe8e-c4c2-4979-a9b6-8561523c2d9d\") " Dec 10 12:27:58 crc kubenswrapper[4689]: I1210 12:27:58.177624 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/b434fe8e-c4c2-4979-a9b6-8561523c2d9d-ovn-node-metrics-cert\") pod 
\"b434fe8e-c4c2-4979-a9b6-8561523c2d9d\" (UID: \"b434fe8e-c4c2-4979-a9b6-8561523c2d9d\") " Dec 10 12:27:58 crc kubenswrapper[4689]: I1210 12:27:58.177642 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/b434fe8e-c4c2-4979-a9b6-8561523c2d9d-run-systemd\") pod \"b434fe8e-c4c2-4979-a9b6-8561523c2d9d\" (UID: \"b434fe8e-c4c2-4979-a9b6-8561523c2d9d\") " Dec 10 12:27:58 crc kubenswrapper[4689]: I1210 12:27:58.177671 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/b434fe8e-c4c2-4979-a9b6-8561523c2d9d-host-cni-bin\") pod \"b434fe8e-c4c2-4979-a9b6-8561523c2d9d\" (UID: \"b434fe8e-c4c2-4979-a9b6-8561523c2d9d\") " Dec 10 12:27:58 crc kubenswrapper[4689]: I1210 12:27:58.177695 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b434fe8e-c4c2-4979-a9b6-8561523c2d9d-run-openvswitch\") pod \"b434fe8e-c4c2-4979-a9b6-8561523c2d9d\" (UID: \"b434fe8e-c4c2-4979-a9b6-8561523c2d9d\") " Dec 10 12:27:58 crc kubenswrapper[4689]: I1210 12:27:58.177715 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/b434fe8e-c4c2-4979-a9b6-8561523c2d9d-env-overrides\") pod \"b434fe8e-c4c2-4979-a9b6-8561523c2d9d\" (UID: \"b434fe8e-c4c2-4979-a9b6-8561523c2d9d\") " Dec 10 12:27:58 crc kubenswrapper[4689]: I1210 12:27:58.177740 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/b434fe8e-c4c2-4979-a9b6-8561523c2d9d-host-kubelet\") pod \"b434fe8e-c4c2-4979-a9b6-8561523c2d9d\" (UID: \"b434fe8e-c4c2-4979-a9b6-8561523c2d9d\") " Dec 10 12:27:58 crc kubenswrapper[4689]: I1210 12:27:58.177762 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/b434fe8e-c4c2-4979-a9b6-8561523c2d9d-host-run-netns\") pod \"b434fe8e-c4c2-4979-a9b6-8561523c2d9d\" (UID: \"b434fe8e-c4c2-4979-a9b6-8561523c2d9d\") " Dec 10 12:27:58 crc kubenswrapper[4689]: I1210 12:27:58.177757 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b434fe8e-c4c2-4979-a9b6-8561523c2d9d-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "b434fe8e-c4c2-4979-a9b6-8561523c2d9d" (UID: "b434fe8e-c4c2-4979-a9b6-8561523c2d9d"). InnerVolumeSpecName "host-run-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 10 12:27:58 crc kubenswrapper[4689]: I1210 12:27:58.177792 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b434fe8e-c4c2-4979-a9b6-8561523c2d9d-host-var-lib-cni-networks-ovn-kubernetes\") pod \"b434fe8e-c4c2-4979-a9b6-8561523c2d9d\" (UID: \"b434fe8e-c4c2-4979-a9b6-8561523c2d9d\") " Dec 10 12:27:58 crc kubenswrapper[4689]: I1210 12:27:58.177842 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b434fe8e-c4c2-4979-a9b6-8561523c2d9d-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "b434fe8e-c4c2-4979-a9b6-8561523c2d9d" (UID: "b434fe8e-c4c2-4979-a9b6-8561523c2d9d"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 10 12:27:58 crc kubenswrapper[4689]: I1210 12:27:58.177886 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b434fe8e-c4c2-4979-a9b6-8561523c2d9d-node-log" (OuterVolumeSpecName: "node-log") pod "b434fe8e-c4c2-4979-a9b6-8561523c2d9d" (UID: "b434fe8e-c4c2-4979-a9b6-8561523c2d9d"). InnerVolumeSpecName "node-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 10 12:27:58 crc kubenswrapper[4689]: I1210 12:27:58.177892 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/b434fe8e-c4c2-4979-a9b6-8561523c2d9d-systemd-units\") pod \"b434fe8e-c4c2-4979-a9b6-8561523c2d9d\" (UID: \"b434fe8e-c4c2-4979-a9b6-8561523c2d9d\") " Dec 10 12:27:58 crc kubenswrapper[4689]: I1210 12:27:58.177951 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hgcnl\" (UniqueName: \"kubernetes.io/projected/b434fe8e-c4c2-4979-a9b6-8561523c2d9d-kube-api-access-hgcnl\") pod \"b434fe8e-c4c2-4979-a9b6-8561523c2d9d\" (UID: \"b434fe8e-c4c2-4979-a9b6-8561523c2d9d\") " Dec 10 12:27:58 crc kubenswrapper[4689]: I1210 12:27:58.178070 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/b434fe8e-c4c2-4979-a9b6-8561523c2d9d-run-ovn\") pod \"b434fe8e-c4c2-4979-a9b6-8561523c2d9d\" (UID: \"b434fe8e-c4c2-4979-a9b6-8561523c2d9d\") " Dec 10 12:27:58 crc kubenswrapper[4689]: I1210 12:27:58.178111 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b434fe8e-c4c2-4979-a9b6-8561523c2d9d-etc-openvswitch\") pod \"b434fe8e-c4c2-4979-a9b6-8561523c2d9d\" (UID: \"b434fe8e-c4c2-4979-a9b6-8561523c2d9d\") " Dec 10 12:27:58 crc kubenswrapper[4689]: I1210 12:27:58.178155 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b434fe8e-c4c2-4979-a9b6-8561523c2d9d-var-lib-openvswitch\") pod \"b434fe8e-c4c2-4979-a9b6-8561523c2d9d\" (UID: \"b434fe8e-c4c2-4979-a9b6-8561523c2d9d\") " Dec 10 12:27:58 crc kubenswrapper[4689]: I1210 12:27:58.178187 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b434fe8e-c4c2-4979-a9b6-8561523c2d9d-host-cni-netd\") pod \"b434fe8e-c4c2-4979-a9b6-8561523c2d9d\" (UID: \"b434fe8e-c4c2-4979-a9b6-8561523c2d9d\") " Dec 10 12:27:58 crc kubenswrapper[4689]: I1210 12:27:58.178220 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/b434fe8e-c4c2-4979-a9b6-8561523c2d9d-host-slash\") pod \"b434fe8e-c4c2-4979-a9b6-8561523c2d9d\" (UID: \"b434fe8e-c4c2-4979-a9b6-8561523c2d9d\") " Dec 10 12:27:58 crc kubenswrapper[4689]: I1210 12:27:58.178279 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/b434fe8e-c4c2-4979-a9b6-8561523c2d9d-ovnkube-script-lib\") pod \"b434fe8e-c4c2-4979-a9b6-8561523c2d9d\" (UID: \"b434fe8e-c4c2-4979-a9b6-8561523c2d9d\") " Dec 10 12:27:58 crc kubenswrapper[4689]: I1210 12:27:58.178272 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b434fe8e-c4c2-4979-a9b6-8561523c2d9d-etc-openvswitch" 
(OuterVolumeSpecName: "etc-openvswitch") pod "b434fe8e-c4c2-4979-a9b6-8561523c2d9d" (UID: "b434fe8e-c4c2-4979-a9b6-8561523c2d9d"). InnerVolumeSpecName "etc-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 10 12:27:58 crc kubenswrapper[4689]: I1210 12:27:58.178337 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b434fe8e-c4c2-4979-a9b6-8561523c2d9d-log-socket" (OuterVolumeSpecName: "log-socket") pod "b434fe8e-c4c2-4979-a9b6-8561523c2d9d" (UID: "b434fe8e-c4c2-4979-a9b6-8561523c2d9d"). InnerVolumeSpecName "log-socket". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 10 12:27:58 crc kubenswrapper[4689]: I1210 12:27:58.178355 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b434fe8e-c4c2-4979-a9b6-8561523c2d9d-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "b434fe8e-c4c2-4979-a9b6-8561523c2d9d" (UID: "b434fe8e-c4c2-4979-a9b6-8561523c2d9d"). InnerVolumeSpecName "run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 10 12:27:58 crc kubenswrapper[4689]: I1210 12:27:58.178392 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b434fe8e-c4c2-4979-a9b6-8561523c2d9d-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "b434fe8e-c4c2-4979-a9b6-8561523c2d9d" (UID: "b434fe8e-c4c2-4979-a9b6-8561523c2d9d"). InnerVolumeSpecName "host-run-netns". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 10 12:27:58 crc kubenswrapper[4689]: I1210 12:27:58.178413 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b434fe8e-c4c2-4979-a9b6-8561523c2d9d-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "b434fe8e-c4c2-4979-a9b6-8561523c2d9d" (UID: "b434fe8e-c4c2-4979-a9b6-8561523c2d9d"). InnerVolumeSpecName "host-kubelet". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 10 12:27:58 crc kubenswrapper[4689]: I1210 12:27:58.178421 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b434fe8e-c4c2-4979-a9b6-8561523c2d9d-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "b434fe8e-c4c2-4979-a9b6-8561523c2d9d" (UID: "b434fe8e-c4c2-4979-a9b6-8561523c2d9d"). InnerVolumeSpecName "run-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 10 12:27:58 crc kubenswrapper[4689]: I1210 12:27:58.178472 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b434fe8e-c4c2-4979-a9b6-8561523c2d9d-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "b434fe8e-c4c2-4979-a9b6-8561523c2d9d" (UID: "b434fe8e-c4c2-4979-a9b6-8561523c2d9d"). InnerVolumeSpecName "host-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 10 12:27:58 crc kubenswrapper[4689]: I1210 12:27:58.178513 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b434fe8e-c4c2-4979-a9b6-8561523c2d9d-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "b434fe8e-c4c2-4979-a9b6-8561523c2d9d" (UID: "b434fe8e-c4c2-4979-a9b6-8561523c2d9d"). InnerVolumeSpecName "var-lib-openvswitch". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 10 12:27:58 crc kubenswrapper[4689]: I1210 12:27:58.178515 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b434fe8e-c4c2-4979-a9b6-8561523c2d9d-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "b434fe8e-c4c2-4979-a9b6-8561523c2d9d" (UID: "b434fe8e-c4c2-4979-a9b6-8561523c2d9d"). InnerVolumeSpecName "host-cni-bin". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 10 12:27:58 crc kubenswrapper[4689]: I1210 12:27:58.178566 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b434fe8e-c4c2-4979-a9b6-8561523c2d9d-host-slash" (OuterVolumeSpecName: "host-slash") pod "b434fe8e-c4c2-4979-a9b6-8561523c2d9d" (UID: "b434fe8e-c4c2-4979-a9b6-8561523c2d9d"). InnerVolumeSpecName "host-slash". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 10 12:27:58 crc kubenswrapper[4689]: I1210 12:27:58.178589 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b434fe8e-c4c2-4979-a9b6-8561523c2d9d-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "b434fe8e-c4c2-4979-a9b6-8561523c2d9d" (UID: "b434fe8e-c4c2-4979-a9b6-8561523c2d9d"). InnerVolumeSpecName "systemd-units". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 10 12:27:58 crc kubenswrapper[4689]: I1210 12:27:58.178933 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b434fe8e-c4c2-4979-a9b6-8561523c2d9d-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "b434fe8e-c4c2-4979-a9b6-8561523c2d9d" (UID: "b434fe8e-c4c2-4979-a9b6-8561523c2d9d"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 12:27:58 crc kubenswrapper[4689]: I1210 12:27:58.179182 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b434fe8e-c4c2-4979-a9b6-8561523c2d9d-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "b434fe8e-c4c2-4979-a9b6-8561523c2d9d" (UID: "b434fe8e-c4c2-4979-a9b6-8561523c2d9d"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 12:27:58 crc kubenswrapper[4689]: I1210 12:27:58.179392 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b434fe8e-c4c2-4979-a9b6-8561523c2d9d-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "b434fe8e-c4c2-4979-a9b6-8561523c2d9d" (UID: "b434fe8e-c4c2-4979-a9b6-8561523c2d9d"). InnerVolumeSpecName "ovnkube-script-lib". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 12:27:58 crc kubenswrapper[4689]: I1210 12:27:58.180726 4689 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/b434fe8e-c4c2-4979-a9b6-8561523c2d9d-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Dec 10 12:27:58 crc kubenswrapper[4689]: I1210 12:27:58.180765 4689 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/b434fe8e-c4c2-4979-a9b6-8561523c2d9d-ovnkube-config\") on node \"crc\" DevicePath \"\"" Dec 10 12:27:58 crc kubenswrapper[4689]: I1210 12:27:58.180787 4689 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/b434fe8e-c4c2-4979-a9b6-8561523c2d9d-log-socket\") on node \"crc\" DevicePath \"\"" Dec 10 12:27:58 crc kubenswrapper[4689]: I1210 12:27:58.180805 4689 reconciler_common.go:293] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b434fe8e-c4c2-4979-a9b6-8561523c2d9d-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Dec 10 12:27:58 crc kubenswrapper[4689]: I1210 12:27:58.180824 4689 reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/b434fe8e-c4c2-4979-a9b6-8561523c2d9d-node-log\") on node \"crc\" DevicePath \"\"" Dec 10 12:27:58 crc kubenswrapper[4689]: I1210 12:27:58.180842 4689 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/b434fe8e-c4c2-4979-a9b6-8561523c2d9d-host-cni-bin\") on node \"crc\" DevicePath \"\"" Dec 10 12:27:58 crc kubenswrapper[4689]: I1210 12:27:58.180858 4689 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b434fe8e-c4c2-4979-a9b6-8561523c2d9d-run-openvswitch\") on node \"crc\" DevicePath \"\"" Dec 10 12:27:58 crc kubenswrapper[4689]: I1210 12:27:58.180876 4689 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/b434fe8e-c4c2-4979-a9b6-8561523c2d9d-env-overrides\") on node \"crc\" DevicePath \"\"" Dec 10 12:27:58 crc kubenswrapper[4689]: I1210 12:27:58.180894 4689 reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/b434fe8e-c4c2-4979-a9b6-8561523c2d9d-host-kubelet\") on node \"crc\" DevicePath \"\"" Dec 10 12:27:58 crc kubenswrapper[4689]: I1210 12:27:58.180912 4689 reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/b434fe8e-c4c2-4979-a9b6-8561523c2d9d-host-run-netns\") on node \"crc\" DevicePath \"\"" Dec 10 12:27:58 crc kubenswrapper[4689]: I1210 12:27:58.180932 4689 reconciler_common.go:293] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b434fe8e-c4c2-4979-a9b6-8561523c2d9d-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Dec 10 12:27:58 crc kubenswrapper[4689]: I1210 12:27:58.180952 4689 reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/b434fe8e-c4c2-4979-a9b6-8561523c2d9d-systemd-units\") on node \"crc\" DevicePath \"\"" Dec 10 12:27:58 crc kubenswrapper[4689]: I1210 12:27:58.181215 4689 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/b434fe8e-c4c2-4979-a9b6-8561523c2d9d-run-ovn\") on node 
\"crc\" DevicePath \"\"" Dec 10 12:27:58 crc kubenswrapper[4689]: I1210 12:27:58.181247 4689 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b434fe8e-c4c2-4979-a9b6-8561523c2d9d-etc-openvswitch\") on node \"crc\" DevicePath \"\"" Dec 10 12:27:58 crc kubenswrapper[4689]: I1210 12:27:58.181265 4689 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b434fe8e-c4c2-4979-a9b6-8561523c2d9d-var-lib-openvswitch\") on node \"crc\" DevicePath \"\"" Dec 10 12:27:58 crc kubenswrapper[4689]: I1210 12:27:58.181284 4689 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b434fe8e-c4c2-4979-a9b6-8561523c2d9d-host-cni-netd\") on node \"crc\" DevicePath \"\"" Dec 10 12:27:58 crc kubenswrapper[4689]: I1210 12:27:58.181301 4689 reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/b434fe8e-c4c2-4979-a9b6-8561523c2d9d-host-slash\") on node \"crc\" DevicePath \"\"" Dec 10 12:27:58 crc kubenswrapper[4689]: I1210 12:27:58.184212 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b434fe8e-c4c2-4979-a9b6-8561523c2d9d-kube-api-access-hgcnl" (OuterVolumeSpecName: "kube-api-access-hgcnl") pod "b434fe8e-c4c2-4979-a9b6-8561523c2d9d" (UID: "b434fe8e-c4c2-4979-a9b6-8561523c2d9d"). InnerVolumeSpecName "kube-api-access-hgcnl". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 12:27:58 crc kubenswrapper[4689]: I1210 12:27:58.184231 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b434fe8e-c4c2-4979-a9b6-8561523c2d9d-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "b434fe8e-c4c2-4979-a9b6-8561523c2d9d" (UID: "b434fe8e-c4c2-4979-a9b6-8561523c2d9d"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 12:27:58 crc kubenswrapper[4689]: I1210 12:27:58.194561 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b434fe8e-c4c2-4979-a9b6-8561523c2d9d-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "b434fe8e-c4c2-4979-a9b6-8561523c2d9d" (UID: "b434fe8e-c4c2-4979-a9b6-8561523c2d9d"). InnerVolumeSpecName "run-systemd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 10 12:27:58 crc kubenswrapper[4689]: I1210 12:27:58.282566 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/4416d03c-c689-4c75-8019-a05b4e70c782-host-kubelet\") pod \"ovnkube-node-5cmb5\" (UID: \"4416d03c-c689-4c75-8019-a05b4e70c782\") " pod="openshift-ovn-kubernetes/ovnkube-node-5cmb5" Dec 10 12:27:58 crc kubenswrapper[4689]: I1210 12:27:58.282640 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/4416d03c-c689-4c75-8019-a05b4e70c782-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-5cmb5\" (UID: \"4416d03c-c689-4c75-8019-a05b4e70c782\") " pod="openshift-ovn-kubernetes/ovnkube-node-5cmb5" Dec 10 12:27:58 crc kubenswrapper[4689]: I1210 12:27:58.282816 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/4416d03c-c689-4c75-8019-a05b4e70c782-run-systemd\") pod \"ovnkube-node-5cmb5\" (UID: \"4416d03c-c689-4c75-8019-a05b4e70c782\") " pod="openshift-ovn-kubernetes/ovnkube-node-5cmb5" Dec 10 12:27:58 crc kubenswrapper[4689]: I1210 12:27:58.282878 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/4416d03c-c689-4c75-8019-a05b4e70c782-ovnkube-script-lib\") pod \"ovnkube-node-5cmb5\" (UID: \"4416d03c-c689-4c75-8019-a05b4e70c782\") " pod="openshift-ovn-kubernetes/ovnkube-node-5cmb5" Dec 10 12:27:58 crc kubenswrapper[4689]: I1210 12:27:58.282924 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/4416d03c-c689-4c75-8019-a05b4e70c782-host-slash\") pod \"ovnkube-node-5cmb5\" (UID: \"4416d03c-c689-4c75-8019-a05b4e70c782\") " pod="openshift-ovn-kubernetes/ovnkube-node-5cmb5" Dec 10 12:27:58 crc kubenswrapper[4689]: I1210 12:27:58.283088 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/4416d03c-c689-4c75-8019-a05b4e70c782-ovn-node-metrics-cert\") pod \"ovnkube-node-5cmb5\" (UID: \"4416d03c-c689-4c75-8019-a05b4e70c782\") " pod="openshift-ovn-kubernetes/ovnkube-node-5cmb5" Dec 10 12:27:58 crc kubenswrapper[4689]: I1210 12:27:58.283165 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/4416d03c-c689-4c75-8019-a05b4e70c782-host-run-netns\") pod \"ovnkube-node-5cmb5\" (UID: \"4416d03c-c689-4c75-8019-a05b4e70c782\") " pod="openshift-ovn-kubernetes/ovnkube-node-5cmb5" Dec 10 12:27:58 crc kubenswrapper[4689]: I1210 12:27:58.283211 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/4416d03c-c689-4c75-8019-a05b4e70c782-host-cni-netd\") pod \"ovnkube-node-5cmb5\" (UID: \"4416d03c-c689-4c75-8019-a05b4e70c782\") " pod="openshift-ovn-kubernetes/ovnkube-node-5cmb5" Dec 10 12:27:58 crc kubenswrapper[4689]: I1210 12:27:58.283316 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: 
\"kubernetes.io/host-path/4416d03c-c689-4c75-8019-a05b4e70c782-host-cni-bin\") pod \"ovnkube-node-5cmb5\" (UID: \"4416d03c-c689-4c75-8019-a05b4e70c782\") " pod="openshift-ovn-kubernetes/ovnkube-node-5cmb5" Dec 10 12:27:58 crc kubenswrapper[4689]: I1210 12:27:58.283375 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/4416d03c-c689-4c75-8019-a05b4e70c782-run-ovn\") pod \"ovnkube-node-5cmb5\" (UID: \"4416d03c-c689-4c75-8019-a05b4e70c782\") " pod="openshift-ovn-kubernetes/ovnkube-node-5cmb5" Dec 10 12:27:58 crc kubenswrapper[4689]: I1210 12:27:58.283416 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/4416d03c-c689-4c75-8019-a05b4e70c782-var-lib-openvswitch\") pod \"ovnkube-node-5cmb5\" (UID: \"4416d03c-c689-4c75-8019-a05b4e70c782\") " pod="openshift-ovn-kubernetes/ovnkube-node-5cmb5" Dec 10 12:27:58 crc kubenswrapper[4689]: I1210 12:27:58.283440 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/4416d03c-c689-4c75-8019-a05b4e70c782-node-log\") pod \"ovnkube-node-5cmb5\" (UID: \"4416d03c-c689-4c75-8019-a05b4e70c782\") " pod="openshift-ovn-kubernetes/ovnkube-node-5cmb5" Dec 10 12:27:58 crc kubenswrapper[4689]: I1210 12:27:58.283514 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p7tcs\" (UniqueName: \"kubernetes.io/projected/4416d03c-c689-4c75-8019-a05b4e70c782-kube-api-access-p7tcs\") pod \"ovnkube-node-5cmb5\" (UID: \"4416d03c-c689-4c75-8019-a05b4e70c782\") " pod="openshift-ovn-kubernetes/ovnkube-node-5cmb5" Dec 10 12:27:58 crc kubenswrapper[4689]: I1210 12:27:58.283546 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/4416d03c-c689-4c75-8019-a05b4e70c782-etc-openvswitch\") pod \"ovnkube-node-5cmb5\" (UID: \"4416d03c-c689-4c75-8019-a05b4e70c782\") " pod="openshift-ovn-kubernetes/ovnkube-node-5cmb5" Dec 10 12:27:58 crc kubenswrapper[4689]: I1210 12:27:58.283636 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/4416d03c-c689-4c75-8019-a05b4e70c782-host-run-ovn-kubernetes\") pod \"ovnkube-node-5cmb5\" (UID: \"4416d03c-c689-4c75-8019-a05b4e70c782\") " pod="openshift-ovn-kubernetes/ovnkube-node-5cmb5" Dec 10 12:27:58 crc kubenswrapper[4689]: I1210 12:27:58.283666 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/4416d03c-c689-4c75-8019-a05b4e70c782-ovnkube-config\") pod \"ovnkube-node-5cmb5\" (UID: \"4416d03c-c689-4c75-8019-a05b4e70c782\") " pod="openshift-ovn-kubernetes/ovnkube-node-5cmb5" Dec 10 12:27:58 crc kubenswrapper[4689]: I1210 12:27:58.283716 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/4416d03c-c689-4c75-8019-a05b4e70c782-log-socket\") pod \"ovnkube-node-5cmb5\" (UID: \"4416d03c-c689-4c75-8019-a05b4e70c782\") " pod="openshift-ovn-kubernetes/ovnkube-node-5cmb5" Dec 10 12:27:58 crc kubenswrapper[4689]: I1210 12:27:58.283763 4689 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/4416d03c-c689-4c75-8019-a05b4e70c782-systemd-units\") pod \"ovnkube-node-5cmb5\" (UID: \"4416d03c-c689-4c75-8019-a05b4e70c782\") " pod="openshift-ovn-kubernetes/ovnkube-node-5cmb5" Dec 10 12:27:58 crc kubenswrapper[4689]: I1210 12:27:58.283803 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/4416d03c-c689-4c75-8019-a05b4e70c782-env-overrides\") pod \"ovnkube-node-5cmb5\" (UID: \"4416d03c-c689-4c75-8019-a05b4e70c782\") " pod="openshift-ovn-kubernetes/ovnkube-node-5cmb5" Dec 10 12:27:58 crc kubenswrapper[4689]: I1210 12:27:58.283849 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/4416d03c-c689-4c75-8019-a05b4e70c782-run-openvswitch\") pod \"ovnkube-node-5cmb5\" (UID: \"4416d03c-c689-4c75-8019-a05b4e70c782\") " pod="openshift-ovn-kubernetes/ovnkube-node-5cmb5" Dec 10 12:27:58 crc kubenswrapper[4689]: I1210 12:27:58.283996 4689 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/b434fe8e-c4c2-4979-a9b6-8561523c2d9d-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Dec 10 12:27:58 crc kubenswrapper[4689]: I1210 12:27:58.284017 4689 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/b434fe8e-c4c2-4979-a9b6-8561523c2d9d-run-systemd\") on node \"crc\" DevicePath \"\"" Dec 10 12:27:58 crc kubenswrapper[4689]: I1210 12:27:58.284033 4689 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hgcnl\" (UniqueName: \"kubernetes.io/projected/b434fe8e-c4c2-4979-a9b6-8561523c2d9d-kube-api-access-hgcnl\") on node \"crc\" DevicePath \"\"" Dec 10 12:27:58 crc kubenswrapper[4689]: I1210 12:27:58.385122 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/4416d03c-c689-4c75-8019-a05b4e70c782-run-openvswitch\") pod \"ovnkube-node-5cmb5\" (UID: \"4416d03c-c689-4c75-8019-a05b4e70c782\") " pod="openshift-ovn-kubernetes/ovnkube-node-5cmb5" Dec 10 12:27:58 crc kubenswrapper[4689]: I1210 12:27:58.385192 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/4416d03c-c689-4c75-8019-a05b4e70c782-host-kubelet\") pod \"ovnkube-node-5cmb5\" (UID: \"4416d03c-c689-4c75-8019-a05b4e70c782\") " pod="openshift-ovn-kubernetes/ovnkube-node-5cmb5" Dec 10 12:27:58 crc kubenswrapper[4689]: I1210 12:27:58.385231 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/4416d03c-c689-4c75-8019-a05b4e70c782-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-5cmb5\" (UID: \"4416d03c-c689-4c75-8019-a05b4e70c782\") " pod="openshift-ovn-kubernetes/ovnkube-node-5cmb5" Dec 10 12:27:58 crc kubenswrapper[4689]: I1210 12:27:58.385329 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/4416d03c-c689-4c75-8019-a05b4e70c782-run-systemd\") pod \"ovnkube-node-5cmb5\" (UID: \"4416d03c-c689-4c75-8019-a05b4e70c782\") " pod="openshift-ovn-kubernetes/ovnkube-node-5cmb5" Dec 10 12:27:58 crc 
kubenswrapper[4689]: I1210 12:27:58.385382 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/4416d03c-c689-4c75-8019-a05b4e70c782-ovnkube-script-lib\") pod \"ovnkube-node-5cmb5\" (UID: \"4416d03c-c689-4c75-8019-a05b4e70c782\") " pod="openshift-ovn-kubernetes/ovnkube-node-5cmb5" Dec 10 12:27:58 crc kubenswrapper[4689]: I1210 12:27:58.385389 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/4416d03c-c689-4c75-8019-a05b4e70c782-host-kubelet\") pod \"ovnkube-node-5cmb5\" (UID: \"4416d03c-c689-4c75-8019-a05b4e70c782\") " pod="openshift-ovn-kubernetes/ovnkube-node-5cmb5" Dec 10 12:27:58 crc kubenswrapper[4689]: I1210 12:27:58.385424 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/4416d03c-c689-4c75-8019-a05b4e70c782-host-slash\") pod \"ovnkube-node-5cmb5\" (UID: \"4416d03c-c689-4c75-8019-a05b4e70c782\") " pod="openshift-ovn-kubernetes/ovnkube-node-5cmb5" Dec 10 12:27:58 crc kubenswrapper[4689]: I1210 12:27:58.385394 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/4416d03c-c689-4c75-8019-a05b4e70c782-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-5cmb5\" (UID: \"4416d03c-c689-4c75-8019-a05b4e70c782\") " pod="openshift-ovn-kubernetes/ovnkube-node-5cmb5" Dec 10 12:27:58 crc kubenswrapper[4689]: I1210 12:27:58.385468 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/4416d03c-c689-4c75-8019-a05b4e70c782-ovn-node-metrics-cert\") pod \"ovnkube-node-5cmb5\" (UID: \"4416d03c-c689-4c75-8019-a05b4e70c782\") " pod="openshift-ovn-kubernetes/ovnkube-node-5cmb5" Dec 10 12:27:58 crc kubenswrapper[4689]: I1210 12:27:58.385490 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/4416d03c-c689-4c75-8019-a05b4e70c782-host-slash\") pod \"ovnkube-node-5cmb5\" (UID: \"4416d03c-c689-4c75-8019-a05b4e70c782\") " pod="openshift-ovn-kubernetes/ovnkube-node-5cmb5" Dec 10 12:27:58 crc kubenswrapper[4689]: I1210 12:27:58.385279 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/4416d03c-c689-4c75-8019-a05b4e70c782-run-openvswitch\") pod \"ovnkube-node-5cmb5\" (UID: \"4416d03c-c689-4c75-8019-a05b4e70c782\") " pod="openshift-ovn-kubernetes/ovnkube-node-5cmb5" Dec 10 12:27:58 crc kubenswrapper[4689]: I1210 12:27:58.385497 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/4416d03c-c689-4c75-8019-a05b4e70c782-run-systemd\") pod \"ovnkube-node-5cmb5\" (UID: \"4416d03c-c689-4c75-8019-a05b4e70c782\") " pod="openshift-ovn-kubernetes/ovnkube-node-5cmb5" Dec 10 12:27:58 crc kubenswrapper[4689]: I1210 12:27:58.385555 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/4416d03c-c689-4c75-8019-a05b4e70c782-host-run-netns\") pod \"ovnkube-node-5cmb5\" (UID: \"4416d03c-c689-4c75-8019-a05b4e70c782\") " pod="openshift-ovn-kubernetes/ovnkube-node-5cmb5" Dec 10 12:27:58 crc kubenswrapper[4689]: I1210 12:27:58.385508 4689 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/4416d03c-c689-4c75-8019-a05b4e70c782-host-run-netns\") pod \"ovnkube-node-5cmb5\" (UID: \"4416d03c-c689-4c75-8019-a05b4e70c782\") " pod="openshift-ovn-kubernetes/ovnkube-node-5cmb5" Dec 10 12:27:58 crc kubenswrapper[4689]: I1210 12:27:58.385631 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/4416d03c-c689-4c75-8019-a05b4e70c782-host-cni-netd\") pod \"ovnkube-node-5cmb5\" (UID: \"4416d03c-c689-4c75-8019-a05b4e70c782\") " pod="openshift-ovn-kubernetes/ovnkube-node-5cmb5" Dec 10 12:27:58 crc kubenswrapper[4689]: I1210 12:27:58.385677 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/4416d03c-c689-4c75-8019-a05b4e70c782-host-cni-bin\") pod \"ovnkube-node-5cmb5\" (UID: \"4416d03c-c689-4c75-8019-a05b4e70c782\") " pod="openshift-ovn-kubernetes/ovnkube-node-5cmb5" Dec 10 12:27:58 crc kubenswrapper[4689]: I1210 12:27:58.385726 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/4416d03c-c689-4c75-8019-a05b4e70c782-run-ovn\") pod \"ovnkube-node-5cmb5\" (UID: \"4416d03c-c689-4c75-8019-a05b4e70c782\") " pod="openshift-ovn-kubernetes/ovnkube-node-5cmb5" Dec 10 12:27:58 crc kubenswrapper[4689]: I1210 12:27:58.385755 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/4416d03c-c689-4c75-8019-a05b4e70c782-host-cni-netd\") pod \"ovnkube-node-5cmb5\" (UID: \"4416d03c-c689-4c75-8019-a05b4e70c782\") " pod="openshift-ovn-kubernetes/ovnkube-node-5cmb5" Dec 10 12:27:58 crc kubenswrapper[4689]: I1210 12:27:58.385775 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/4416d03c-c689-4c75-8019-a05b4e70c782-var-lib-openvswitch\") pod \"ovnkube-node-5cmb5\" (UID: \"4416d03c-c689-4c75-8019-a05b4e70c782\") " pod="openshift-ovn-kubernetes/ovnkube-node-5cmb5" Dec 10 12:27:58 crc kubenswrapper[4689]: I1210 12:27:58.385792 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/4416d03c-c689-4c75-8019-a05b4e70c782-host-cni-bin\") pod \"ovnkube-node-5cmb5\" (UID: \"4416d03c-c689-4c75-8019-a05b4e70c782\") " pod="openshift-ovn-kubernetes/ovnkube-node-5cmb5" Dec 10 12:27:58 crc kubenswrapper[4689]: I1210 12:27:58.385808 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/4416d03c-c689-4c75-8019-a05b4e70c782-node-log\") pod \"ovnkube-node-5cmb5\" (UID: \"4416d03c-c689-4c75-8019-a05b4e70c782\") " pod="openshift-ovn-kubernetes/ovnkube-node-5cmb5" Dec 10 12:27:58 crc kubenswrapper[4689]: I1210 12:27:58.385839 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/4416d03c-c689-4c75-8019-a05b4e70c782-run-ovn\") pod \"ovnkube-node-5cmb5\" (UID: \"4416d03c-c689-4c75-8019-a05b4e70c782\") " pod="openshift-ovn-kubernetes/ovnkube-node-5cmb5" Dec 10 12:27:58 crc kubenswrapper[4689]: I1210 12:27:58.385848 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/4416d03c-c689-4c75-8019-a05b4e70c782-var-lib-openvswitch\") 
pod \"ovnkube-node-5cmb5\" (UID: \"4416d03c-c689-4c75-8019-a05b4e70c782\") " pod="openshift-ovn-kubernetes/ovnkube-node-5cmb5" Dec 10 12:27:58 crc kubenswrapper[4689]: I1210 12:27:58.385882 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p7tcs\" (UniqueName: \"kubernetes.io/projected/4416d03c-c689-4c75-8019-a05b4e70c782-kube-api-access-p7tcs\") pod \"ovnkube-node-5cmb5\" (UID: \"4416d03c-c689-4c75-8019-a05b4e70c782\") " pod="openshift-ovn-kubernetes/ovnkube-node-5cmb5" Dec 10 12:27:58 crc kubenswrapper[4689]: I1210 12:27:58.385913 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/4416d03c-c689-4c75-8019-a05b4e70c782-node-log\") pod \"ovnkube-node-5cmb5\" (UID: \"4416d03c-c689-4c75-8019-a05b4e70c782\") " pod="openshift-ovn-kubernetes/ovnkube-node-5cmb5" Dec 10 12:27:58 crc kubenswrapper[4689]: I1210 12:27:58.385916 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/4416d03c-c689-4c75-8019-a05b4e70c782-etc-openvswitch\") pod \"ovnkube-node-5cmb5\" (UID: \"4416d03c-c689-4c75-8019-a05b4e70c782\") " pod="openshift-ovn-kubernetes/ovnkube-node-5cmb5" Dec 10 12:27:58 crc kubenswrapper[4689]: I1210 12:27:58.385955 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/4416d03c-c689-4c75-8019-a05b4e70c782-etc-openvswitch\") pod \"ovnkube-node-5cmb5\" (UID: \"4416d03c-c689-4c75-8019-a05b4e70c782\") " pod="openshift-ovn-kubernetes/ovnkube-node-5cmb5" Dec 10 12:27:58 crc kubenswrapper[4689]: I1210 12:27:58.386126 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/4416d03c-c689-4c75-8019-a05b4e70c782-host-run-ovn-kubernetes\") pod \"ovnkube-node-5cmb5\" (UID: \"4416d03c-c689-4c75-8019-a05b4e70c782\") " pod="openshift-ovn-kubernetes/ovnkube-node-5cmb5" Dec 10 12:27:58 crc kubenswrapper[4689]: I1210 12:27:58.386184 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/4416d03c-c689-4c75-8019-a05b4e70c782-ovnkube-config\") pod \"ovnkube-node-5cmb5\" (UID: \"4416d03c-c689-4c75-8019-a05b4e70c782\") " pod="openshift-ovn-kubernetes/ovnkube-node-5cmb5" Dec 10 12:27:58 crc kubenswrapper[4689]: I1210 12:27:58.386225 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/4416d03c-c689-4c75-8019-a05b4e70c782-log-socket\") pod \"ovnkube-node-5cmb5\" (UID: \"4416d03c-c689-4c75-8019-a05b4e70c782\") " pod="openshift-ovn-kubernetes/ovnkube-node-5cmb5" Dec 10 12:27:58 crc kubenswrapper[4689]: I1210 12:27:58.386295 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/4416d03c-c689-4c75-8019-a05b4e70c782-systemd-units\") pod \"ovnkube-node-5cmb5\" (UID: \"4416d03c-c689-4c75-8019-a05b4e70c782\") " pod="openshift-ovn-kubernetes/ovnkube-node-5cmb5" Dec 10 12:27:58 crc kubenswrapper[4689]: I1210 12:27:58.386366 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/4416d03c-c689-4c75-8019-a05b4e70c782-env-overrides\") pod \"ovnkube-node-5cmb5\" (UID: \"4416d03c-c689-4c75-8019-a05b4e70c782\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-5cmb5" Dec 10 12:27:58 crc kubenswrapper[4689]: I1210 12:27:58.386607 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/4416d03c-c689-4c75-8019-a05b4e70c782-host-run-ovn-kubernetes\") pod \"ovnkube-node-5cmb5\" (UID: \"4416d03c-c689-4c75-8019-a05b4e70c782\") " pod="openshift-ovn-kubernetes/ovnkube-node-5cmb5" Dec 10 12:27:58 crc kubenswrapper[4689]: I1210 12:27:58.386628 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/4416d03c-c689-4c75-8019-a05b4e70c782-log-socket\") pod \"ovnkube-node-5cmb5\" (UID: \"4416d03c-c689-4c75-8019-a05b4e70c782\") " pod="openshift-ovn-kubernetes/ovnkube-node-5cmb5" Dec 10 12:27:58 crc kubenswrapper[4689]: I1210 12:27:58.386680 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/4416d03c-c689-4c75-8019-a05b4e70c782-systemd-units\") pod \"ovnkube-node-5cmb5\" (UID: \"4416d03c-c689-4c75-8019-a05b4e70c782\") " pod="openshift-ovn-kubernetes/ovnkube-node-5cmb5" Dec 10 12:27:58 crc kubenswrapper[4689]: I1210 12:27:58.387469 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/4416d03c-c689-4c75-8019-a05b4e70c782-env-overrides\") pod \"ovnkube-node-5cmb5\" (UID: \"4416d03c-c689-4c75-8019-a05b4e70c782\") " pod="openshift-ovn-kubernetes/ovnkube-node-5cmb5" Dec 10 12:27:58 crc kubenswrapper[4689]: I1210 12:27:58.387831 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/4416d03c-c689-4c75-8019-a05b4e70c782-ovnkube-config\") pod \"ovnkube-node-5cmb5\" (UID: \"4416d03c-c689-4c75-8019-a05b4e70c782\") " pod="openshift-ovn-kubernetes/ovnkube-node-5cmb5" Dec 10 12:27:58 crc kubenswrapper[4689]: I1210 12:27:58.388014 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/4416d03c-c689-4c75-8019-a05b4e70c782-ovnkube-script-lib\") pod \"ovnkube-node-5cmb5\" (UID: \"4416d03c-c689-4c75-8019-a05b4e70c782\") " pod="openshift-ovn-kubernetes/ovnkube-node-5cmb5" Dec 10 12:27:58 crc kubenswrapper[4689]: I1210 12:27:58.391489 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/4416d03c-c689-4c75-8019-a05b4e70c782-ovn-node-metrics-cert\") pod \"ovnkube-node-5cmb5\" (UID: \"4416d03c-c689-4c75-8019-a05b4e70c782\") " pod="openshift-ovn-kubernetes/ovnkube-node-5cmb5" Dec 10 12:27:58 crc kubenswrapper[4689]: I1210 12:27:58.422366 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p7tcs\" (UniqueName: \"kubernetes.io/projected/4416d03c-c689-4c75-8019-a05b4e70c782-kube-api-access-p7tcs\") pod \"ovnkube-node-5cmb5\" (UID: \"4416d03c-c689-4c75-8019-a05b4e70c782\") " pod="openshift-ovn-kubernetes/ovnkube-node-5cmb5" Dec 10 12:27:58 crc kubenswrapper[4689]: I1210 12:27:58.434476 4689 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-5cmb5" Dec 10 12:27:58 crc kubenswrapper[4689]: I1210 12:27:58.465866 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-r6wmt_3713b4f8-2ee3-4078-859a-dca17076f9a6/kube-multus/2.log" Dec 10 12:27:58 crc kubenswrapper[4689]: I1210 12:27:58.466724 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-r6wmt_3713b4f8-2ee3-4078-859a-dca17076f9a6/kube-multus/1.log" Dec 10 12:27:58 crc kubenswrapper[4689]: I1210 12:27:58.466793 4689 generic.go:334] "Generic (PLEG): container finished" podID="3713b4f8-2ee3-4078-859a-dca17076f9a6" containerID="155a09b4097a5aad76a37c3319f3a1e4925daeeba3803b5adba74775d48e8d02" exitCode=2 Dec 10 12:27:58 crc kubenswrapper[4689]: I1210 12:27:58.466907 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-r6wmt" event={"ID":"3713b4f8-2ee3-4078-859a-dca17076f9a6","Type":"ContainerDied","Data":"155a09b4097a5aad76a37c3319f3a1e4925daeeba3803b5adba74775d48e8d02"} Dec 10 12:27:58 crc kubenswrapper[4689]: I1210 12:27:58.467009 4689 scope.go:117] "RemoveContainer" containerID="60b9e7f27ed577b35c0ecc9f2f205e63e25d0fc8c22e4e49e13cd395ff14a83e" Dec 10 12:27:58 crc kubenswrapper[4689]: I1210 12:27:58.467703 4689 scope.go:117] "RemoveContainer" containerID="155a09b4097a5aad76a37c3319f3a1e4925daeeba3803b5adba74775d48e8d02" Dec 10 12:27:58 crc kubenswrapper[4689]: I1210 12:27:58.480731 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-d5s24_b434fe8e-c4c2-4979-a9b6-8561523c2d9d/ovnkube-controller/3.log" Dec 10 12:27:58 crc kubenswrapper[4689]: I1210 12:27:58.486325 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-d5s24_b434fe8e-c4c2-4979-a9b6-8561523c2d9d/ovn-acl-logging/0.log" Dec 10 12:27:58 crc kubenswrapper[4689]: I1210 12:27:58.487558 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-d5s24_b434fe8e-c4c2-4979-a9b6-8561523c2d9d/ovn-controller/0.log" Dec 10 12:27:58 crc kubenswrapper[4689]: I1210 12:27:58.488269 4689 generic.go:334] "Generic (PLEG): container finished" podID="b434fe8e-c4c2-4979-a9b6-8561523c2d9d" containerID="6c431d5019498721f833fd36391e4a59baa27e8b5605feec6ae956436a5391a2" exitCode=0 Dec 10 12:27:58 crc kubenswrapper[4689]: I1210 12:27:58.488312 4689 generic.go:334] "Generic (PLEG): container finished" podID="b434fe8e-c4c2-4979-a9b6-8561523c2d9d" containerID="e28b779b8a8693084ab0997decc53b3a8eba61bef24ac90ff251d915f086568c" exitCode=0 Dec 10 12:27:58 crc kubenswrapper[4689]: I1210 12:27:58.488333 4689 generic.go:334] "Generic (PLEG): container finished" podID="b434fe8e-c4c2-4979-a9b6-8561523c2d9d" containerID="6523d85b2b13f0025f328e66fc54d988a262d66575656c50d792bebc6a422301" exitCode=0 Dec 10 12:27:58 crc kubenswrapper[4689]: I1210 12:27:58.488353 4689 generic.go:334] "Generic (PLEG): container finished" podID="b434fe8e-c4c2-4979-a9b6-8561523c2d9d" containerID="4faf578e1c2dcab798cd856befbd2dfc061f7e003ead2304d437041d5ee1dd07" exitCode=0 Dec 10 12:27:58 crc kubenswrapper[4689]: I1210 12:27:58.488368 4689 generic.go:334] "Generic (PLEG): container finished" podID="b434fe8e-c4c2-4979-a9b6-8561523c2d9d" containerID="69f1e8960af0a51f61c14bd54dc3d3be7189911242699061b6a5e265bf23da21" exitCode=0 Dec 10 12:27:58 crc kubenswrapper[4689]: I1210 12:27:58.488389 4689 generic.go:334] "Generic (PLEG): container finished" 
podID="b434fe8e-c4c2-4979-a9b6-8561523c2d9d" containerID="f4dd21d5fefe31117dd9f13896400256a11840488886f8455dc177c0389e942a" exitCode=0 Dec 10 12:27:58 crc kubenswrapper[4689]: I1210 12:27:58.488417 4689 generic.go:334] "Generic (PLEG): container finished" podID="b434fe8e-c4c2-4979-a9b6-8561523c2d9d" containerID="eebe28cf16767ec011f1e571cdb5f6babf931f4444e92fccf2f1bb5aa5deeec9" exitCode=143 Dec 10 12:27:58 crc kubenswrapper[4689]: I1210 12:27:58.488375 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-d5s24" event={"ID":"b434fe8e-c4c2-4979-a9b6-8561523c2d9d","Type":"ContainerDied","Data":"6c431d5019498721f833fd36391e4a59baa27e8b5605feec6ae956436a5391a2"} Dec 10 12:27:58 crc kubenswrapper[4689]: I1210 12:27:58.488477 4689 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-d5s24" Dec 10 12:27:58 crc kubenswrapper[4689]: I1210 12:27:58.488489 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-d5s24" event={"ID":"b434fe8e-c4c2-4979-a9b6-8561523c2d9d","Type":"ContainerDied","Data":"e28b779b8a8693084ab0997decc53b3a8eba61bef24ac90ff251d915f086568c"} Dec 10 12:27:58 crc kubenswrapper[4689]: I1210 12:27:58.488547 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-d5s24" event={"ID":"b434fe8e-c4c2-4979-a9b6-8561523c2d9d","Type":"ContainerDied","Data":"6523d85b2b13f0025f328e66fc54d988a262d66575656c50d792bebc6a422301"} Dec 10 12:27:58 crc kubenswrapper[4689]: I1210 12:27:58.488569 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-d5s24" event={"ID":"b434fe8e-c4c2-4979-a9b6-8561523c2d9d","Type":"ContainerDied","Data":"4faf578e1c2dcab798cd856befbd2dfc061f7e003ead2304d437041d5ee1dd07"} Dec 10 12:27:58 crc kubenswrapper[4689]: I1210 12:27:58.488600 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-d5s24" event={"ID":"b434fe8e-c4c2-4979-a9b6-8561523c2d9d","Type":"ContainerDied","Data":"69f1e8960af0a51f61c14bd54dc3d3be7189911242699061b6a5e265bf23da21"} Dec 10 12:27:58 crc kubenswrapper[4689]: I1210 12:27:58.488620 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-d5s24" event={"ID":"b434fe8e-c4c2-4979-a9b6-8561523c2d9d","Type":"ContainerDied","Data":"f4dd21d5fefe31117dd9f13896400256a11840488886f8455dc177c0389e942a"} Dec 10 12:27:58 crc kubenswrapper[4689]: I1210 12:27:58.488640 4689 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"6c431d5019498721f833fd36391e4a59baa27e8b5605feec6ae956436a5391a2"} Dec 10 12:27:58 crc kubenswrapper[4689]: I1210 12:27:58.488659 4689 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"bb00374f8054c3041d91465c99271185be6fdb258a541ba83aed5bb1059178e1"} Dec 10 12:27:58 crc kubenswrapper[4689]: I1210 12:27:58.488673 4689 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"e28b779b8a8693084ab0997decc53b3a8eba61bef24ac90ff251d915f086568c"} Dec 10 12:27:58 crc kubenswrapper[4689]: I1210 12:27:58.488686 4689 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"6523d85b2b13f0025f328e66fc54d988a262d66575656c50d792bebc6a422301"} Dec 10 12:27:58 crc kubenswrapper[4689]: I1210 12:27:58.488698 4689 
pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"4faf578e1c2dcab798cd856befbd2dfc061f7e003ead2304d437041d5ee1dd07"} Dec 10 12:27:58 crc kubenswrapper[4689]: I1210 12:27:58.488709 4689 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"69f1e8960af0a51f61c14bd54dc3d3be7189911242699061b6a5e265bf23da21"} Dec 10 12:27:58 crc kubenswrapper[4689]: I1210 12:27:58.488720 4689 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f4dd21d5fefe31117dd9f13896400256a11840488886f8455dc177c0389e942a"} Dec 10 12:27:58 crc kubenswrapper[4689]: I1210 12:27:58.488731 4689 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"eebe28cf16767ec011f1e571cdb5f6babf931f4444e92fccf2f1bb5aa5deeec9"} Dec 10 12:27:58 crc kubenswrapper[4689]: I1210 12:27:58.488747 4689 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"17d173f3388ef7471022163cbdf3e0a2454e42b1ed621b15512414504b2d9723"} Dec 10 12:27:58 crc kubenswrapper[4689]: I1210 12:27:58.488434 4689 generic.go:334] "Generic (PLEG): container finished" podID="b434fe8e-c4c2-4979-a9b6-8561523c2d9d" containerID="17d173f3388ef7471022163cbdf3e0a2454e42b1ed621b15512414504b2d9723" exitCode=143 Dec 10 12:27:58 crc kubenswrapper[4689]: I1210 12:27:58.488760 4689 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"b5b9a4f871b33ab8d7dbb8148e8a118858ae0d529510a8aebda0a3d9db97c137"} Dec 10 12:27:58 crc kubenswrapper[4689]: I1210 12:27:58.488812 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-d5s24" event={"ID":"b434fe8e-c4c2-4979-a9b6-8561523c2d9d","Type":"ContainerDied","Data":"eebe28cf16767ec011f1e571cdb5f6babf931f4444e92fccf2f1bb5aa5deeec9"} Dec 10 12:27:58 crc kubenswrapper[4689]: I1210 12:27:58.488848 4689 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"6c431d5019498721f833fd36391e4a59baa27e8b5605feec6ae956436a5391a2"} Dec 10 12:27:58 crc kubenswrapper[4689]: I1210 12:27:58.488865 4689 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"bb00374f8054c3041d91465c99271185be6fdb258a541ba83aed5bb1059178e1"} Dec 10 12:27:58 crc kubenswrapper[4689]: I1210 12:27:58.488879 4689 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"e28b779b8a8693084ab0997decc53b3a8eba61bef24ac90ff251d915f086568c"} Dec 10 12:27:58 crc kubenswrapper[4689]: I1210 12:27:58.488891 4689 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"6523d85b2b13f0025f328e66fc54d988a262d66575656c50d792bebc6a422301"} Dec 10 12:27:58 crc kubenswrapper[4689]: I1210 12:27:58.488903 4689 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"4faf578e1c2dcab798cd856befbd2dfc061f7e003ead2304d437041d5ee1dd07"} Dec 10 12:27:58 crc kubenswrapper[4689]: I1210 12:27:58.488915 4689 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"69f1e8960af0a51f61c14bd54dc3d3be7189911242699061b6a5e265bf23da21"} Dec 10 12:27:58 crc kubenswrapper[4689]: I1210 
12:27:58.488926 4689 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f4dd21d5fefe31117dd9f13896400256a11840488886f8455dc177c0389e942a"} Dec 10 12:27:58 crc kubenswrapper[4689]: I1210 12:27:58.488938 4689 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"eebe28cf16767ec011f1e571cdb5f6babf931f4444e92fccf2f1bb5aa5deeec9"} Dec 10 12:27:58 crc kubenswrapper[4689]: I1210 12:27:58.488949 4689 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"17d173f3388ef7471022163cbdf3e0a2454e42b1ed621b15512414504b2d9723"} Dec 10 12:27:58 crc kubenswrapper[4689]: I1210 12:27:58.488960 4689 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"b5b9a4f871b33ab8d7dbb8148e8a118858ae0d529510a8aebda0a3d9db97c137"} Dec 10 12:27:58 crc kubenswrapper[4689]: I1210 12:27:58.489011 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-d5s24" event={"ID":"b434fe8e-c4c2-4979-a9b6-8561523c2d9d","Type":"ContainerDied","Data":"17d173f3388ef7471022163cbdf3e0a2454e42b1ed621b15512414504b2d9723"} Dec 10 12:27:58 crc kubenswrapper[4689]: I1210 12:27:58.489030 4689 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"6c431d5019498721f833fd36391e4a59baa27e8b5605feec6ae956436a5391a2"} Dec 10 12:27:58 crc kubenswrapper[4689]: I1210 12:27:58.489044 4689 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"bb00374f8054c3041d91465c99271185be6fdb258a541ba83aed5bb1059178e1"} Dec 10 12:27:58 crc kubenswrapper[4689]: I1210 12:27:58.489061 4689 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"e28b779b8a8693084ab0997decc53b3a8eba61bef24ac90ff251d915f086568c"} Dec 10 12:27:58 crc kubenswrapper[4689]: I1210 12:27:58.489077 4689 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"6523d85b2b13f0025f328e66fc54d988a262d66575656c50d792bebc6a422301"} Dec 10 12:27:58 crc kubenswrapper[4689]: I1210 12:27:58.489092 4689 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"4faf578e1c2dcab798cd856befbd2dfc061f7e003ead2304d437041d5ee1dd07"} Dec 10 12:27:58 crc kubenswrapper[4689]: I1210 12:27:58.489108 4689 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"69f1e8960af0a51f61c14bd54dc3d3be7189911242699061b6a5e265bf23da21"} Dec 10 12:27:58 crc kubenswrapper[4689]: I1210 12:27:58.489122 4689 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f4dd21d5fefe31117dd9f13896400256a11840488886f8455dc177c0389e942a"} Dec 10 12:27:58 crc kubenswrapper[4689]: I1210 12:27:58.489137 4689 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"eebe28cf16767ec011f1e571cdb5f6babf931f4444e92fccf2f1bb5aa5deeec9"} Dec 10 12:27:58 crc kubenswrapper[4689]: I1210 12:27:58.489153 4689 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"17d173f3388ef7471022163cbdf3e0a2454e42b1ed621b15512414504b2d9723"} Dec 10 12:27:58 crc kubenswrapper[4689]: I1210 
12:27:58.489168 4689 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"b5b9a4f871b33ab8d7dbb8148e8a118858ae0d529510a8aebda0a3d9db97c137"} Dec 10 12:27:58 crc kubenswrapper[4689]: I1210 12:27:58.489190 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-d5s24" event={"ID":"b434fe8e-c4c2-4979-a9b6-8561523c2d9d","Type":"ContainerDied","Data":"7af4fd28dd28c02132a7dad257583be4949813113eee241d6cd3e57007ab8ee7"} Dec 10 12:27:58 crc kubenswrapper[4689]: I1210 12:27:58.489216 4689 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"6c431d5019498721f833fd36391e4a59baa27e8b5605feec6ae956436a5391a2"} Dec 10 12:27:58 crc kubenswrapper[4689]: I1210 12:27:58.489234 4689 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"bb00374f8054c3041d91465c99271185be6fdb258a541ba83aed5bb1059178e1"} Dec 10 12:27:58 crc kubenswrapper[4689]: I1210 12:27:58.489250 4689 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"e28b779b8a8693084ab0997decc53b3a8eba61bef24ac90ff251d915f086568c"} Dec 10 12:27:58 crc kubenswrapper[4689]: I1210 12:27:58.489302 4689 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"6523d85b2b13f0025f328e66fc54d988a262d66575656c50d792bebc6a422301"} Dec 10 12:27:58 crc kubenswrapper[4689]: I1210 12:27:58.489319 4689 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"4faf578e1c2dcab798cd856befbd2dfc061f7e003ead2304d437041d5ee1dd07"} Dec 10 12:27:58 crc kubenswrapper[4689]: I1210 12:27:58.489333 4689 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"69f1e8960af0a51f61c14bd54dc3d3be7189911242699061b6a5e265bf23da21"} Dec 10 12:27:58 crc kubenswrapper[4689]: I1210 12:27:58.489347 4689 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f4dd21d5fefe31117dd9f13896400256a11840488886f8455dc177c0389e942a"} Dec 10 12:27:58 crc kubenswrapper[4689]: I1210 12:27:58.489363 4689 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"eebe28cf16767ec011f1e571cdb5f6babf931f4444e92fccf2f1bb5aa5deeec9"} Dec 10 12:27:58 crc kubenswrapper[4689]: I1210 12:27:58.489377 4689 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"17d173f3388ef7471022163cbdf3e0a2454e42b1ed621b15512414504b2d9723"} Dec 10 12:27:58 crc kubenswrapper[4689]: I1210 12:27:58.489391 4689 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"b5b9a4f871b33ab8d7dbb8148e8a118858ae0d529510a8aebda0a3d9db97c137"} Dec 10 12:27:58 crc kubenswrapper[4689]: I1210 12:27:58.527526 4689 scope.go:117] "RemoveContainer" containerID="6c431d5019498721f833fd36391e4a59baa27e8b5605feec6ae956436a5391a2" Dec 10 12:27:58 crc kubenswrapper[4689]: I1210 12:27:58.554210 4689 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-d5s24"] Dec 10 12:27:58 crc kubenswrapper[4689]: I1210 12:27:58.558130 4689 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-d5s24"] Dec 10 12:27:58 crc 
kubenswrapper[4689]: I1210 12:27:58.563032 4689 scope.go:117] "RemoveContainer" containerID="bb00374f8054c3041d91465c99271185be6fdb258a541ba83aed5bb1059178e1" Dec 10 12:27:58 crc kubenswrapper[4689]: I1210 12:27:58.590343 4689 scope.go:117] "RemoveContainer" containerID="e28b779b8a8693084ab0997decc53b3a8eba61bef24ac90ff251d915f086568c" Dec 10 12:27:58 crc kubenswrapper[4689]: I1210 12:27:58.615053 4689 scope.go:117] "RemoveContainer" containerID="6523d85b2b13f0025f328e66fc54d988a262d66575656c50d792bebc6a422301" Dec 10 12:27:58 crc kubenswrapper[4689]: I1210 12:27:58.632202 4689 scope.go:117] "RemoveContainer" containerID="4faf578e1c2dcab798cd856befbd2dfc061f7e003ead2304d437041d5ee1dd07" Dec 10 12:27:58 crc kubenswrapper[4689]: I1210 12:27:58.648056 4689 scope.go:117] "RemoveContainer" containerID="69f1e8960af0a51f61c14bd54dc3d3be7189911242699061b6a5e265bf23da21" Dec 10 12:27:58 crc kubenswrapper[4689]: I1210 12:27:58.722631 4689 scope.go:117] "RemoveContainer" containerID="f4dd21d5fefe31117dd9f13896400256a11840488886f8455dc177c0389e942a" Dec 10 12:27:58 crc kubenswrapper[4689]: I1210 12:27:58.739091 4689 scope.go:117] "RemoveContainer" containerID="eebe28cf16767ec011f1e571cdb5f6babf931f4444e92fccf2f1bb5aa5deeec9" Dec 10 12:27:58 crc kubenswrapper[4689]: I1210 12:27:58.759833 4689 scope.go:117] "RemoveContainer" containerID="17d173f3388ef7471022163cbdf3e0a2454e42b1ed621b15512414504b2d9723" Dec 10 12:27:58 crc kubenswrapper[4689]: I1210 12:27:58.776190 4689 scope.go:117] "RemoveContainer" containerID="b5b9a4f871b33ab8d7dbb8148e8a118858ae0d529510a8aebda0a3d9db97c137" Dec 10 12:27:58 crc kubenswrapper[4689]: I1210 12:27:58.791099 4689 scope.go:117] "RemoveContainer" containerID="6c431d5019498721f833fd36391e4a59baa27e8b5605feec6ae956436a5391a2" Dec 10 12:27:58 crc kubenswrapper[4689]: E1210 12:27:58.791466 4689 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6c431d5019498721f833fd36391e4a59baa27e8b5605feec6ae956436a5391a2\": container with ID starting with 6c431d5019498721f833fd36391e4a59baa27e8b5605feec6ae956436a5391a2 not found: ID does not exist" containerID="6c431d5019498721f833fd36391e4a59baa27e8b5605feec6ae956436a5391a2" Dec 10 12:27:58 crc kubenswrapper[4689]: I1210 12:27:58.791510 4689 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6c431d5019498721f833fd36391e4a59baa27e8b5605feec6ae956436a5391a2"} err="failed to get container status \"6c431d5019498721f833fd36391e4a59baa27e8b5605feec6ae956436a5391a2\": rpc error: code = NotFound desc = could not find container \"6c431d5019498721f833fd36391e4a59baa27e8b5605feec6ae956436a5391a2\": container with ID starting with 6c431d5019498721f833fd36391e4a59baa27e8b5605feec6ae956436a5391a2 not found: ID does not exist" Dec 10 12:27:58 crc kubenswrapper[4689]: I1210 12:27:58.791529 4689 scope.go:117] "RemoveContainer" containerID="bb00374f8054c3041d91465c99271185be6fdb258a541ba83aed5bb1059178e1" Dec 10 12:27:58 crc kubenswrapper[4689]: E1210 12:27:58.791805 4689 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bb00374f8054c3041d91465c99271185be6fdb258a541ba83aed5bb1059178e1\": container with ID starting with bb00374f8054c3041d91465c99271185be6fdb258a541ba83aed5bb1059178e1 not found: ID does not exist" containerID="bb00374f8054c3041d91465c99271185be6fdb258a541ba83aed5bb1059178e1" Dec 10 12:27:58 crc kubenswrapper[4689]: I1210 12:27:58.791848 4689 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bb00374f8054c3041d91465c99271185be6fdb258a541ba83aed5bb1059178e1"} err="failed to get container status \"bb00374f8054c3041d91465c99271185be6fdb258a541ba83aed5bb1059178e1\": rpc error: code = NotFound desc = could not find container \"bb00374f8054c3041d91465c99271185be6fdb258a541ba83aed5bb1059178e1\": container with ID starting with bb00374f8054c3041d91465c99271185be6fdb258a541ba83aed5bb1059178e1 not found: ID does not exist" Dec 10 12:27:58 crc kubenswrapper[4689]: I1210 12:27:58.791887 4689 scope.go:117] "RemoveContainer" containerID="e28b779b8a8693084ab0997decc53b3a8eba61bef24ac90ff251d915f086568c" Dec 10 12:27:58 crc kubenswrapper[4689]: E1210 12:27:58.792370 4689 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e28b779b8a8693084ab0997decc53b3a8eba61bef24ac90ff251d915f086568c\": container with ID starting with e28b779b8a8693084ab0997decc53b3a8eba61bef24ac90ff251d915f086568c not found: ID does not exist" containerID="e28b779b8a8693084ab0997decc53b3a8eba61bef24ac90ff251d915f086568c" Dec 10 12:27:58 crc kubenswrapper[4689]: I1210 12:27:58.792412 4689 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e28b779b8a8693084ab0997decc53b3a8eba61bef24ac90ff251d915f086568c"} err="failed to get container status \"e28b779b8a8693084ab0997decc53b3a8eba61bef24ac90ff251d915f086568c\": rpc error: code = NotFound desc = could not find container \"e28b779b8a8693084ab0997decc53b3a8eba61bef24ac90ff251d915f086568c\": container with ID starting with e28b779b8a8693084ab0997decc53b3a8eba61bef24ac90ff251d915f086568c not found: ID does not exist" Dec 10 12:27:58 crc kubenswrapper[4689]: I1210 12:27:58.792426 4689 scope.go:117] "RemoveContainer" containerID="6523d85b2b13f0025f328e66fc54d988a262d66575656c50d792bebc6a422301" Dec 10 12:27:58 crc kubenswrapper[4689]: E1210 12:27:58.792702 4689 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6523d85b2b13f0025f328e66fc54d988a262d66575656c50d792bebc6a422301\": container with ID starting with 6523d85b2b13f0025f328e66fc54d988a262d66575656c50d792bebc6a422301 not found: ID does not exist" containerID="6523d85b2b13f0025f328e66fc54d988a262d66575656c50d792bebc6a422301" Dec 10 12:27:58 crc kubenswrapper[4689]: I1210 12:27:58.792738 4689 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6523d85b2b13f0025f328e66fc54d988a262d66575656c50d792bebc6a422301"} err="failed to get container status \"6523d85b2b13f0025f328e66fc54d988a262d66575656c50d792bebc6a422301\": rpc error: code = NotFound desc = could not find container \"6523d85b2b13f0025f328e66fc54d988a262d66575656c50d792bebc6a422301\": container with ID starting with 6523d85b2b13f0025f328e66fc54d988a262d66575656c50d792bebc6a422301 not found: ID does not exist" Dec 10 12:27:58 crc kubenswrapper[4689]: I1210 12:27:58.792761 4689 scope.go:117] "RemoveContainer" containerID="4faf578e1c2dcab798cd856befbd2dfc061f7e003ead2304d437041d5ee1dd07" Dec 10 12:27:58 crc kubenswrapper[4689]: E1210 12:27:58.793034 4689 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4faf578e1c2dcab798cd856befbd2dfc061f7e003ead2304d437041d5ee1dd07\": container with ID starting with 4faf578e1c2dcab798cd856befbd2dfc061f7e003ead2304d437041d5ee1dd07 not found: ID does 
not exist" containerID="4faf578e1c2dcab798cd856befbd2dfc061f7e003ead2304d437041d5ee1dd07" Dec 10 12:27:58 crc kubenswrapper[4689]: I1210 12:27:58.793081 4689 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4faf578e1c2dcab798cd856befbd2dfc061f7e003ead2304d437041d5ee1dd07"} err="failed to get container status \"4faf578e1c2dcab798cd856befbd2dfc061f7e003ead2304d437041d5ee1dd07\": rpc error: code = NotFound desc = could not find container \"4faf578e1c2dcab798cd856befbd2dfc061f7e003ead2304d437041d5ee1dd07\": container with ID starting with 4faf578e1c2dcab798cd856befbd2dfc061f7e003ead2304d437041d5ee1dd07 not found: ID does not exist" Dec 10 12:27:58 crc kubenswrapper[4689]: I1210 12:27:58.793097 4689 scope.go:117] "RemoveContainer" containerID="69f1e8960af0a51f61c14bd54dc3d3be7189911242699061b6a5e265bf23da21" Dec 10 12:27:58 crc kubenswrapper[4689]: E1210 12:27:58.793486 4689 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"69f1e8960af0a51f61c14bd54dc3d3be7189911242699061b6a5e265bf23da21\": container with ID starting with 69f1e8960af0a51f61c14bd54dc3d3be7189911242699061b6a5e265bf23da21 not found: ID does not exist" containerID="69f1e8960af0a51f61c14bd54dc3d3be7189911242699061b6a5e265bf23da21" Dec 10 12:27:58 crc kubenswrapper[4689]: I1210 12:27:58.793508 4689 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"69f1e8960af0a51f61c14bd54dc3d3be7189911242699061b6a5e265bf23da21"} err="failed to get container status \"69f1e8960af0a51f61c14bd54dc3d3be7189911242699061b6a5e265bf23da21\": rpc error: code = NotFound desc = could not find container \"69f1e8960af0a51f61c14bd54dc3d3be7189911242699061b6a5e265bf23da21\": container with ID starting with 69f1e8960af0a51f61c14bd54dc3d3be7189911242699061b6a5e265bf23da21 not found: ID does not exist" Dec 10 12:27:58 crc kubenswrapper[4689]: I1210 12:27:58.793537 4689 scope.go:117] "RemoveContainer" containerID="f4dd21d5fefe31117dd9f13896400256a11840488886f8455dc177c0389e942a" Dec 10 12:27:58 crc kubenswrapper[4689]: E1210 12:27:58.793828 4689 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f4dd21d5fefe31117dd9f13896400256a11840488886f8455dc177c0389e942a\": container with ID starting with f4dd21d5fefe31117dd9f13896400256a11840488886f8455dc177c0389e942a not found: ID does not exist" containerID="f4dd21d5fefe31117dd9f13896400256a11840488886f8455dc177c0389e942a" Dec 10 12:27:58 crc kubenswrapper[4689]: I1210 12:27:58.793861 4689 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f4dd21d5fefe31117dd9f13896400256a11840488886f8455dc177c0389e942a"} err="failed to get container status \"f4dd21d5fefe31117dd9f13896400256a11840488886f8455dc177c0389e942a\": rpc error: code = NotFound desc = could not find container \"f4dd21d5fefe31117dd9f13896400256a11840488886f8455dc177c0389e942a\": container with ID starting with f4dd21d5fefe31117dd9f13896400256a11840488886f8455dc177c0389e942a not found: ID does not exist" Dec 10 12:27:58 crc kubenswrapper[4689]: I1210 12:27:58.793878 4689 scope.go:117] "RemoveContainer" containerID="eebe28cf16767ec011f1e571cdb5f6babf931f4444e92fccf2f1bb5aa5deeec9" Dec 10 12:27:58 crc kubenswrapper[4689]: E1210 12:27:58.794220 4689 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"eebe28cf16767ec011f1e571cdb5f6babf931f4444e92fccf2f1bb5aa5deeec9\": container with ID starting with eebe28cf16767ec011f1e571cdb5f6babf931f4444e92fccf2f1bb5aa5deeec9 not found: ID does not exist" containerID="eebe28cf16767ec011f1e571cdb5f6babf931f4444e92fccf2f1bb5aa5deeec9" Dec 10 12:27:58 crc kubenswrapper[4689]: I1210 12:27:58.794260 4689 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eebe28cf16767ec011f1e571cdb5f6babf931f4444e92fccf2f1bb5aa5deeec9"} err="failed to get container status \"eebe28cf16767ec011f1e571cdb5f6babf931f4444e92fccf2f1bb5aa5deeec9\": rpc error: code = NotFound desc = could not find container \"eebe28cf16767ec011f1e571cdb5f6babf931f4444e92fccf2f1bb5aa5deeec9\": container with ID starting with eebe28cf16767ec011f1e571cdb5f6babf931f4444e92fccf2f1bb5aa5deeec9 not found: ID does not exist" Dec 10 12:27:58 crc kubenswrapper[4689]: I1210 12:27:58.794273 4689 scope.go:117] "RemoveContainer" containerID="17d173f3388ef7471022163cbdf3e0a2454e42b1ed621b15512414504b2d9723" Dec 10 12:27:58 crc kubenswrapper[4689]: E1210 12:27:58.794492 4689 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"17d173f3388ef7471022163cbdf3e0a2454e42b1ed621b15512414504b2d9723\": container with ID starting with 17d173f3388ef7471022163cbdf3e0a2454e42b1ed621b15512414504b2d9723 not found: ID does not exist" containerID="17d173f3388ef7471022163cbdf3e0a2454e42b1ed621b15512414504b2d9723" Dec 10 12:27:58 crc kubenswrapper[4689]: I1210 12:27:58.794523 4689 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"17d173f3388ef7471022163cbdf3e0a2454e42b1ed621b15512414504b2d9723"} err="failed to get container status \"17d173f3388ef7471022163cbdf3e0a2454e42b1ed621b15512414504b2d9723\": rpc error: code = NotFound desc = could not find container \"17d173f3388ef7471022163cbdf3e0a2454e42b1ed621b15512414504b2d9723\": container with ID starting with 17d173f3388ef7471022163cbdf3e0a2454e42b1ed621b15512414504b2d9723 not found: ID does not exist" Dec 10 12:27:58 crc kubenswrapper[4689]: I1210 12:27:58.794544 4689 scope.go:117] "RemoveContainer" containerID="b5b9a4f871b33ab8d7dbb8148e8a118858ae0d529510a8aebda0a3d9db97c137" Dec 10 12:27:58 crc kubenswrapper[4689]: E1210 12:27:58.794842 4689 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b5b9a4f871b33ab8d7dbb8148e8a118858ae0d529510a8aebda0a3d9db97c137\": container with ID starting with b5b9a4f871b33ab8d7dbb8148e8a118858ae0d529510a8aebda0a3d9db97c137 not found: ID does not exist" containerID="b5b9a4f871b33ab8d7dbb8148e8a118858ae0d529510a8aebda0a3d9db97c137" Dec 10 12:27:58 crc kubenswrapper[4689]: I1210 12:27:58.794876 4689 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b5b9a4f871b33ab8d7dbb8148e8a118858ae0d529510a8aebda0a3d9db97c137"} err="failed to get container status \"b5b9a4f871b33ab8d7dbb8148e8a118858ae0d529510a8aebda0a3d9db97c137\": rpc error: code = NotFound desc = could not find container \"b5b9a4f871b33ab8d7dbb8148e8a118858ae0d529510a8aebda0a3d9db97c137\": container with ID starting with b5b9a4f871b33ab8d7dbb8148e8a118858ae0d529510a8aebda0a3d9db97c137 not found: ID does not exist" Dec 10 12:27:58 crc kubenswrapper[4689]: I1210 12:27:58.794897 4689 scope.go:117] "RemoveContainer" containerID="6c431d5019498721f833fd36391e4a59baa27e8b5605feec6ae956436a5391a2" Dec 10 12:27:58 crc 
kubenswrapper[4689]: I1210 12:27:58.795259 4689 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6c431d5019498721f833fd36391e4a59baa27e8b5605feec6ae956436a5391a2"} err="failed to get container status \"6c431d5019498721f833fd36391e4a59baa27e8b5605feec6ae956436a5391a2\": rpc error: code = NotFound desc = could not find container \"6c431d5019498721f833fd36391e4a59baa27e8b5605feec6ae956436a5391a2\": container with ID starting with 6c431d5019498721f833fd36391e4a59baa27e8b5605feec6ae956436a5391a2 not found: ID does not exist" Dec 10 12:27:58 crc kubenswrapper[4689]: I1210 12:27:58.795284 4689 scope.go:117] "RemoveContainer" containerID="bb00374f8054c3041d91465c99271185be6fdb258a541ba83aed5bb1059178e1" Dec 10 12:27:58 crc kubenswrapper[4689]: I1210 12:27:58.795606 4689 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bb00374f8054c3041d91465c99271185be6fdb258a541ba83aed5bb1059178e1"} err="failed to get container status \"bb00374f8054c3041d91465c99271185be6fdb258a541ba83aed5bb1059178e1\": rpc error: code = NotFound desc = could not find container \"bb00374f8054c3041d91465c99271185be6fdb258a541ba83aed5bb1059178e1\": container with ID starting with bb00374f8054c3041d91465c99271185be6fdb258a541ba83aed5bb1059178e1 not found: ID does not exist" Dec 10 12:27:58 crc kubenswrapper[4689]: I1210 12:27:58.795636 4689 scope.go:117] "RemoveContainer" containerID="e28b779b8a8693084ab0997decc53b3a8eba61bef24ac90ff251d915f086568c" Dec 10 12:27:58 crc kubenswrapper[4689]: I1210 12:27:58.797294 4689 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e28b779b8a8693084ab0997decc53b3a8eba61bef24ac90ff251d915f086568c"} err="failed to get container status \"e28b779b8a8693084ab0997decc53b3a8eba61bef24ac90ff251d915f086568c\": rpc error: code = NotFound desc = could not find container \"e28b779b8a8693084ab0997decc53b3a8eba61bef24ac90ff251d915f086568c\": container with ID starting with e28b779b8a8693084ab0997decc53b3a8eba61bef24ac90ff251d915f086568c not found: ID does not exist" Dec 10 12:27:58 crc kubenswrapper[4689]: I1210 12:27:58.797361 4689 scope.go:117] "RemoveContainer" containerID="6523d85b2b13f0025f328e66fc54d988a262d66575656c50d792bebc6a422301" Dec 10 12:27:58 crc kubenswrapper[4689]: I1210 12:27:58.797703 4689 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6523d85b2b13f0025f328e66fc54d988a262d66575656c50d792bebc6a422301"} err="failed to get container status \"6523d85b2b13f0025f328e66fc54d988a262d66575656c50d792bebc6a422301\": rpc error: code = NotFound desc = could not find container \"6523d85b2b13f0025f328e66fc54d988a262d66575656c50d792bebc6a422301\": container with ID starting with 6523d85b2b13f0025f328e66fc54d988a262d66575656c50d792bebc6a422301 not found: ID does not exist" Dec 10 12:27:58 crc kubenswrapper[4689]: I1210 12:27:58.797736 4689 scope.go:117] "RemoveContainer" containerID="4faf578e1c2dcab798cd856befbd2dfc061f7e003ead2304d437041d5ee1dd07" Dec 10 12:27:58 crc kubenswrapper[4689]: I1210 12:27:58.798076 4689 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4faf578e1c2dcab798cd856befbd2dfc061f7e003ead2304d437041d5ee1dd07"} err="failed to get container status \"4faf578e1c2dcab798cd856befbd2dfc061f7e003ead2304d437041d5ee1dd07\": rpc error: code = NotFound desc = could not find container \"4faf578e1c2dcab798cd856befbd2dfc061f7e003ead2304d437041d5ee1dd07\": container with ID 
starting with 4faf578e1c2dcab798cd856befbd2dfc061f7e003ead2304d437041d5ee1dd07 not found: ID does not exist" Dec 10 12:27:58 crc kubenswrapper[4689]: I1210 12:27:58.798116 4689 scope.go:117] "RemoveContainer" containerID="69f1e8960af0a51f61c14bd54dc3d3be7189911242699061b6a5e265bf23da21" Dec 10 12:27:58 crc kubenswrapper[4689]: I1210 12:27:58.798443 4689 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"69f1e8960af0a51f61c14bd54dc3d3be7189911242699061b6a5e265bf23da21"} err="failed to get container status \"69f1e8960af0a51f61c14bd54dc3d3be7189911242699061b6a5e265bf23da21\": rpc error: code = NotFound desc = could not find container \"69f1e8960af0a51f61c14bd54dc3d3be7189911242699061b6a5e265bf23da21\": container with ID starting with 69f1e8960af0a51f61c14bd54dc3d3be7189911242699061b6a5e265bf23da21 not found: ID does not exist" Dec 10 12:27:58 crc kubenswrapper[4689]: I1210 12:27:58.798476 4689 scope.go:117] "RemoveContainer" containerID="f4dd21d5fefe31117dd9f13896400256a11840488886f8455dc177c0389e942a" Dec 10 12:27:58 crc kubenswrapper[4689]: I1210 12:27:58.798866 4689 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f4dd21d5fefe31117dd9f13896400256a11840488886f8455dc177c0389e942a"} err="failed to get container status \"f4dd21d5fefe31117dd9f13896400256a11840488886f8455dc177c0389e942a\": rpc error: code = NotFound desc = could not find container \"f4dd21d5fefe31117dd9f13896400256a11840488886f8455dc177c0389e942a\": container with ID starting with f4dd21d5fefe31117dd9f13896400256a11840488886f8455dc177c0389e942a not found: ID does not exist" Dec 10 12:27:58 crc kubenswrapper[4689]: I1210 12:27:58.798888 4689 scope.go:117] "RemoveContainer" containerID="eebe28cf16767ec011f1e571cdb5f6babf931f4444e92fccf2f1bb5aa5deeec9" Dec 10 12:27:58 crc kubenswrapper[4689]: I1210 12:27:58.799145 4689 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eebe28cf16767ec011f1e571cdb5f6babf931f4444e92fccf2f1bb5aa5deeec9"} err="failed to get container status \"eebe28cf16767ec011f1e571cdb5f6babf931f4444e92fccf2f1bb5aa5deeec9\": rpc error: code = NotFound desc = could not find container \"eebe28cf16767ec011f1e571cdb5f6babf931f4444e92fccf2f1bb5aa5deeec9\": container with ID starting with eebe28cf16767ec011f1e571cdb5f6babf931f4444e92fccf2f1bb5aa5deeec9 not found: ID does not exist" Dec 10 12:27:58 crc kubenswrapper[4689]: I1210 12:27:58.799175 4689 scope.go:117] "RemoveContainer" containerID="17d173f3388ef7471022163cbdf3e0a2454e42b1ed621b15512414504b2d9723" Dec 10 12:27:58 crc kubenswrapper[4689]: I1210 12:27:58.799421 4689 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"17d173f3388ef7471022163cbdf3e0a2454e42b1ed621b15512414504b2d9723"} err="failed to get container status \"17d173f3388ef7471022163cbdf3e0a2454e42b1ed621b15512414504b2d9723\": rpc error: code = NotFound desc = could not find container \"17d173f3388ef7471022163cbdf3e0a2454e42b1ed621b15512414504b2d9723\": container with ID starting with 17d173f3388ef7471022163cbdf3e0a2454e42b1ed621b15512414504b2d9723 not found: ID does not exist" Dec 10 12:27:58 crc kubenswrapper[4689]: I1210 12:27:58.799453 4689 scope.go:117] "RemoveContainer" containerID="b5b9a4f871b33ab8d7dbb8148e8a118858ae0d529510a8aebda0a3d9db97c137" Dec 10 12:27:58 crc kubenswrapper[4689]: I1210 12:27:58.799942 4689 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"b5b9a4f871b33ab8d7dbb8148e8a118858ae0d529510a8aebda0a3d9db97c137"} err="failed to get container status \"b5b9a4f871b33ab8d7dbb8148e8a118858ae0d529510a8aebda0a3d9db97c137\": rpc error: code = NotFound desc = could not find container \"b5b9a4f871b33ab8d7dbb8148e8a118858ae0d529510a8aebda0a3d9db97c137\": container with ID starting with b5b9a4f871b33ab8d7dbb8148e8a118858ae0d529510a8aebda0a3d9db97c137 not found: ID does not exist" Dec 10 12:27:58 crc kubenswrapper[4689]: I1210 12:27:58.800008 4689 scope.go:117] "RemoveContainer" containerID="6c431d5019498721f833fd36391e4a59baa27e8b5605feec6ae956436a5391a2" Dec 10 12:27:58 crc kubenswrapper[4689]: I1210 12:27:58.800223 4689 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6c431d5019498721f833fd36391e4a59baa27e8b5605feec6ae956436a5391a2"} err="failed to get container status \"6c431d5019498721f833fd36391e4a59baa27e8b5605feec6ae956436a5391a2\": rpc error: code = NotFound desc = could not find container \"6c431d5019498721f833fd36391e4a59baa27e8b5605feec6ae956436a5391a2\": container with ID starting with 6c431d5019498721f833fd36391e4a59baa27e8b5605feec6ae956436a5391a2 not found: ID does not exist" Dec 10 12:27:58 crc kubenswrapper[4689]: I1210 12:27:58.800256 4689 scope.go:117] "RemoveContainer" containerID="bb00374f8054c3041d91465c99271185be6fdb258a541ba83aed5bb1059178e1" Dec 10 12:27:58 crc kubenswrapper[4689]: I1210 12:27:58.800699 4689 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bb00374f8054c3041d91465c99271185be6fdb258a541ba83aed5bb1059178e1"} err="failed to get container status \"bb00374f8054c3041d91465c99271185be6fdb258a541ba83aed5bb1059178e1\": rpc error: code = NotFound desc = could not find container \"bb00374f8054c3041d91465c99271185be6fdb258a541ba83aed5bb1059178e1\": container with ID starting with bb00374f8054c3041d91465c99271185be6fdb258a541ba83aed5bb1059178e1 not found: ID does not exist" Dec 10 12:27:58 crc kubenswrapper[4689]: I1210 12:27:58.800738 4689 scope.go:117] "RemoveContainer" containerID="e28b779b8a8693084ab0997decc53b3a8eba61bef24ac90ff251d915f086568c" Dec 10 12:27:58 crc kubenswrapper[4689]: I1210 12:27:58.801069 4689 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e28b779b8a8693084ab0997decc53b3a8eba61bef24ac90ff251d915f086568c"} err="failed to get container status \"e28b779b8a8693084ab0997decc53b3a8eba61bef24ac90ff251d915f086568c\": rpc error: code = NotFound desc = could not find container \"e28b779b8a8693084ab0997decc53b3a8eba61bef24ac90ff251d915f086568c\": container with ID starting with e28b779b8a8693084ab0997decc53b3a8eba61bef24ac90ff251d915f086568c not found: ID does not exist" Dec 10 12:27:58 crc kubenswrapper[4689]: I1210 12:27:58.801111 4689 scope.go:117] "RemoveContainer" containerID="6523d85b2b13f0025f328e66fc54d988a262d66575656c50d792bebc6a422301" Dec 10 12:27:58 crc kubenswrapper[4689]: I1210 12:27:58.801474 4689 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6523d85b2b13f0025f328e66fc54d988a262d66575656c50d792bebc6a422301"} err="failed to get container status \"6523d85b2b13f0025f328e66fc54d988a262d66575656c50d792bebc6a422301\": rpc error: code = NotFound desc = could not find container \"6523d85b2b13f0025f328e66fc54d988a262d66575656c50d792bebc6a422301\": container with ID starting with 6523d85b2b13f0025f328e66fc54d988a262d66575656c50d792bebc6a422301 not found: ID does not exist" Dec 
10 12:27:58 crc kubenswrapper[4689]: I1210 12:27:58.801503 4689 scope.go:117] "RemoveContainer" containerID="4faf578e1c2dcab798cd856befbd2dfc061f7e003ead2304d437041d5ee1dd07" Dec 10 12:27:58 crc kubenswrapper[4689]: I1210 12:27:58.801907 4689 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4faf578e1c2dcab798cd856befbd2dfc061f7e003ead2304d437041d5ee1dd07"} err="failed to get container status \"4faf578e1c2dcab798cd856befbd2dfc061f7e003ead2304d437041d5ee1dd07\": rpc error: code = NotFound desc = could not find container \"4faf578e1c2dcab798cd856befbd2dfc061f7e003ead2304d437041d5ee1dd07\": container with ID starting with 4faf578e1c2dcab798cd856befbd2dfc061f7e003ead2304d437041d5ee1dd07 not found: ID does not exist" Dec 10 12:27:58 crc kubenswrapper[4689]: I1210 12:27:58.801935 4689 scope.go:117] "RemoveContainer" containerID="69f1e8960af0a51f61c14bd54dc3d3be7189911242699061b6a5e265bf23da21" Dec 10 12:27:58 crc kubenswrapper[4689]: I1210 12:27:58.802278 4689 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"69f1e8960af0a51f61c14bd54dc3d3be7189911242699061b6a5e265bf23da21"} err="failed to get container status \"69f1e8960af0a51f61c14bd54dc3d3be7189911242699061b6a5e265bf23da21\": rpc error: code = NotFound desc = could not find container \"69f1e8960af0a51f61c14bd54dc3d3be7189911242699061b6a5e265bf23da21\": container with ID starting with 69f1e8960af0a51f61c14bd54dc3d3be7189911242699061b6a5e265bf23da21 not found: ID does not exist" Dec 10 12:27:58 crc kubenswrapper[4689]: I1210 12:27:58.802303 4689 scope.go:117] "RemoveContainer" containerID="f4dd21d5fefe31117dd9f13896400256a11840488886f8455dc177c0389e942a" Dec 10 12:27:58 crc kubenswrapper[4689]: I1210 12:27:58.802601 4689 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f4dd21d5fefe31117dd9f13896400256a11840488886f8455dc177c0389e942a"} err="failed to get container status \"f4dd21d5fefe31117dd9f13896400256a11840488886f8455dc177c0389e942a\": rpc error: code = NotFound desc = could not find container \"f4dd21d5fefe31117dd9f13896400256a11840488886f8455dc177c0389e942a\": container with ID starting with f4dd21d5fefe31117dd9f13896400256a11840488886f8455dc177c0389e942a not found: ID does not exist" Dec 10 12:27:58 crc kubenswrapper[4689]: I1210 12:27:58.802621 4689 scope.go:117] "RemoveContainer" containerID="eebe28cf16767ec011f1e571cdb5f6babf931f4444e92fccf2f1bb5aa5deeec9" Dec 10 12:27:58 crc kubenswrapper[4689]: I1210 12:27:58.802897 4689 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eebe28cf16767ec011f1e571cdb5f6babf931f4444e92fccf2f1bb5aa5deeec9"} err="failed to get container status \"eebe28cf16767ec011f1e571cdb5f6babf931f4444e92fccf2f1bb5aa5deeec9\": rpc error: code = NotFound desc = could not find container \"eebe28cf16767ec011f1e571cdb5f6babf931f4444e92fccf2f1bb5aa5deeec9\": container with ID starting with eebe28cf16767ec011f1e571cdb5f6babf931f4444e92fccf2f1bb5aa5deeec9 not found: ID does not exist" Dec 10 12:27:58 crc kubenswrapper[4689]: I1210 12:27:58.802921 4689 scope.go:117] "RemoveContainer" containerID="17d173f3388ef7471022163cbdf3e0a2454e42b1ed621b15512414504b2d9723" Dec 10 12:27:58 crc kubenswrapper[4689]: I1210 12:27:58.803290 4689 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"17d173f3388ef7471022163cbdf3e0a2454e42b1ed621b15512414504b2d9723"} err="failed to get container status 
\"17d173f3388ef7471022163cbdf3e0a2454e42b1ed621b15512414504b2d9723\": rpc error: code = NotFound desc = could not find container \"17d173f3388ef7471022163cbdf3e0a2454e42b1ed621b15512414504b2d9723\": container with ID starting with 17d173f3388ef7471022163cbdf3e0a2454e42b1ed621b15512414504b2d9723 not found: ID does not exist" Dec 10 12:27:58 crc kubenswrapper[4689]: I1210 12:27:58.803311 4689 scope.go:117] "RemoveContainer" containerID="b5b9a4f871b33ab8d7dbb8148e8a118858ae0d529510a8aebda0a3d9db97c137" Dec 10 12:27:58 crc kubenswrapper[4689]: I1210 12:27:58.803594 4689 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b5b9a4f871b33ab8d7dbb8148e8a118858ae0d529510a8aebda0a3d9db97c137"} err="failed to get container status \"b5b9a4f871b33ab8d7dbb8148e8a118858ae0d529510a8aebda0a3d9db97c137\": rpc error: code = NotFound desc = could not find container \"b5b9a4f871b33ab8d7dbb8148e8a118858ae0d529510a8aebda0a3d9db97c137\": container with ID starting with b5b9a4f871b33ab8d7dbb8148e8a118858ae0d529510a8aebda0a3d9db97c137 not found: ID does not exist" Dec 10 12:27:58 crc kubenswrapper[4689]: I1210 12:27:58.803622 4689 scope.go:117] "RemoveContainer" containerID="6c431d5019498721f833fd36391e4a59baa27e8b5605feec6ae956436a5391a2" Dec 10 12:27:58 crc kubenswrapper[4689]: I1210 12:27:58.803908 4689 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6c431d5019498721f833fd36391e4a59baa27e8b5605feec6ae956436a5391a2"} err="failed to get container status \"6c431d5019498721f833fd36391e4a59baa27e8b5605feec6ae956436a5391a2\": rpc error: code = NotFound desc = could not find container \"6c431d5019498721f833fd36391e4a59baa27e8b5605feec6ae956436a5391a2\": container with ID starting with 6c431d5019498721f833fd36391e4a59baa27e8b5605feec6ae956436a5391a2 not found: ID does not exist" Dec 10 12:27:58 crc kubenswrapper[4689]: I1210 12:27:58.803931 4689 scope.go:117] "RemoveContainer" containerID="bb00374f8054c3041d91465c99271185be6fdb258a541ba83aed5bb1059178e1" Dec 10 12:27:58 crc kubenswrapper[4689]: I1210 12:27:58.804197 4689 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bb00374f8054c3041d91465c99271185be6fdb258a541ba83aed5bb1059178e1"} err="failed to get container status \"bb00374f8054c3041d91465c99271185be6fdb258a541ba83aed5bb1059178e1\": rpc error: code = NotFound desc = could not find container \"bb00374f8054c3041d91465c99271185be6fdb258a541ba83aed5bb1059178e1\": container with ID starting with bb00374f8054c3041d91465c99271185be6fdb258a541ba83aed5bb1059178e1 not found: ID does not exist" Dec 10 12:27:58 crc kubenswrapper[4689]: I1210 12:27:58.804220 4689 scope.go:117] "RemoveContainer" containerID="e28b779b8a8693084ab0997decc53b3a8eba61bef24ac90ff251d915f086568c" Dec 10 12:27:58 crc kubenswrapper[4689]: I1210 12:27:58.804466 4689 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e28b779b8a8693084ab0997decc53b3a8eba61bef24ac90ff251d915f086568c"} err="failed to get container status \"e28b779b8a8693084ab0997decc53b3a8eba61bef24ac90ff251d915f086568c\": rpc error: code = NotFound desc = could not find container \"e28b779b8a8693084ab0997decc53b3a8eba61bef24ac90ff251d915f086568c\": container with ID starting with e28b779b8a8693084ab0997decc53b3a8eba61bef24ac90ff251d915f086568c not found: ID does not exist" Dec 10 12:27:58 crc kubenswrapper[4689]: I1210 12:27:58.804486 4689 scope.go:117] "RemoveContainer" 
containerID="6523d85b2b13f0025f328e66fc54d988a262d66575656c50d792bebc6a422301" Dec 10 12:27:58 crc kubenswrapper[4689]: I1210 12:27:58.804708 4689 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6523d85b2b13f0025f328e66fc54d988a262d66575656c50d792bebc6a422301"} err="failed to get container status \"6523d85b2b13f0025f328e66fc54d988a262d66575656c50d792bebc6a422301\": rpc error: code = NotFound desc = could not find container \"6523d85b2b13f0025f328e66fc54d988a262d66575656c50d792bebc6a422301\": container with ID starting with 6523d85b2b13f0025f328e66fc54d988a262d66575656c50d792bebc6a422301 not found: ID does not exist" Dec 10 12:27:58 crc kubenswrapper[4689]: I1210 12:27:58.804736 4689 scope.go:117] "RemoveContainer" containerID="4faf578e1c2dcab798cd856befbd2dfc061f7e003ead2304d437041d5ee1dd07" Dec 10 12:27:58 crc kubenswrapper[4689]: I1210 12:27:58.805041 4689 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4faf578e1c2dcab798cd856befbd2dfc061f7e003ead2304d437041d5ee1dd07"} err="failed to get container status \"4faf578e1c2dcab798cd856befbd2dfc061f7e003ead2304d437041d5ee1dd07\": rpc error: code = NotFound desc = could not find container \"4faf578e1c2dcab798cd856befbd2dfc061f7e003ead2304d437041d5ee1dd07\": container with ID starting with 4faf578e1c2dcab798cd856befbd2dfc061f7e003ead2304d437041d5ee1dd07 not found: ID does not exist" Dec 10 12:27:58 crc kubenswrapper[4689]: I1210 12:27:58.805069 4689 scope.go:117] "RemoveContainer" containerID="69f1e8960af0a51f61c14bd54dc3d3be7189911242699061b6a5e265bf23da21" Dec 10 12:27:58 crc kubenswrapper[4689]: I1210 12:27:58.805366 4689 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"69f1e8960af0a51f61c14bd54dc3d3be7189911242699061b6a5e265bf23da21"} err="failed to get container status \"69f1e8960af0a51f61c14bd54dc3d3be7189911242699061b6a5e265bf23da21\": rpc error: code = NotFound desc = could not find container \"69f1e8960af0a51f61c14bd54dc3d3be7189911242699061b6a5e265bf23da21\": container with ID starting with 69f1e8960af0a51f61c14bd54dc3d3be7189911242699061b6a5e265bf23da21 not found: ID does not exist" Dec 10 12:27:58 crc kubenswrapper[4689]: I1210 12:27:58.805392 4689 scope.go:117] "RemoveContainer" containerID="f4dd21d5fefe31117dd9f13896400256a11840488886f8455dc177c0389e942a" Dec 10 12:27:58 crc kubenswrapper[4689]: I1210 12:27:58.805697 4689 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f4dd21d5fefe31117dd9f13896400256a11840488886f8455dc177c0389e942a"} err="failed to get container status \"f4dd21d5fefe31117dd9f13896400256a11840488886f8455dc177c0389e942a\": rpc error: code = NotFound desc = could not find container \"f4dd21d5fefe31117dd9f13896400256a11840488886f8455dc177c0389e942a\": container with ID starting with f4dd21d5fefe31117dd9f13896400256a11840488886f8455dc177c0389e942a not found: ID does not exist" Dec 10 12:27:58 crc kubenswrapper[4689]: I1210 12:27:58.805719 4689 scope.go:117] "RemoveContainer" containerID="eebe28cf16767ec011f1e571cdb5f6babf931f4444e92fccf2f1bb5aa5deeec9" Dec 10 12:27:58 crc kubenswrapper[4689]: I1210 12:27:58.806143 4689 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eebe28cf16767ec011f1e571cdb5f6babf931f4444e92fccf2f1bb5aa5deeec9"} err="failed to get container status \"eebe28cf16767ec011f1e571cdb5f6babf931f4444e92fccf2f1bb5aa5deeec9\": rpc error: code = NotFound desc = could not find 
container \"eebe28cf16767ec011f1e571cdb5f6babf931f4444e92fccf2f1bb5aa5deeec9\": container with ID starting with eebe28cf16767ec011f1e571cdb5f6babf931f4444e92fccf2f1bb5aa5deeec9 not found: ID does not exist" Dec 10 12:27:58 crc kubenswrapper[4689]: I1210 12:27:58.806172 4689 scope.go:117] "RemoveContainer" containerID="17d173f3388ef7471022163cbdf3e0a2454e42b1ed621b15512414504b2d9723" Dec 10 12:27:58 crc kubenswrapper[4689]: I1210 12:27:58.806426 4689 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"17d173f3388ef7471022163cbdf3e0a2454e42b1ed621b15512414504b2d9723"} err="failed to get container status \"17d173f3388ef7471022163cbdf3e0a2454e42b1ed621b15512414504b2d9723\": rpc error: code = NotFound desc = could not find container \"17d173f3388ef7471022163cbdf3e0a2454e42b1ed621b15512414504b2d9723\": container with ID starting with 17d173f3388ef7471022163cbdf3e0a2454e42b1ed621b15512414504b2d9723 not found: ID does not exist" Dec 10 12:27:58 crc kubenswrapper[4689]: I1210 12:27:58.806448 4689 scope.go:117] "RemoveContainer" containerID="b5b9a4f871b33ab8d7dbb8148e8a118858ae0d529510a8aebda0a3d9db97c137" Dec 10 12:27:58 crc kubenswrapper[4689]: I1210 12:27:58.806751 4689 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b5b9a4f871b33ab8d7dbb8148e8a118858ae0d529510a8aebda0a3d9db97c137"} err="failed to get container status \"b5b9a4f871b33ab8d7dbb8148e8a118858ae0d529510a8aebda0a3d9db97c137\": rpc error: code = NotFound desc = could not find container \"b5b9a4f871b33ab8d7dbb8148e8a118858ae0d529510a8aebda0a3d9db97c137\": container with ID starting with b5b9a4f871b33ab8d7dbb8148e8a118858ae0d529510a8aebda0a3d9db97c137 not found: ID does not exist" Dec 10 12:27:59 crc kubenswrapper[4689]: I1210 12:27:59.498506 4689 generic.go:334] "Generic (PLEG): container finished" podID="4416d03c-c689-4c75-8019-a05b4e70c782" containerID="462557c66c4c13362c7d7a796560132b558abaabc2f1b1ad9f6a8c9a3470e456" exitCode=0 Dec 10 12:27:59 crc kubenswrapper[4689]: I1210 12:27:59.498642 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5cmb5" event={"ID":"4416d03c-c689-4c75-8019-a05b4e70c782","Type":"ContainerDied","Data":"462557c66c4c13362c7d7a796560132b558abaabc2f1b1ad9f6a8c9a3470e456"} Dec 10 12:27:59 crc kubenswrapper[4689]: I1210 12:27:59.499136 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5cmb5" event={"ID":"4416d03c-c689-4c75-8019-a05b4e70c782","Type":"ContainerStarted","Data":"a1f9d243aef582847a146a077e7e055b89bbd589635fdcf9ab8e4228e4546173"} Dec 10 12:27:59 crc kubenswrapper[4689]: I1210 12:27:59.504042 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-r6wmt_3713b4f8-2ee3-4078-859a-dca17076f9a6/kube-multus/2.log" Dec 10 12:27:59 crc kubenswrapper[4689]: I1210 12:27:59.504118 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-r6wmt" event={"ID":"3713b4f8-2ee3-4078-859a-dca17076f9a6","Type":"ContainerStarted","Data":"ebc2393ee70ecdfe182ce7f762defb61709ca57140d43b65f0064efde344e58c"} Dec 10 12:28:00 crc kubenswrapper[4689]: I1210 12:28:00.507477 4689 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b434fe8e-c4c2-4979-a9b6-8561523c2d9d" path="/var/lib/kubelet/pods/b434fe8e-c4c2-4979-a9b6-8561523c2d9d/volumes" Dec 10 12:28:00 crc kubenswrapper[4689]: I1210 12:28:00.511716 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-ovn-kubernetes/ovnkube-node-5cmb5" event={"ID":"4416d03c-c689-4c75-8019-a05b4e70c782","Type":"ContainerStarted","Data":"dfd18f773c81b9862dc296d93afe65ce632884d08328ac1fd066398cb2f856af"} Dec 10 12:28:00 crc kubenswrapper[4689]: I1210 12:28:00.511759 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5cmb5" event={"ID":"4416d03c-c689-4c75-8019-a05b4e70c782","Type":"ContainerStarted","Data":"0c7644f356b48e2f5a767d25bbc92267fbbd6fe979091a6a0c01250731dbd65d"} Dec 10 12:28:00 crc kubenswrapper[4689]: I1210 12:28:00.511773 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5cmb5" event={"ID":"4416d03c-c689-4c75-8019-a05b4e70c782","Type":"ContainerStarted","Data":"94b0af9082dad45467d0c1e4b6363388304a842115ea5574f470883018e482fc"} Dec 10 12:28:00 crc kubenswrapper[4689]: I1210 12:28:00.511784 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5cmb5" event={"ID":"4416d03c-c689-4c75-8019-a05b4e70c782","Type":"ContainerStarted","Data":"3bbe53928d29cacf38221780b3b5c113c0f999fb9e8236985c4c0fa6e378311c"} Dec 10 12:28:00 crc kubenswrapper[4689]: I1210 12:28:00.511796 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5cmb5" event={"ID":"4416d03c-c689-4c75-8019-a05b4e70c782","Type":"ContainerStarted","Data":"b5dadceacb119469e421ea4faa50a79997b81c8d992e8889893b692f5c6d2a98"} Dec 10 12:28:00 crc kubenswrapper[4689]: I1210 12:28:00.511808 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5cmb5" event={"ID":"4416d03c-c689-4c75-8019-a05b4e70c782","Type":"ContainerStarted","Data":"354a50e1724ea233abf2007e2724eb1bbf8991b3b6e815e8ed7f03f55e1157f4"} Dec 10 12:28:03 crc kubenswrapper[4689]: I1210 12:28:03.533929 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5cmb5" event={"ID":"4416d03c-c689-4c75-8019-a05b4e70c782","Type":"ContainerStarted","Data":"419ec3f5079a795795189dc345158862a45fd51074d9ab0b72ad962d5fe81eca"} Dec 10 12:28:07 crc kubenswrapper[4689]: I1210 12:28:07.166872 4689 patch_prober.go:28] interesting pod/machine-config-daemon-db6zk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 10 12:28:07 crc kubenswrapper[4689]: I1210 12:28:07.167164 4689 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-db6zk" podUID="a41ebdcd-910f-4669-992d-296e1a92162b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 10 12:28:07 crc kubenswrapper[4689]: I1210 12:28:07.563623 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5cmb5" event={"ID":"4416d03c-c689-4c75-8019-a05b4e70c782","Type":"ContainerStarted","Data":"18a8139284e4ebfad35910d32bb307a82d936e11d568791ff574da886adca622"} Dec 10 12:28:07 crc kubenswrapper[4689]: I1210 12:28:07.564010 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-5cmb5" Dec 10 12:28:07 crc kubenswrapper[4689]: I1210 12:28:07.564049 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-5cmb5" 
Dec 10 12:28:07 crc kubenswrapper[4689]: I1210 12:28:07.564072 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-5cmb5"
Dec 10 12:28:07 crc kubenswrapper[4689]: I1210 12:28:07.668209 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-5cmb5"
Dec 10 12:28:07 crc kubenswrapper[4689]: I1210 12:28:07.673832 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-5cmb5"
Dec 10 12:28:07 crc kubenswrapper[4689]: I1210 12:28:07.682528 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-5cmb5" podStartSLOduration=9.682510195999999 podStartE2EDuration="9.682510196s" podCreationTimestamp="2025-12-10 12:27:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 12:28:07.679632434 +0000 UTC m=+755.467713592" watchObservedRunningTime="2025-12-10 12:28:07.682510196 +0000 UTC m=+755.470591334"
Dec 10 12:28:09 crc kubenswrapper[4689]: I1210 12:28:09.288272 4689 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Dec 10 12:28:28 crc kubenswrapper[4689]: I1210 12:28:28.468241 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-5cmb5"
Dec 10 12:28:33 crc kubenswrapper[4689]: I1210 12:28:33.290312 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212flfxww"]
Dec 10 12:28:33 crc kubenswrapper[4689]: I1210 12:28:33.292549 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212flfxww"
Dec 10 12:28:33 crc kubenswrapper[4689]: I1210 12:28:33.296562 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc"
Dec 10 12:28:33 crc kubenswrapper[4689]: I1210 12:28:33.304738 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212flfxww"]
Dec 10 12:28:33 crc kubenswrapper[4689]: I1210 12:28:33.468870 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/0d799972-d396-4dd5-8186-87f0a51ea145-bundle\") pod \"5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212flfxww\" (UID: \"0d799972-d396-4dd5-8186-87f0a51ea145\") " pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212flfxww"
Dec 10 12:28:33 crc kubenswrapper[4689]: I1210 12:28:33.469022 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xh2hw\" (UniqueName: \"kubernetes.io/projected/0d799972-d396-4dd5-8186-87f0a51ea145-kube-api-access-xh2hw\") pod \"5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212flfxww\" (UID: \"0d799972-d396-4dd5-8186-87f0a51ea145\") " pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212flfxww"
Dec 10 12:28:33 crc kubenswrapper[4689]: I1210 12:28:33.469074 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/0d799972-d396-4dd5-8186-87f0a51ea145-util\") pod \"5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212flfxww\" (UID: \"0d799972-d396-4dd5-8186-87f0a51ea145\") " pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212flfxww"
Dec 10 12:28:33 crc kubenswrapper[4689]: I1210 12:28:33.570476 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/0d799972-d396-4dd5-8186-87f0a51ea145-bundle\") pod \"5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212flfxww\" (UID: \"0d799972-d396-4dd5-8186-87f0a51ea145\") " pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212flfxww"
Dec 10 12:28:33 crc kubenswrapper[4689]: I1210 12:28:33.570606 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xh2hw\" (UniqueName: \"kubernetes.io/projected/0d799972-d396-4dd5-8186-87f0a51ea145-kube-api-access-xh2hw\") pod \"5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212flfxww\" (UID: \"0d799972-d396-4dd5-8186-87f0a51ea145\") " pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212flfxww"
Dec 10 12:28:33 crc kubenswrapper[4689]: I1210 12:28:33.570661 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/0d799972-d396-4dd5-8186-87f0a51ea145-util\") pod \"5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212flfxww\" (UID: \"0d799972-d396-4dd5-8186-87f0a51ea145\") " pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212flfxww"
Dec 10 12:28:33 crc kubenswrapper[4689]: I1210 12:28:33.571373 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/0d799972-d396-4dd5-8186-87f0a51ea145-bundle\") pod \"5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212flfxww\" (UID: \"0d799972-d396-4dd5-8186-87f0a51ea145\") " pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212flfxww"
Dec 10 12:28:33 crc kubenswrapper[4689]: I1210 12:28:33.571415 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/0d799972-d396-4dd5-8186-87f0a51ea145-util\") pod \"5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212flfxww\" (UID: \"0d799972-d396-4dd5-8186-87f0a51ea145\") " pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212flfxww"
Dec 10 12:28:33 crc kubenswrapper[4689]: I1210 12:28:33.610828 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xh2hw\" (UniqueName: \"kubernetes.io/projected/0d799972-d396-4dd5-8186-87f0a51ea145-kube-api-access-xh2hw\") pod \"5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212flfxww\" (UID: \"0d799972-d396-4dd5-8186-87f0a51ea145\") " pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212flfxww"
Dec 10 12:28:33 crc kubenswrapper[4689]: I1210 12:28:33.612758 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212flfxww"
Dec 10 12:28:34 crc kubenswrapper[4689]: I1210 12:28:34.126110 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212flfxww"]
Dec 10 12:28:34 crc kubenswrapper[4689]: I1210 12:28:34.743673 4689 generic.go:334] "Generic (PLEG): container finished" podID="0d799972-d396-4dd5-8186-87f0a51ea145" containerID="34508e535074b54042ad2f6426c1632a8d49e27851e9f8c8a52856eb9175e13f" exitCode=0
Dec 10 12:28:34 crc kubenswrapper[4689]: I1210 12:28:34.743730 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212flfxww" event={"ID":"0d799972-d396-4dd5-8186-87f0a51ea145","Type":"ContainerDied","Data":"34508e535074b54042ad2f6426c1632a8d49e27851e9f8c8a52856eb9175e13f"}
Dec 10 12:28:34 crc kubenswrapper[4689]: I1210 12:28:34.743763 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212flfxww" event={"ID":"0d799972-d396-4dd5-8186-87f0a51ea145","Type":"ContainerStarted","Data":"6ba77c127f3810eb418de6cd59165161af1fbac8e35d967c5d8ac869be235338"}
Dec 10 12:28:35 crc kubenswrapper[4689]: I1210 12:28:35.612736 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-sb99b"]
Dec 10 12:28:35 crc kubenswrapper[4689]: I1210 12:28:35.614840 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-sb99b"
Dec 10 12:28:35 crc kubenswrapper[4689]: I1210 12:28:35.635506 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-sb99b"]
Dec 10 12:28:35 crc kubenswrapper[4689]: I1210 12:28:35.700553 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/59c9a037-78c1-4c47-8c5d-b8491959c657-catalog-content\") pod \"redhat-operators-sb99b\" (UID: \"59c9a037-78c1-4c47-8c5d-b8491959c657\") " pod="openshift-marketplace/redhat-operators-sb99b"
Dec 10 12:28:35 crc kubenswrapper[4689]: I1210 12:28:35.700610 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/59c9a037-78c1-4c47-8c5d-b8491959c657-utilities\") pod \"redhat-operators-sb99b\" (UID: \"59c9a037-78c1-4c47-8c5d-b8491959c657\") " pod="openshift-marketplace/redhat-operators-sb99b"
Dec 10 12:28:35 crc kubenswrapper[4689]: I1210 12:28:35.700642 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rmf6g\" (UniqueName: \"kubernetes.io/projected/59c9a037-78c1-4c47-8c5d-b8491959c657-kube-api-access-rmf6g\") pod \"redhat-operators-sb99b\" (UID: \"59c9a037-78c1-4c47-8c5d-b8491959c657\") " pod="openshift-marketplace/redhat-operators-sb99b"
Dec 10 12:28:35 crc kubenswrapper[4689]: I1210 12:28:35.801475 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/59c9a037-78c1-4c47-8c5d-b8491959c657-catalog-content\") pod \"redhat-operators-sb99b\" (UID: \"59c9a037-78c1-4c47-8c5d-b8491959c657\") " pod="openshift-marketplace/redhat-operators-sb99b"
Dec 10 12:28:35 crc kubenswrapper[4689]: I1210 12:28:35.801585 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/59c9a037-78c1-4c47-8c5d-b8491959c657-utilities\") pod \"redhat-operators-sb99b\" (UID: \"59c9a037-78c1-4c47-8c5d-b8491959c657\") " pod="openshift-marketplace/redhat-operators-sb99b"
Dec 10 12:28:35 crc kubenswrapper[4689]: I1210 12:28:35.802394 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/59c9a037-78c1-4c47-8c5d-b8491959c657-utilities\") pod \"redhat-operators-sb99b\" (UID: \"59c9a037-78c1-4c47-8c5d-b8491959c657\") " pod="openshift-marketplace/redhat-operators-sb99b"
Dec 10 12:28:35 crc kubenswrapper[4689]: I1210 12:28:35.802389 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/59c9a037-78c1-4c47-8c5d-b8491959c657-catalog-content\") pod \"redhat-operators-sb99b\" (UID: \"59c9a037-78c1-4c47-8c5d-b8491959c657\") " pod="openshift-marketplace/redhat-operators-sb99b"
Dec 10 12:28:35 crc kubenswrapper[4689]: I1210 12:28:35.802481 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rmf6g\" (UniqueName: \"kubernetes.io/projected/59c9a037-78c1-4c47-8c5d-b8491959c657-kube-api-access-rmf6g\") pod \"redhat-operators-sb99b\" (UID: \"59c9a037-78c1-4c47-8c5d-b8491959c657\") " pod="openshift-marketplace/redhat-operators-sb99b"
Dec 10 12:28:35 crc kubenswrapper[4689]: I1210 12:28:35.839621 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rmf6g\" (UniqueName: \"kubernetes.io/projected/59c9a037-78c1-4c47-8c5d-b8491959c657-kube-api-access-rmf6g\") pod \"redhat-operators-sb99b\" (UID: \"59c9a037-78c1-4c47-8c5d-b8491959c657\") " pod="openshift-marketplace/redhat-operators-sb99b"
Dec 10 12:28:35 crc kubenswrapper[4689]: I1210 12:28:35.948198 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-sb99b"
Dec 10 12:28:36 crc kubenswrapper[4689]: I1210 12:28:36.175132 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-sb99b"]
Dec 10 12:28:36 crc kubenswrapper[4689]: I1210 12:28:36.757876 4689 generic.go:334] "Generic (PLEG): container finished" podID="59c9a037-78c1-4c47-8c5d-b8491959c657" containerID="45557321b36669e54522e9dfa98c81e1415c6775c5e8a08a57ed721c0ddbaec9" exitCode=0
Dec 10 12:28:36 crc kubenswrapper[4689]: I1210 12:28:36.758012 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-sb99b" event={"ID":"59c9a037-78c1-4c47-8c5d-b8491959c657","Type":"ContainerDied","Data":"45557321b36669e54522e9dfa98c81e1415c6775c5e8a08a57ed721c0ddbaec9"}
Dec 10 12:28:36 crc kubenswrapper[4689]: I1210 12:28:36.758052 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-sb99b" event={"ID":"59c9a037-78c1-4c47-8c5d-b8491959c657","Type":"ContainerStarted","Data":"8c8676701caa1fd42b7fb739e76a21900ec860f2a002dd7db5f019c0edec1b28"}
Dec 10 12:28:37 crc kubenswrapper[4689]: I1210 12:28:37.166371 4689 patch_prober.go:28] interesting pod/machine-config-daemon-db6zk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 10 12:28:37 crc kubenswrapper[4689]: I1210 12:28:37.166429 4689 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-db6zk" podUID="a41ebdcd-910f-4669-992d-296e1a92162b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 10 12:28:37 crc kubenswrapper[4689]: I1210 12:28:37.166480 4689 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-db6zk"
Dec 10 12:28:37 crc kubenswrapper[4689]: I1210 12:28:37.166928 4689 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"1d20271d773850f4d5b2fbccca8a9391a64d881b36edb8636961a3fdb4367ab8"} pod="openshift-machine-config-operator/machine-config-daemon-db6zk" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Dec 10 12:28:37 crc kubenswrapper[4689]: I1210 12:28:37.167005 4689 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-db6zk" podUID="a41ebdcd-910f-4669-992d-296e1a92162b" containerName="machine-config-daemon" containerID="cri-o://1d20271d773850f4d5b2fbccca8a9391a64d881b36edb8636961a3fdb4367ab8" gracePeriod=600
Dec 10 12:28:37 crc kubenswrapper[4689]: I1210 12:28:37.770704 4689 generic.go:334] "Generic (PLEG): container finished" podID="a41ebdcd-910f-4669-992d-296e1a92162b" containerID="1d20271d773850f4d5b2fbccca8a9391a64d881b36edb8636961a3fdb4367ab8" exitCode=0
Dec 10 12:28:37 crc kubenswrapper[4689]: I1210 12:28:37.771248 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-db6zk" event={"ID":"a41ebdcd-910f-4669-992d-296e1a92162b","Type":"ContainerDied","Data":"1d20271d773850f4d5b2fbccca8a9391a64d881b36edb8636961a3fdb4367ab8"}
Dec 10 12:28:37 crc kubenswrapper[4689]: I1210 12:28:37.771391 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-db6zk" event={"ID":"a41ebdcd-910f-4669-992d-296e1a92162b","Type":"ContainerStarted","Data":"6e5c15a4c10b86079bc45e52f2bd74ade92056d772116ec22ae9d0a1a5a11fd9"}
Dec 10 12:28:37 crc kubenswrapper[4689]: I1210 12:28:37.771429 4689 scope.go:117] "RemoveContainer" containerID="b330b5eb64e249789ebd7ac5911e2ea7484fa0071beacdd7f1062b6742924ad6"
Dec 10 12:28:37 crc kubenswrapper[4689]: I1210 12:28:37.778134 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-sb99b" event={"ID":"59c9a037-78c1-4c47-8c5d-b8491959c657","Type":"ContainerStarted","Data":"0f9069dbfa14c3b3a244580d4a9f1cb2bc67693903aac705e389657459836dc0"}
Dec 10 12:28:38 crc kubenswrapper[4689]: I1210 12:28:38.794119 4689 generic.go:334] "Generic (PLEG): container finished" podID="59c9a037-78c1-4c47-8c5d-b8491959c657" containerID="0f9069dbfa14c3b3a244580d4a9f1cb2bc67693903aac705e389657459836dc0" exitCode=0
Dec 10 12:28:38 crc kubenswrapper[4689]: I1210 12:28:38.794180 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-sb99b" event={"ID":"59c9a037-78c1-4c47-8c5d-b8491959c657","Type":"ContainerDied","Data":"0f9069dbfa14c3b3a244580d4a9f1cb2bc67693903aac705e389657459836dc0"}
Dec 10 12:28:39 crc kubenswrapper[4689]: I1210 12:28:39.804298 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-sb99b" event={"ID":"59c9a037-78c1-4c47-8c5d-b8491959c657","Type":"ContainerStarted","Data":"cc478a851a220595a645503788d6630082a6427c338ef32f4ab24ec171ff4977"}
Dec 10 12:28:39 crc kubenswrapper[4689]: I1210 12:28:39.838149 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-sb99b" podStartSLOduration=2.051783769 podStartE2EDuration="4.838118649s" podCreationTimestamp="2025-12-10 12:28:35 +0000 UTC" firstStartedPulling="2025-12-10 12:28:36.760572592 +0000 UTC m=+784.548653770" lastFinishedPulling="2025-12-10 12:28:39.546907482 +0000 UTC m=+787.334988650" observedRunningTime="2025-12-10 12:28:39.831952565 +0000 UTC m=+787.620033763" watchObservedRunningTime="2025-12-10 12:28:39.838118649 +0000 UTC m=+787.626199827"
Dec 10 12:28:42 crc kubenswrapper[4689]: I1210 12:28:42.834776 4689 generic.go:334] "Generic (PLEG): container finished" podID="0d799972-d396-4dd5-8186-87f0a51ea145" containerID="530932f51ef5c2bd780e53fc63d0053e886656e24354f19f0baf8492fe1a273f" exitCode=0
Dec 10 12:28:42 crc kubenswrapper[4689]: I1210 12:28:42.834822 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212flfxww" event={"ID":"0d799972-d396-4dd5-8186-87f0a51ea145","Type":"ContainerDied","Data":"530932f51ef5c2bd780e53fc63d0053e886656e24354f19f0baf8492fe1a273f"}
Dec 10 12:28:43 crc kubenswrapper[4689]: I1210 12:28:43.845938 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212flfxww" event={"ID":"0d799972-d396-4dd5-8186-87f0a51ea145","Type":"ContainerStarted","Data":"6972842f85afec8e5ce65e69a46c8b140ad974631018c201a83fc60a069677f4"}
event={"ID":"0d799972-d396-4dd5-8186-87f0a51ea145","Type":"ContainerStarted","Data":"6972842f85afec8e5ce65e69a46c8b140ad974631018c201a83fc60a069677f4"} Dec 10 12:28:43 crc kubenswrapper[4689]: I1210 12:28:43.867562 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212flfxww" podStartSLOduration=3.407528748 podStartE2EDuration="10.867536545s" podCreationTimestamp="2025-12-10 12:28:33 +0000 UTC" firstStartedPulling="2025-12-10 12:28:34.745346736 +0000 UTC m=+782.533427884" lastFinishedPulling="2025-12-10 12:28:42.205354543 +0000 UTC m=+789.993435681" observedRunningTime="2025-12-10 12:28:43.864676943 +0000 UTC m=+791.652758162" watchObservedRunningTime="2025-12-10 12:28:43.867536545 +0000 UTC m=+791.655617693" Dec 10 12:28:44 crc kubenswrapper[4689]: I1210 12:28:44.856477 4689 generic.go:334] "Generic (PLEG): container finished" podID="0d799972-d396-4dd5-8186-87f0a51ea145" containerID="6972842f85afec8e5ce65e69a46c8b140ad974631018c201a83fc60a069677f4" exitCode=0 Dec 10 12:28:44 crc kubenswrapper[4689]: I1210 12:28:44.856625 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212flfxww" event={"ID":"0d799972-d396-4dd5-8186-87f0a51ea145","Type":"ContainerDied","Data":"6972842f85afec8e5ce65e69a46c8b140ad974631018c201a83fc60a069677f4"} Dec 10 12:28:45 crc kubenswrapper[4689]: I1210 12:28:45.949190 4689 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-sb99b" Dec 10 12:28:45 crc kubenswrapper[4689]: I1210 12:28:45.949610 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-sb99b" Dec 10 12:28:46 crc kubenswrapper[4689]: I1210 12:28:46.157917 4689 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212flfxww" Dec 10 12:28:46 crc kubenswrapper[4689]: I1210 12:28:46.308062 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/0d799972-d396-4dd5-8186-87f0a51ea145-util\") pod \"0d799972-d396-4dd5-8186-87f0a51ea145\" (UID: \"0d799972-d396-4dd5-8186-87f0a51ea145\") " Dec 10 12:28:46 crc kubenswrapper[4689]: I1210 12:28:46.308188 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xh2hw\" (UniqueName: \"kubernetes.io/projected/0d799972-d396-4dd5-8186-87f0a51ea145-kube-api-access-xh2hw\") pod \"0d799972-d396-4dd5-8186-87f0a51ea145\" (UID: \"0d799972-d396-4dd5-8186-87f0a51ea145\") " Dec 10 12:28:46 crc kubenswrapper[4689]: I1210 12:28:46.308238 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/0d799972-d396-4dd5-8186-87f0a51ea145-bundle\") pod \"0d799972-d396-4dd5-8186-87f0a51ea145\" (UID: \"0d799972-d396-4dd5-8186-87f0a51ea145\") " Dec 10 12:28:46 crc kubenswrapper[4689]: I1210 12:28:46.309231 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0d799972-d396-4dd5-8186-87f0a51ea145-bundle" (OuterVolumeSpecName: "bundle") pod "0d799972-d396-4dd5-8186-87f0a51ea145" (UID: "0d799972-d396-4dd5-8186-87f0a51ea145"). InnerVolumeSpecName "bundle". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 10 12:28:46 crc kubenswrapper[4689]: I1210 12:28:46.317927 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0d799972-d396-4dd5-8186-87f0a51ea145-util" (OuterVolumeSpecName: "util") pod "0d799972-d396-4dd5-8186-87f0a51ea145" (UID: "0d799972-d396-4dd5-8186-87f0a51ea145"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 10 12:28:46 crc kubenswrapper[4689]: I1210 12:28:46.322440 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0d799972-d396-4dd5-8186-87f0a51ea145-kube-api-access-xh2hw" (OuterVolumeSpecName: "kube-api-access-xh2hw") pod "0d799972-d396-4dd5-8186-87f0a51ea145" (UID: "0d799972-d396-4dd5-8186-87f0a51ea145"). InnerVolumeSpecName "kube-api-access-xh2hw". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 12:28:46 crc kubenswrapper[4689]: I1210 12:28:46.409809 4689 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/0d799972-d396-4dd5-8186-87f0a51ea145-util\") on node \"crc\" DevicePath \"\"" Dec 10 12:28:46 crc kubenswrapper[4689]: I1210 12:28:46.409870 4689 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xh2hw\" (UniqueName: \"kubernetes.io/projected/0d799972-d396-4dd5-8186-87f0a51ea145-kube-api-access-xh2hw\") on node \"crc\" DevicePath \"\"" Dec 10 12:28:46 crc kubenswrapper[4689]: I1210 12:28:46.409893 4689 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/0d799972-d396-4dd5-8186-87f0a51ea145-bundle\") on node \"crc\" DevicePath \"\"" Dec 10 12:28:46 crc kubenswrapper[4689]: I1210 12:28:46.873723 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212flfxww" event={"ID":"0d799972-d396-4dd5-8186-87f0a51ea145","Type":"ContainerDied","Data":"6ba77c127f3810eb418de6cd59165161af1fbac8e35d967c5d8ac869be235338"} Dec 10 12:28:46 crc kubenswrapper[4689]: I1210 12:28:46.874150 4689 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6ba77c127f3810eb418de6cd59165161af1fbac8e35d967c5d8ac869be235338" Dec 10 12:28:46 crc kubenswrapper[4689]: I1210 12:28:46.873816 4689 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212flfxww" Dec 10 12:28:47 crc kubenswrapper[4689]: I1210 12:28:47.023507 4689 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-sb99b" podUID="59c9a037-78c1-4c47-8c5d-b8491959c657" containerName="registry-server" probeResult="failure" output=< Dec 10 12:28:47 crc kubenswrapper[4689]: timeout: failed to connect service ":50051" within 1s Dec 10 12:28:47 crc kubenswrapper[4689]: > Dec 10 12:28:49 crc kubenswrapper[4689]: I1210 12:28:49.897825 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-operator-5b5b58f5c8-5ffhx"] Dec 10 12:28:49 crc kubenswrapper[4689]: E1210 12:28:49.898304 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0d799972-d396-4dd5-8186-87f0a51ea145" containerName="util" Dec 10 12:28:49 crc kubenswrapper[4689]: I1210 12:28:49.898316 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="0d799972-d396-4dd5-8186-87f0a51ea145" containerName="util" Dec 10 12:28:49 crc kubenswrapper[4689]: E1210 12:28:49.898330 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0d799972-d396-4dd5-8186-87f0a51ea145" containerName="extract" Dec 10 12:28:49 crc kubenswrapper[4689]: I1210 12:28:49.898335 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="0d799972-d396-4dd5-8186-87f0a51ea145" containerName="extract" Dec 10 12:28:49 crc kubenswrapper[4689]: E1210 12:28:49.898354 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0d799972-d396-4dd5-8186-87f0a51ea145" containerName="pull" Dec 10 12:28:49 crc kubenswrapper[4689]: I1210 12:28:49.898359 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="0d799972-d396-4dd5-8186-87f0a51ea145" containerName="pull" Dec 10 12:28:49 crc kubenswrapper[4689]: I1210 12:28:49.898450 4689 memory_manager.go:354] "RemoveStaleState removing state" podUID="0d799972-d396-4dd5-8186-87f0a51ea145" containerName="extract" Dec 10 12:28:49 crc kubenswrapper[4689]: I1210 12:28:49.898793 4689 util.go:30] "No sandbox for pod can be found. 
Dec 10 12:28:49 crc kubenswrapper[4689]: I1210 12:28:49.901858 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-operator-dockercfg-zjztc"
Dec 10 12:28:49 crc kubenswrapper[4689]: I1210 12:28:49.902844 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"kube-root-ca.crt"
Dec 10 12:28:49 crc kubenswrapper[4689]: I1210 12:28:49.902859 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"openshift-service-ca.crt"
Dec 10 12:28:49 crc kubenswrapper[4689]: I1210 12:28:49.916485 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-5b5b58f5c8-5ffhx"]
Dec 10 12:28:50 crc kubenswrapper[4689]: I1210 12:28:50.058534 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2n8fp\" (UniqueName: \"kubernetes.io/projected/2c83e0d9-b9ea-4b8a-9f11-7921eff53640-kube-api-access-2n8fp\") pod \"nmstate-operator-5b5b58f5c8-5ffhx\" (UID: \"2c83e0d9-b9ea-4b8a-9f11-7921eff53640\") " pod="openshift-nmstate/nmstate-operator-5b5b58f5c8-5ffhx"
Dec 10 12:28:50 crc kubenswrapper[4689]: I1210 12:28:50.159790 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2n8fp\" (UniqueName: \"kubernetes.io/projected/2c83e0d9-b9ea-4b8a-9f11-7921eff53640-kube-api-access-2n8fp\") pod \"nmstate-operator-5b5b58f5c8-5ffhx\" (UID: \"2c83e0d9-b9ea-4b8a-9f11-7921eff53640\") " pod="openshift-nmstate/nmstate-operator-5b5b58f5c8-5ffhx"
Dec 10 12:28:50 crc kubenswrapper[4689]: I1210 12:28:50.198907 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2n8fp\" (UniqueName: \"kubernetes.io/projected/2c83e0d9-b9ea-4b8a-9f11-7921eff53640-kube-api-access-2n8fp\") pod \"nmstate-operator-5b5b58f5c8-5ffhx\" (UID: \"2c83e0d9-b9ea-4b8a-9f11-7921eff53640\") " pod="openshift-nmstate/nmstate-operator-5b5b58f5c8-5ffhx"
Dec 10 12:28:50 crc kubenswrapper[4689]: I1210 12:28:50.218881 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-5b5b58f5c8-5ffhx"
Dec 10 12:28:50 crc kubenswrapper[4689]: I1210 12:28:50.535500 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-5b5b58f5c8-5ffhx"]
Dec 10 12:28:50 crc kubenswrapper[4689]: I1210 12:28:50.899145 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-5b5b58f5c8-5ffhx" event={"ID":"2c83e0d9-b9ea-4b8a-9f11-7921eff53640","Type":"ContainerStarted","Data":"2c7e77eaf17e7d237d7e2943b3e3897bcdc0df7719d0503c2fd3ac29591fd6d9"}
Dec 10 12:28:53 crc kubenswrapper[4689]: I1210 12:28:53.917951 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-5b5b58f5c8-5ffhx" event={"ID":"2c83e0d9-b9ea-4b8a-9f11-7921eff53640","Type":"ContainerStarted","Data":"cc6988ffe3c048ab33c9287df94d150507633fea5a87f8f834a44dbb4ddd6be7"}
Dec 10 12:28:53 crc kubenswrapper[4689]: I1210 12:28:53.938664 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-operator-5b5b58f5c8-5ffhx" podStartSLOduration=1.8682533399999999 podStartE2EDuration="4.938631369s" podCreationTimestamp="2025-12-10 12:28:49 +0000 UTC" firstStartedPulling="2025-12-10 12:28:50.523565666 +0000 UTC m=+798.311646834" lastFinishedPulling="2025-12-10 12:28:53.593943725 +0000 UTC m=+801.382024863" observedRunningTime="2025-12-10 12:28:53.937745648 +0000 UTC m=+801.725826816" watchObservedRunningTime="2025-12-10 12:28:53.938631369 +0000 UTC m=+801.726712557"
Dec 10 12:28:54 crc kubenswrapper[4689]: I1210 12:28:54.914915 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-metrics-7f946cbc9-sf9vn"]
Dec 10 12:28:54 crc kubenswrapper[4689]: I1210 12:28:54.916276 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-7f946cbc9-sf9vn"
Dec 10 12:28:54 crc kubenswrapper[4689]: I1210 12:28:54.918999 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-handler-dockercfg-8jmcz"
Dec 10 12:28:54 crc kubenswrapper[4689]: I1210 12:28:54.940138 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-7f946cbc9-sf9vn"]
Dec 10 12:28:54 crc kubenswrapper[4689]: I1210 12:28:54.961081 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-webhook-5f6d4c5ccb-wghvl"]
Dec 10 12:28:54 crc kubenswrapper[4689]: I1210 12:28:54.961834 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-wghvl"
Dec 10 12:28:54 crc kubenswrapper[4689]: I1210 12:28:54.964325 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-handler-mvj78"]
Dec 10 12:28:54 crc kubenswrapper[4689]: I1210 12:28:54.964435 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"openshift-nmstate-webhook"
Dec 10 12:28:54 crc kubenswrapper[4689]: I1210 12:28:54.965098 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-mvj78"
Dec 10 12:28:54 crc kubenswrapper[4689]: I1210 12:28:54.994984 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-5f6d4c5ccb-wghvl"]
Dec 10 12:28:55 crc kubenswrapper[4689]: I1210 12:28:55.025213 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d7vnr\" (UniqueName: \"kubernetes.io/projected/dfc59f59-cc82-4d61-ac2b-e9dbf2b0f5fd-kube-api-access-d7vnr\") pod \"nmstate-metrics-7f946cbc9-sf9vn\" (UID: \"dfc59f59-cc82-4d61-ac2b-e9dbf2b0f5fd\") " pod="openshift-nmstate/nmstate-metrics-7f946cbc9-sf9vn"
Dec 10 12:28:55 crc kubenswrapper[4689]: I1210 12:28:55.059421 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7fbb5f6569-97b9n"]
Dec 10 12:28:55 crc kubenswrapper[4689]: I1210 12:28:55.060132 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-97b9n"
Dec 10 12:28:55 crc kubenswrapper[4689]: I1210 12:28:55.061994 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"plugin-serving-cert"
Dec 10 12:28:55 crc kubenswrapper[4689]: I1210 12:28:55.064405 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"default-dockercfg-xm7nh"
Dec 10 12:28:55 crc kubenswrapper[4689]: I1210 12:28:55.064497 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"nginx-conf"
Dec 10 12:28:55 crc kubenswrapper[4689]: I1210 12:28:55.069086 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7fbb5f6569-97b9n"]
Dec 10 12:28:55 crc kubenswrapper[4689]: I1210 12:28:55.126366 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/7f919f52-0709-4db5-8158-8be0da507d54-ovs-socket\") pod \"nmstate-handler-mvj78\" (UID: \"7f919f52-0709-4db5-8158-8be0da507d54\") " pod="openshift-nmstate/nmstate-handler-mvj78"
Dec 10 12:28:55 crc kubenswrapper[4689]: I1210 12:28:55.126401 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/7f919f52-0709-4db5-8158-8be0da507d54-dbus-socket\") pod \"nmstate-handler-mvj78\" (UID: \"7f919f52-0709-4db5-8158-8be0da507d54\") " pod="openshift-nmstate/nmstate-handler-mvj78"
Dec 10 12:28:55 crc kubenswrapper[4689]: I1210 12:28:55.126429 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/6b4e153d-87c2-4ebb-b47c-77f12331ab68-tls-key-pair\") pod \"nmstate-webhook-5f6d4c5ccb-wghvl\" (UID: \"6b4e153d-87c2-4ebb-b47c-77f12331ab68\") " pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-wghvl"
Dec 10 12:28:55 crc kubenswrapper[4689]: I1210 12:28:55.126458 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d7vnr\" (UniqueName: \"kubernetes.io/projected/dfc59f59-cc82-4d61-ac2b-e9dbf2b0f5fd-kube-api-access-d7vnr\") pod \"nmstate-metrics-7f946cbc9-sf9vn\" (UID: \"dfc59f59-cc82-4d61-ac2b-e9dbf2b0f5fd\") " pod="openshift-nmstate/nmstate-metrics-7f946cbc9-sf9vn"
Dec 10 12:28:55 crc kubenswrapper[4689]: I1210 12:28:55.126573 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gwkbt\" (UniqueName: \"kubernetes.io/projected/7f919f52-0709-4db5-8158-8be0da507d54-kube-api-access-gwkbt\") pod \"nmstate-handler-mvj78\" (UID: \"7f919f52-0709-4db5-8158-8be0da507d54\") " pod="openshift-nmstate/nmstate-handler-mvj78"
\"kube-api-access-gwkbt\" (UniqueName: \"kubernetes.io/projected/7f919f52-0709-4db5-8158-8be0da507d54-kube-api-access-gwkbt\") pod \"nmstate-handler-mvj78\" (UID: \"7f919f52-0709-4db5-8158-8be0da507d54\") " pod="openshift-nmstate/nmstate-handler-mvj78" Dec 10 12:28:55 crc kubenswrapper[4689]: I1210 12:28:55.126786 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/7f919f52-0709-4db5-8158-8be0da507d54-nmstate-lock\") pod \"nmstate-handler-mvj78\" (UID: \"7f919f52-0709-4db5-8158-8be0da507d54\") " pod="openshift-nmstate/nmstate-handler-mvj78" Dec 10 12:28:55 crc kubenswrapper[4689]: I1210 12:28:55.126838 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t4svc\" (UniqueName: \"kubernetes.io/projected/6b4e153d-87c2-4ebb-b47c-77f12331ab68-kube-api-access-t4svc\") pod \"nmstate-webhook-5f6d4c5ccb-wghvl\" (UID: \"6b4e153d-87c2-4ebb-b47c-77f12331ab68\") " pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-wghvl" Dec 10 12:28:55 crc kubenswrapper[4689]: I1210 12:28:55.155262 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d7vnr\" (UniqueName: \"kubernetes.io/projected/dfc59f59-cc82-4d61-ac2b-e9dbf2b0f5fd-kube-api-access-d7vnr\") pod \"nmstate-metrics-7f946cbc9-sf9vn\" (UID: \"dfc59f59-cc82-4d61-ac2b-e9dbf2b0f5fd\") " pod="openshift-nmstate/nmstate-metrics-7f946cbc9-sf9vn" Dec 10 12:28:55 crc kubenswrapper[4689]: I1210 12:28:55.227690 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gwkbt\" (UniqueName: \"kubernetes.io/projected/7f919f52-0709-4db5-8158-8be0da507d54-kube-api-access-gwkbt\") pod \"nmstate-handler-mvj78\" (UID: \"7f919f52-0709-4db5-8158-8be0da507d54\") " pod="openshift-nmstate/nmstate-handler-mvj78" Dec 10 12:28:55 crc kubenswrapper[4689]: I1210 12:28:55.227793 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/7f919f52-0709-4db5-8158-8be0da507d54-nmstate-lock\") pod \"nmstate-handler-mvj78\" (UID: \"7f919f52-0709-4db5-8158-8be0da507d54\") " pod="openshift-nmstate/nmstate-handler-mvj78" Dec 10 12:28:55 crc kubenswrapper[4689]: I1210 12:28:55.227827 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t4svc\" (UniqueName: \"kubernetes.io/projected/6b4e153d-87c2-4ebb-b47c-77f12331ab68-kube-api-access-t4svc\") pod \"nmstate-webhook-5f6d4c5ccb-wghvl\" (UID: \"6b4e153d-87c2-4ebb-b47c-77f12331ab68\") " pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-wghvl" Dec 10 12:28:55 crc kubenswrapper[4689]: I1210 12:28:55.227854 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/b3a78d11-7f23-4c35-aee3-0b2a7a19a041-nginx-conf\") pod \"nmstate-console-plugin-7fbb5f6569-97b9n\" (UID: \"b3a78d11-7f23-4c35-aee3-0b2a7a19a041\") " pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-97b9n" Dec 10 12:28:55 crc kubenswrapper[4689]: I1210 12:28:55.227889 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/7f919f52-0709-4db5-8158-8be0da507d54-dbus-socket\") pod \"nmstate-handler-mvj78\" (UID: \"7f919f52-0709-4db5-8158-8be0da507d54\") " pod="openshift-nmstate/nmstate-handler-mvj78" Dec 10 12:28:55 crc kubenswrapper[4689]: I1210 
12:28:55.227910 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/7f919f52-0709-4db5-8158-8be0da507d54-ovs-socket\") pod \"nmstate-handler-mvj78\" (UID: \"7f919f52-0709-4db5-8158-8be0da507d54\") " pod="openshift-nmstate/nmstate-handler-mvj78" Dec 10 12:28:55 crc kubenswrapper[4689]: I1210 12:28:55.227933 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4hjtn\" (UniqueName: \"kubernetes.io/projected/b3a78d11-7f23-4c35-aee3-0b2a7a19a041-kube-api-access-4hjtn\") pod \"nmstate-console-plugin-7fbb5f6569-97b9n\" (UID: \"b3a78d11-7f23-4c35-aee3-0b2a7a19a041\") " pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-97b9n" Dec 10 12:28:55 crc kubenswrapper[4689]: I1210 12:28:55.227936 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/7f919f52-0709-4db5-8158-8be0da507d54-nmstate-lock\") pod \"nmstate-handler-mvj78\" (UID: \"7f919f52-0709-4db5-8158-8be0da507d54\") " pod="openshift-nmstate/nmstate-handler-mvj78" Dec 10 12:28:55 crc kubenswrapper[4689]: I1210 12:28:55.227961 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/b3a78d11-7f23-4c35-aee3-0b2a7a19a041-plugin-serving-cert\") pod \"nmstate-console-plugin-7fbb5f6569-97b9n\" (UID: \"b3a78d11-7f23-4c35-aee3-0b2a7a19a041\") " pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-97b9n" Dec 10 12:28:55 crc kubenswrapper[4689]: I1210 12:28:55.228005 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/7f919f52-0709-4db5-8158-8be0da507d54-ovs-socket\") pod \"nmstate-handler-mvj78\" (UID: \"7f919f52-0709-4db5-8158-8be0da507d54\") " pod="openshift-nmstate/nmstate-handler-mvj78" Dec 10 12:28:55 crc kubenswrapper[4689]: I1210 12:28:55.228051 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/6b4e153d-87c2-4ebb-b47c-77f12331ab68-tls-key-pair\") pod \"nmstate-webhook-5f6d4c5ccb-wghvl\" (UID: \"6b4e153d-87c2-4ebb-b47c-77f12331ab68\") " pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-wghvl" Dec 10 12:28:55 crc kubenswrapper[4689]: E1210 12:28:55.228147 4689 secret.go:188] Couldn't get secret openshift-nmstate/openshift-nmstate-webhook: secret "openshift-nmstate-webhook" not found Dec 10 12:28:55 crc kubenswrapper[4689]: E1210 12:28:55.228204 4689 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6b4e153d-87c2-4ebb-b47c-77f12331ab68-tls-key-pair podName:6b4e153d-87c2-4ebb-b47c-77f12331ab68 nodeName:}" failed. No retries permitted until 2025-12-10 12:28:55.728186397 +0000 UTC m=+803.516267545 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "tls-key-pair" (UniqueName: "kubernetes.io/secret/6b4e153d-87c2-4ebb-b47c-77f12331ab68-tls-key-pair") pod "nmstate-webhook-5f6d4c5ccb-wghvl" (UID: "6b4e153d-87c2-4ebb-b47c-77f12331ab68") : secret "openshift-nmstate-webhook" not found Dec 10 12:28:55 crc kubenswrapper[4689]: I1210 12:28:55.228227 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/7f919f52-0709-4db5-8158-8be0da507d54-dbus-socket\") pod \"nmstate-handler-mvj78\" (UID: \"7f919f52-0709-4db5-8158-8be0da507d54\") " pod="openshift-nmstate/nmstate-handler-mvj78" Dec 10 12:28:55 crc kubenswrapper[4689]: I1210 12:28:55.236611 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-7f946cbc9-sf9vn" Dec 10 12:28:55 crc kubenswrapper[4689]: I1210 12:28:55.248339 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-6978598799-bkv7n"] Dec 10 12:28:55 crc kubenswrapper[4689]: I1210 12:28:55.249074 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-6978598799-bkv7n" Dec 10 12:28:55 crc kubenswrapper[4689]: I1210 12:28:55.260210 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t4svc\" (UniqueName: \"kubernetes.io/projected/6b4e153d-87c2-4ebb-b47c-77f12331ab68-kube-api-access-t4svc\") pod \"nmstate-webhook-5f6d4c5ccb-wghvl\" (UID: \"6b4e153d-87c2-4ebb-b47c-77f12331ab68\") " pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-wghvl" Dec 10 12:28:55 crc kubenswrapper[4689]: I1210 12:28:55.271634 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gwkbt\" (UniqueName: \"kubernetes.io/projected/7f919f52-0709-4db5-8158-8be0da507d54-kube-api-access-gwkbt\") pod \"nmstate-handler-mvj78\" (UID: \"7f919f52-0709-4db5-8158-8be0da507d54\") " pod="openshift-nmstate/nmstate-handler-mvj78" Dec 10 12:28:55 crc kubenswrapper[4689]: I1210 12:28:55.267959 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-6978598799-bkv7n"] Dec 10 12:28:55 crc kubenswrapper[4689]: I1210 12:28:55.292238 4689 util.go:30] "No sandbox for pod can be found. 
Dec 10 12:28:55 crc kubenswrapper[4689]: W1210 12:28:55.322109 4689 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7f919f52_0709_4db5_8158_8be0da507d54.slice/crio-a2d6670098a70e56aec0fadc868e5d15e5692f4cab5831788f757ceddaab5fec WatchSource:0}: Error finding container a2d6670098a70e56aec0fadc868e5d15e5692f4cab5831788f757ceddaab5fec: Status 404 returned error can't find the container with id a2d6670098a70e56aec0fadc868e5d15e5692f4cab5831788f757ceddaab5fec
Dec 10 12:28:55 crc kubenswrapper[4689]: I1210 12:28:55.328618 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/a60b53a1-5ada-448c-a2eb-27c546f76473-service-ca\") pod \"console-6978598799-bkv7n\" (UID: \"a60b53a1-5ada-448c-a2eb-27c546f76473\") " pod="openshift-console/console-6978598799-bkv7n"
Dec 10 12:28:55 crc kubenswrapper[4689]: I1210 12:28:55.328659 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/a60b53a1-5ada-448c-a2eb-27c546f76473-oauth-serving-cert\") pod \"console-6978598799-bkv7n\" (UID: \"a60b53a1-5ada-448c-a2eb-27c546f76473\") " pod="openshift-console/console-6978598799-bkv7n"
Dec 10 12:28:55 crc kubenswrapper[4689]: I1210 12:28:55.328698 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4hjtn\" (UniqueName: \"kubernetes.io/projected/b3a78d11-7f23-4c35-aee3-0b2a7a19a041-kube-api-access-4hjtn\") pod \"nmstate-console-plugin-7fbb5f6569-97b9n\" (UID: \"b3a78d11-7f23-4c35-aee3-0b2a7a19a041\") " pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-97b9n"
Dec 10 12:28:55 crc kubenswrapper[4689]: I1210 12:28:55.328726 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/b3a78d11-7f23-4c35-aee3-0b2a7a19a041-plugin-serving-cert\") pod \"nmstate-console-plugin-7fbb5f6569-97b9n\" (UID: \"b3a78d11-7f23-4c35-aee3-0b2a7a19a041\") " pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-97b9n"
Dec 10 12:28:55 crc kubenswrapper[4689]: I1210 12:28:55.328837 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a60b53a1-5ada-448c-a2eb-27c546f76473-trusted-ca-bundle\") pod \"console-6978598799-bkv7n\" (UID: \"a60b53a1-5ada-448c-a2eb-27c546f76473\") " pod="openshift-console/console-6978598799-bkv7n"
Dec 10 12:28:55 crc kubenswrapper[4689]: I1210 12:28:55.328855 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/a60b53a1-5ada-448c-a2eb-27c546f76473-console-serving-cert\") pod \"console-6978598799-bkv7n\" (UID: \"a60b53a1-5ada-448c-a2eb-27c546f76473\") " pod="openshift-console/console-6978598799-bkv7n"
Dec 10 12:28:55 crc kubenswrapper[4689]: I1210 12:28:55.328941 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/a60b53a1-5ada-448c-a2eb-27c546f76473-console-config\") pod \"console-6978598799-bkv7n\" (UID: \"a60b53a1-5ada-448c-a2eb-27c546f76473\") " pod="openshift-console/console-6978598799-bkv7n"
Dec 10 12:28:55 crc kubenswrapper[4689]: I1210 12:28:55.328961 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/a60b53a1-5ada-448c-a2eb-27c546f76473-console-oauth-config\") pod \"console-6978598799-bkv7n\" (UID: \"a60b53a1-5ada-448c-a2eb-27c546f76473\") " pod="openshift-console/console-6978598799-bkv7n"
Dec 10 12:28:55 crc kubenswrapper[4689]: I1210 12:28:55.329001 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/b3a78d11-7f23-4c35-aee3-0b2a7a19a041-nginx-conf\") pod \"nmstate-console-plugin-7fbb5f6569-97b9n\" (UID: \"b3a78d11-7f23-4c35-aee3-0b2a7a19a041\") " pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-97b9n"
Dec 10 12:28:55 crc kubenswrapper[4689]: I1210 12:28:55.329018 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-px72n\" (UniqueName: \"kubernetes.io/projected/a60b53a1-5ada-448c-a2eb-27c546f76473-kube-api-access-px72n\") pod \"console-6978598799-bkv7n\" (UID: \"a60b53a1-5ada-448c-a2eb-27c546f76473\") " pod="openshift-console/console-6978598799-bkv7n"
Dec 10 12:28:55 crc kubenswrapper[4689]: I1210 12:28:55.330612 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/b3a78d11-7f23-4c35-aee3-0b2a7a19a041-nginx-conf\") pod \"nmstate-console-plugin-7fbb5f6569-97b9n\" (UID: \"b3a78d11-7f23-4c35-aee3-0b2a7a19a041\") " pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-97b9n"
Dec 10 12:28:55 crc kubenswrapper[4689]: I1210 12:28:55.336953 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/b3a78d11-7f23-4c35-aee3-0b2a7a19a041-plugin-serving-cert\") pod \"nmstate-console-plugin-7fbb5f6569-97b9n\" (UID: \"b3a78d11-7f23-4c35-aee3-0b2a7a19a041\") " pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-97b9n"
Dec 10 12:28:55 crc kubenswrapper[4689]: I1210 12:28:55.352835 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4hjtn\" (UniqueName: \"kubernetes.io/projected/b3a78d11-7f23-4c35-aee3-0b2a7a19a041-kube-api-access-4hjtn\") pod \"nmstate-console-plugin-7fbb5f6569-97b9n\" (UID: \"b3a78d11-7f23-4c35-aee3-0b2a7a19a041\") " pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-97b9n"
Dec 10 12:28:55 crc kubenswrapper[4689]: I1210 12:28:55.374700 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-97b9n"
Dec 10 12:28:55 crc kubenswrapper[4689]: I1210 12:28:55.429877 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a60b53a1-5ada-448c-a2eb-27c546f76473-trusted-ca-bundle\") pod \"console-6978598799-bkv7n\" (UID: \"a60b53a1-5ada-448c-a2eb-27c546f76473\") " pod="openshift-console/console-6978598799-bkv7n"
Dec 10 12:28:55 crc kubenswrapper[4689]: I1210 12:28:55.429914 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/a60b53a1-5ada-448c-a2eb-27c546f76473-console-serving-cert\") pod \"console-6978598799-bkv7n\" (UID: \"a60b53a1-5ada-448c-a2eb-27c546f76473\") " pod="openshift-console/console-6978598799-bkv7n"
Dec 10 12:28:55 crc kubenswrapper[4689]: I1210 12:28:55.429956 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/a60b53a1-5ada-448c-a2eb-27c546f76473-console-config\") pod \"console-6978598799-bkv7n\" (UID: \"a60b53a1-5ada-448c-a2eb-27c546f76473\") " pod="openshift-console/console-6978598799-bkv7n"
Dec 10 12:28:55 crc kubenswrapper[4689]: I1210 12:28:55.429986 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/a60b53a1-5ada-448c-a2eb-27c546f76473-console-oauth-config\") pod \"console-6978598799-bkv7n\" (UID: \"a60b53a1-5ada-448c-a2eb-27c546f76473\") " pod="openshift-console/console-6978598799-bkv7n"
Dec 10 12:28:55 crc kubenswrapper[4689]: I1210 12:28:55.430006 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-px72n\" (UniqueName: \"kubernetes.io/projected/a60b53a1-5ada-448c-a2eb-27c546f76473-kube-api-access-px72n\") pod \"console-6978598799-bkv7n\" (UID: \"a60b53a1-5ada-448c-a2eb-27c546f76473\") " pod="openshift-console/console-6978598799-bkv7n"
Dec 10 12:28:55 crc kubenswrapper[4689]: I1210 12:28:55.430025 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/a60b53a1-5ada-448c-a2eb-27c546f76473-service-ca\") pod \"console-6978598799-bkv7n\" (UID: \"a60b53a1-5ada-448c-a2eb-27c546f76473\") " pod="openshift-console/console-6978598799-bkv7n"
Dec 10 12:28:55 crc kubenswrapper[4689]: I1210 12:28:55.430040 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/a60b53a1-5ada-448c-a2eb-27c546f76473-oauth-serving-cert\") pod \"console-6978598799-bkv7n\" (UID: \"a60b53a1-5ada-448c-a2eb-27c546f76473\") " pod="openshift-console/console-6978598799-bkv7n"
Dec 10 12:28:55 crc kubenswrapper[4689]: I1210 12:28:55.430878 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/a60b53a1-5ada-448c-a2eb-27c546f76473-oauth-serving-cert\") pod \"console-6978598799-bkv7n\" (UID: \"a60b53a1-5ada-448c-a2eb-27c546f76473\") " pod="openshift-console/console-6978598799-bkv7n"
Dec 10 12:28:55 crc kubenswrapper[4689]: I1210 12:28:55.432667 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/a60b53a1-5ada-448c-a2eb-27c546f76473-console-config\") pod \"console-6978598799-bkv7n\" (UID: \"a60b53a1-5ada-448c-a2eb-27c546f76473\") " pod="openshift-console/console-6978598799-bkv7n"
\"a60b53a1-5ada-448c-a2eb-27c546f76473\") " pod="openshift-console/console-6978598799-bkv7n" Dec 10 12:28:55 crc kubenswrapper[4689]: I1210 12:28:55.433655 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/a60b53a1-5ada-448c-a2eb-27c546f76473-service-ca\") pod \"console-6978598799-bkv7n\" (UID: \"a60b53a1-5ada-448c-a2eb-27c546f76473\") " pod="openshift-console/console-6978598799-bkv7n" Dec 10 12:28:55 crc kubenswrapper[4689]: I1210 12:28:55.433670 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a60b53a1-5ada-448c-a2eb-27c546f76473-trusted-ca-bundle\") pod \"console-6978598799-bkv7n\" (UID: \"a60b53a1-5ada-448c-a2eb-27c546f76473\") " pod="openshift-console/console-6978598799-bkv7n" Dec 10 12:28:55 crc kubenswrapper[4689]: I1210 12:28:55.435437 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/a60b53a1-5ada-448c-a2eb-27c546f76473-console-serving-cert\") pod \"console-6978598799-bkv7n\" (UID: \"a60b53a1-5ada-448c-a2eb-27c546f76473\") " pod="openshift-console/console-6978598799-bkv7n" Dec 10 12:28:55 crc kubenswrapper[4689]: I1210 12:28:55.437558 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/a60b53a1-5ada-448c-a2eb-27c546f76473-console-oauth-config\") pod \"console-6978598799-bkv7n\" (UID: \"a60b53a1-5ada-448c-a2eb-27c546f76473\") " pod="openshift-console/console-6978598799-bkv7n" Dec 10 12:28:55 crc kubenswrapper[4689]: I1210 12:28:55.448571 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-px72n\" (UniqueName: \"kubernetes.io/projected/a60b53a1-5ada-448c-a2eb-27c546f76473-kube-api-access-px72n\") pod \"console-6978598799-bkv7n\" (UID: \"a60b53a1-5ada-448c-a2eb-27c546f76473\") " pod="openshift-console/console-6978598799-bkv7n" Dec 10 12:28:55 crc kubenswrapper[4689]: I1210 12:28:55.483557 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-7f946cbc9-sf9vn"] Dec 10 12:28:55 crc kubenswrapper[4689]: I1210 12:28:55.565592 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7fbb5f6569-97b9n"] Dec 10 12:28:55 crc kubenswrapper[4689]: I1210 12:28:55.604487 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-6978598799-bkv7n" Dec 10 12:28:55 crc kubenswrapper[4689]: I1210 12:28:55.732735 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/6b4e153d-87c2-4ebb-b47c-77f12331ab68-tls-key-pair\") pod \"nmstate-webhook-5f6d4c5ccb-wghvl\" (UID: \"6b4e153d-87c2-4ebb-b47c-77f12331ab68\") " pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-wghvl" Dec 10 12:28:55 crc kubenswrapper[4689]: I1210 12:28:55.736793 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/6b4e153d-87c2-4ebb-b47c-77f12331ab68-tls-key-pair\") pod \"nmstate-webhook-5f6d4c5ccb-wghvl\" (UID: \"6b4e153d-87c2-4ebb-b47c-77f12331ab68\") " pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-wghvl" Dec 10 12:28:55 crc kubenswrapper[4689]: I1210 12:28:55.882759 4689 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-wghvl" Dec 10 12:28:55 crc kubenswrapper[4689]: I1210 12:28:55.931500 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-7f946cbc9-sf9vn" event={"ID":"dfc59f59-cc82-4d61-ac2b-e9dbf2b0f5fd","Type":"ContainerStarted","Data":"60f1b45858a9d613731b14b15a29fdaaed51818691fb5aad15303e7a9c6fcaee"} Dec 10 12:28:55 crc kubenswrapper[4689]: I1210 12:28:55.933350 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-mvj78" event={"ID":"7f919f52-0709-4db5-8158-8be0da507d54","Type":"ContainerStarted","Data":"a2d6670098a70e56aec0fadc868e5d15e5692f4cab5831788f757ceddaab5fec"} Dec 10 12:28:55 crc kubenswrapper[4689]: I1210 12:28:55.934654 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-97b9n" event={"ID":"b3a78d11-7f23-4c35-aee3-0b2a7a19a041","Type":"ContainerStarted","Data":"0852a52e09b0b30bc65c640ee1de3d27ffdd4ea1265a24296de1afca1df71fdb"} Dec 10 12:28:56 crc kubenswrapper[4689]: I1210 12:28:56.013739 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-6978598799-bkv7n"] Dec 10 12:28:56 crc kubenswrapper[4689]: I1210 12:28:56.017933 4689 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-sb99b" Dec 10 12:28:56 crc kubenswrapper[4689]: I1210 12:28:56.087947 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-sb99b" Dec 10 12:28:56 crc kubenswrapper[4689]: I1210 12:28:56.125445 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-5f6d4c5ccb-wghvl"] Dec 10 12:28:56 crc kubenswrapper[4689]: W1210 12:28:56.126918 4689 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6b4e153d_87c2_4ebb_b47c_77f12331ab68.slice/crio-09d15d97d17f737904a943c1abea10bbea074c083ae95b9af2dc72ebcc6f011c WatchSource:0}: Error finding container 09d15d97d17f737904a943c1abea10bbea074c083ae95b9af2dc72ebcc6f011c: Status 404 returned error can't find the container with id 09d15d97d17f737904a943c1abea10bbea074c083ae95b9af2dc72ebcc6f011c Dec 10 12:28:56 crc kubenswrapper[4689]: I1210 12:28:56.942163 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-wghvl" event={"ID":"6b4e153d-87c2-4ebb-b47c-77f12331ab68","Type":"ContainerStarted","Data":"09d15d97d17f737904a943c1abea10bbea074c083ae95b9af2dc72ebcc6f011c"} Dec 10 12:28:56 crc kubenswrapper[4689]: I1210 12:28:56.943858 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-6978598799-bkv7n" event={"ID":"a60b53a1-5ada-448c-a2eb-27c546f76473","Type":"ContainerStarted","Data":"12fb2a73fb8974a294588213f70382978e5ecc902ac61d515f957c65989badfc"} Dec 10 12:28:56 crc kubenswrapper[4689]: I1210 12:28:56.943892 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-6978598799-bkv7n" event={"ID":"a60b53a1-5ada-448c-a2eb-27c546f76473","Type":"ContainerStarted","Data":"497f5bc4e34eb8d1375c9adc8940ed5c44cd8be034250e8605495ca31bf2e2b8"} Dec 10 12:28:56 crc kubenswrapper[4689]: I1210 12:28:56.976447 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-6978598799-bkv7n" podStartSLOduration=1.976406782 podStartE2EDuration="1.976406782s" 
podCreationTimestamp="2025-12-10 12:28:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 12:28:56.974536715 +0000 UTC m=+804.762617853" watchObservedRunningTime="2025-12-10 12:28:56.976406782 +0000 UTC m=+804.764487920" Dec 10 12:28:58 crc kubenswrapper[4689]: I1210 12:28:58.235007 4689 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-sb99b"] Dec 10 12:28:58 crc kubenswrapper[4689]: I1210 12:28:58.235763 4689 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-sb99b" podUID="59c9a037-78c1-4c47-8c5d-b8491959c657" containerName="registry-server" containerID="cri-o://cc478a851a220595a645503788d6630082a6427c338ef32f4ab24ec171ff4977" gracePeriod=2 Dec 10 12:28:58 crc kubenswrapper[4689]: I1210 12:28:58.611254 4689 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-sb99b" Dec 10 12:28:58 crc kubenswrapper[4689]: I1210 12:28:58.774596 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/59c9a037-78c1-4c47-8c5d-b8491959c657-utilities\") pod \"59c9a037-78c1-4c47-8c5d-b8491959c657\" (UID: \"59c9a037-78c1-4c47-8c5d-b8491959c657\") " Dec 10 12:28:58 crc kubenswrapper[4689]: I1210 12:28:58.774720 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rmf6g\" (UniqueName: \"kubernetes.io/projected/59c9a037-78c1-4c47-8c5d-b8491959c657-kube-api-access-rmf6g\") pod \"59c9a037-78c1-4c47-8c5d-b8491959c657\" (UID: \"59c9a037-78c1-4c47-8c5d-b8491959c657\") " Dec 10 12:28:58 crc kubenswrapper[4689]: I1210 12:28:58.774894 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/59c9a037-78c1-4c47-8c5d-b8491959c657-catalog-content\") pod \"59c9a037-78c1-4c47-8c5d-b8491959c657\" (UID: \"59c9a037-78c1-4c47-8c5d-b8491959c657\") " Dec 10 12:28:58 crc kubenswrapper[4689]: I1210 12:28:58.777395 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/59c9a037-78c1-4c47-8c5d-b8491959c657-utilities" (OuterVolumeSpecName: "utilities") pod "59c9a037-78c1-4c47-8c5d-b8491959c657" (UID: "59c9a037-78c1-4c47-8c5d-b8491959c657"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 10 12:28:58 crc kubenswrapper[4689]: I1210 12:28:58.781426 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/59c9a037-78c1-4c47-8c5d-b8491959c657-kube-api-access-rmf6g" (OuterVolumeSpecName: "kube-api-access-rmf6g") pod "59c9a037-78c1-4c47-8c5d-b8491959c657" (UID: "59c9a037-78c1-4c47-8c5d-b8491959c657"). InnerVolumeSpecName "kube-api-access-rmf6g". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 12:28:58 crc kubenswrapper[4689]: I1210 12:28:58.876475 4689 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/59c9a037-78c1-4c47-8c5d-b8491959c657-utilities\") on node \"crc\" DevicePath \"\"" Dec 10 12:28:58 crc kubenswrapper[4689]: I1210 12:28:58.876534 4689 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rmf6g\" (UniqueName: \"kubernetes.io/projected/59c9a037-78c1-4c47-8c5d-b8491959c657-kube-api-access-rmf6g\") on node \"crc\" DevicePath \"\"" Dec 10 12:28:58 crc kubenswrapper[4689]: I1210 12:28:58.883230 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/59c9a037-78c1-4c47-8c5d-b8491959c657-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "59c9a037-78c1-4c47-8c5d-b8491959c657" (UID: "59c9a037-78c1-4c47-8c5d-b8491959c657"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 10 12:28:58 crc kubenswrapper[4689]: I1210 12:28:58.953563 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-wghvl" event={"ID":"6b4e153d-87c2-4ebb-b47c-77f12331ab68","Type":"ContainerStarted","Data":"5ab21586b78e98c93cde3fbd1e809d2a78458a28a08ddb2f6b786ae3d974b9e5"} Dec 10 12:28:58 crc kubenswrapper[4689]: I1210 12:28:58.954016 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-wghvl" Dec 10 12:28:58 crc kubenswrapper[4689]: I1210 12:28:58.955288 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-97b9n" event={"ID":"b3a78d11-7f23-4c35-aee3-0b2a7a19a041","Type":"ContainerStarted","Data":"6399abdafb338649fda414f40037d3e8eaefa505dbdc49a526f73bccbb2ab5b0"} Dec 10 12:28:58 crc kubenswrapper[4689]: I1210 12:28:58.956952 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-7f946cbc9-sf9vn" event={"ID":"dfc59f59-cc82-4d61-ac2b-e9dbf2b0f5fd","Type":"ContainerStarted","Data":"101986bfef619a51efa3797536e7b5c6d7ac214f46711b7582ad0ea85d65eccf"} Dec 10 12:28:58 crc kubenswrapper[4689]: I1210 12:28:58.957826 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-mvj78" event={"ID":"7f919f52-0709-4db5-8158-8be0da507d54","Type":"ContainerStarted","Data":"058adcbd9e94722c73ef5e776e953b97ba01db26c9bba13226587275d44fcbc4"} Dec 10 12:28:58 crc kubenswrapper[4689]: I1210 12:28:58.958030 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-handler-mvj78" Dec 10 12:28:58 crc kubenswrapper[4689]: I1210 12:28:58.959558 4689 generic.go:334] "Generic (PLEG): container finished" podID="59c9a037-78c1-4c47-8c5d-b8491959c657" containerID="cc478a851a220595a645503788d6630082a6427c338ef32f4ab24ec171ff4977" exitCode=0 Dec 10 12:28:58 crc kubenswrapper[4689]: I1210 12:28:58.959603 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-sb99b" event={"ID":"59c9a037-78c1-4c47-8c5d-b8491959c657","Type":"ContainerDied","Data":"cc478a851a220595a645503788d6630082a6427c338ef32f4ab24ec171ff4977"} Dec 10 12:28:58 crc kubenswrapper[4689]: I1210 12:28:58.959737 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-sb99b" 
event={"ID":"59c9a037-78c1-4c47-8c5d-b8491959c657","Type":"ContainerDied","Data":"8c8676701caa1fd42b7fb739e76a21900ec860f2a002dd7db5f019c0edec1b28"} Dec 10 12:28:58 crc kubenswrapper[4689]: I1210 12:28:58.959744 4689 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-sb99b" Dec 10 12:28:58 crc kubenswrapper[4689]: I1210 12:28:58.959766 4689 scope.go:117] "RemoveContainer" containerID="cc478a851a220595a645503788d6630082a6427c338ef32f4ab24ec171ff4977" Dec 10 12:28:58 crc kubenswrapper[4689]: I1210 12:28:58.979278 4689 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/59c9a037-78c1-4c47-8c5d-b8491959c657-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 10 12:28:58 crc kubenswrapper[4689]: I1210 12:28:58.980109 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-wghvl" podStartSLOduration=2.885654001 podStartE2EDuration="4.980089399s" podCreationTimestamp="2025-12-10 12:28:54 +0000 UTC" firstStartedPulling="2025-12-10 12:28:56.129365287 +0000 UTC m=+803.917446435" lastFinishedPulling="2025-12-10 12:28:58.223800685 +0000 UTC m=+806.011881833" observedRunningTime="2025-12-10 12:28:58.978111099 +0000 UTC m=+806.766192247" watchObservedRunningTime="2025-12-10 12:28:58.980089399 +0000 UTC m=+806.768170537" Dec 10 12:28:58 crc kubenswrapper[4689]: I1210 12:28:58.982349 4689 scope.go:117] "RemoveContainer" containerID="0f9069dbfa14c3b3a244580d4a9f1cb2bc67693903aac705e389657459836dc0" Dec 10 12:28:59 crc kubenswrapper[4689]: I1210 12:28:59.003759 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-97b9n" podStartSLOduration=1.37166289 podStartE2EDuration="4.003737881s" podCreationTimestamp="2025-12-10 12:28:55 +0000 UTC" firstStartedPulling="2025-12-10 12:28:55.573319524 +0000 UTC m=+803.361400662" lastFinishedPulling="2025-12-10 12:28:58.205394505 +0000 UTC m=+805.993475653" observedRunningTime="2025-12-10 12:28:58.99850294 +0000 UTC m=+806.786584088" watchObservedRunningTime="2025-12-10 12:28:59.003737881 +0000 UTC m=+806.791819019" Dec 10 12:28:59 crc kubenswrapper[4689]: I1210 12:28:59.020719 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-handler-mvj78" podStartSLOduration=2.143983353 podStartE2EDuration="5.020703826s" podCreationTimestamp="2025-12-10 12:28:54 +0000 UTC" firstStartedPulling="2025-12-10 12:28:55.327853231 +0000 UTC m=+803.115934369" lastFinishedPulling="2025-12-10 12:28:58.204573704 +0000 UTC m=+805.992654842" observedRunningTime="2025-12-10 12:28:59.018909951 +0000 UTC m=+806.806991119" watchObservedRunningTime="2025-12-10 12:28:59.020703826 +0000 UTC m=+806.808784954" Dec 10 12:28:59 crc kubenswrapper[4689]: I1210 12:28:59.036367 4689 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-sb99b"] Dec 10 12:28:59 crc kubenswrapper[4689]: I1210 12:28:59.041213 4689 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-sb99b"] Dec 10 12:28:59 crc kubenswrapper[4689]: I1210 12:28:59.057581 4689 scope.go:117] "RemoveContainer" containerID="45557321b36669e54522e9dfa98c81e1415c6775c5e8a08a57ed721c0ddbaec9" Dec 10 12:28:59 crc kubenswrapper[4689]: I1210 12:28:59.075814 4689 scope.go:117] "RemoveContainer" 
containerID="cc478a851a220595a645503788d6630082a6427c338ef32f4ab24ec171ff4977" Dec 10 12:28:59 crc kubenswrapper[4689]: E1210 12:28:59.076310 4689 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cc478a851a220595a645503788d6630082a6427c338ef32f4ab24ec171ff4977\": container with ID starting with cc478a851a220595a645503788d6630082a6427c338ef32f4ab24ec171ff4977 not found: ID does not exist" containerID="cc478a851a220595a645503788d6630082a6427c338ef32f4ab24ec171ff4977" Dec 10 12:28:59 crc kubenswrapper[4689]: I1210 12:28:59.076346 4689 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cc478a851a220595a645503788d6630082a6427c338ef32f4ab24ec171ff4977"} err="failed to get container status \"cc478a851a220595a645503788d6630082a6427c338ef32f4ab24ec171ff4977\": rpc error: code = NotFound desc = could not find container \"cc478a851a220595a645503788d6630082a6427c338ef32f4ab24ec171ff4977\": container with ID starting with cc478a851a220595a645503788d6630082a6427c338ef32f4ab24ec171ff4977 not found: ID does not exist" Dec 10 12:28:59 crc kubenswrapper[4689]: I1210 12:28:59.076374 4689 scope.go:117] "RemoveContainer" containerID="0f9069dbfa14c3b3a244580d4a9f1cb2bc67693903aac705e389657459836dc0" Dec 10 12:28:59 crc kubenswrapper[4689]: E1210 12:28:59.076778 4689 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0f9069dbfa14c3b3a244580d4a9f1cb2bc67693903aac705e389657459836dc0\": container with ID starting with 0f9069dbfa14c3b3a244580d4a9f1cb2bc67693903aac705e389657459836dc0 not found: ID does not exist" containerID="0f9069dbfa14c3b3a244580d4a9f1cb2bc67693903aac705e389657459836dc0" Dec 10 12:28:59 crc kubenswrapper[4689]: I1210 12:28:59.076805 4689 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0f9069dbfa14c3b3a244580d4a9f1cb2bc67693903aac705e389657459836dc0"} err="failed to get container status \"0f9069dbfa14c3b3a244580d4a9f1cb2bc67693903aac705e389657459836dc0\": rpc error: code = NotFound desc = could not find container \"0f9069dbfa14c3b3a244580d4a9f1cb2bc67693903aac705e389657459836dc0\": container with ID starting with 0f9069dbfa14c3b3a244580d4a9f1cb2bc67693903aac705e389657459836dc0 not found: ID does not exist" Dec 10 12:28:59 crc kubenswrapper[4689]: I1210 12:28:59.076822 4689 scope.go:117] "RemoveContainer" containerID="45557321b36669e54522e9dfa98c81e1415c6775c5e8a08a57ed721c0ddbaec9" Dec 10 12:28:59 crc kubenswrapper[4689]: E1210 12:28:59.077326 4689 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"45557321b36669e54522e9dfa98c81e1415c6775c5e8a08a57ed721c0ddbaec9\": container with ID starting with 45557321b36669e54522e9dfa98c81e1415c6775c5e8a08a57ed721c0ddbaec9 not found: ID does not exist" containerID="45557321b36669e54522e9dfa98c81e1415c6775c5e8a08a57ed721c0ddbaec9" Dec 10 12:28:59 crc kubenswrapper[4689]: I1210 12:28:59.077356 4689 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"45557321b36669e54522e9dfa98c81e1415c6775c5e8a08a57ed721c0ddbaec9"} err="failed to get container status \"45557321b36669e54522e9dfa98c81e1415c6775c5e8a08a57ed721c0ddbaec9\": rpc error: code = NotFound desc = could not find container \"45557321b36669e54522e9dfa98c81e1415c6775c5e8a08a57ed721c0ddbaec9\": container with ID starting with 
45557321b36669e54522e9dfa98c81e1415c6775c5e8a08a57ed721c0ddbaec9 not found: ID does not exist" Dec 10 12:29:00 crc kubenswrapper[4689]: I1210 12:29:00.529777 4689 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="59c9a037-78c1-4c47-8c5d-b8491959c657" path="/var/lib/kubelet/pods/59c9a037-78c1-4c47-8c5d-b8491959c657/volumes" Dec 10 12:29:00 crc kubenswrapper[4689]: I1210 12:29:00.981543 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-7f946cbc9-sf9vn" event={"ID":"dfc59f59-cc82-4d61-ac2b-e9dbf2b0f5fd","Type":"ContainerStarted","Data":"8f0291720592391481fb4d4430f69a52f4fb31a830090d9707522d1daacc2b99"} Dec 10 12:29:01 crc kubenswrapper[4689]: I1210 12:29:01.022710 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-metrics-7f946cbc9-sf9vn" podStartSLOduration=2.317218908 podStartE2EDuration="7.022681481s" podCreationTimestamp="2025-12-10 12:28:54 +0000 UTC" firstStartedPulling="2025-12-10 12:28:55.481448054 +0000 UTC m=+803.269529192" lastFinishedPulling="2025-12-10 12:29:00.186910617 +0000 UTC m=+807.974991765" observedRunningTime="2025-12-10 12:29:01.006090715 +0000 UTC m=+808.794171893" watchObservedRunningTime="2025-12-10 12:29:01.022681481 +0000 UTC m=+808.810762659" Dec 10 12:29:05 crc kubenswrapper[4689]: I1210 12:29:05.329437 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-handler-mvj78" Dec 10 12:29:05 crc kubenswrapper[4689]: I1210 12:29:05.605469 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-6978598799-bkv7n" Dec 10 12:29:05 crc kubenswrapper[4689]: I1210 12:29:05.605558 4689 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-6978598799-bkv7n" Dec 10 12:29:05 crc kubenswrapper[4689]: I1210 12:29:05.616082 4689 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-6978598799-bkv7n" Dec 10 12:29:06 crc kubenswrapper[4689]: I1210 12:29:06.025499 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-6978598799-bkv7n" Dec 10 12:29:06 crc kubenswrapper[4689]: I1210 12:29:06.097386 4689 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-9s8ps"] Dec 10 12:29:15 crc kubenswrapper[4689]: I1210 12:29:15.893812 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-wghvl" Dec 10 12:29:30 crc kubenswrapper[4689]: I1210 12:29:30.484881 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83frvcd"] Dec 10 12:29:30 crc kubenswrapper[4689]: E1210 12:29:30.485639 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="59c9a037-78c1-4c47-8c5d-b8491959c657" containerName="extract-utilities" Dec 10 12:29:30 crc kubenswrapper[4689]: I1210 12:29:30.485650 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="59c9a037-78c1-4c47-8c5d-b8491959c657" containerName="extract-utilities" Dec 10 12:29:30 crc kubenswrapper[4689]: E1210 12:29:30.485670 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="59c9a037-78c1-4c47-8c5d-b8491959c657" containerName="extract-content" Dec 10 12:29:30 crc kubenswrapper[4689]: I1210 12:29:30.485676 4689 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="59c9a037-78c1-4c47-8c5d-b8491959c657" containerName="extract-content" Dec 10 12:29:30 crc kubenswrapper[4689]: E1210 12:29:30.485685 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="59c9a037-78c1-4c47-8c5d-b8491959c657" containerName="registry-server" Dec 10 12:29:30 crc kubenswrapper[4689]: I1210 12:29:30.485691 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="59c9a037-78c1-4c47-8c5d-b8491959c657" containerName="registry-server" Dec 10 12:29:30 crc kubenswrapper[4689]: I1210 12:29:30.485795 4689 memory_manager.go:354] "RemoveStaleState removing state" podUID="59c9a037-78c1-4c47-8c5d-b8491959c657" containerName="registry-server" Dec 10 12:29:30 crc kubenswrapper[4689]: I1210 12:29:30.486548 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83frvcd" Dec 10 12:29:30 crc kubenswrapper[4689]: I1210 12:29:30.489253 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Dec 10 12:29:30 crc kubenswrapper[4689]: I1210 12:29:30.507868 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83frvcd"] Dec 10 12:29:30 crc kubenswrapper[4689]: I1210 12:29:30.535673 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zc2pn\" (UniqueName: \"kubernetes.io/projected/c79ff6a1-8c4b-4d40-af83-b6aaa022f1fb-kube-api-access-zc2pn\") pod \"af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83frvcd\" (UID: \"c79ff6a1-8c4b-4d40-af83-b6aaa022f1fb\") " pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83frvcd" Dec 10 12:29:30 crc kubenswrapper[4689]: I1210 12:29:30.535730 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/c79ff6a1-8c4b-4d40-af83-b6aaa022f1fb-util\") pod \"af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83frvcd\" (UID: \"c79ff6a1-8c4b-4d40-af83-b6aaa022f1fb\") " pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83frvcd" Dec 10 12:29:30 crc kubenswrapper[4689]: I1210 12:29:30.535815 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/c79ff6a1-8c4b-4d40-af83-b6aaa022f1fb-bundle\") pod \"af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83frvcd\" (UID: \"c79ff6a1-8c4b-4d40-af83-b6aaa022f1fb\") " pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83frvcd" Dec 10 12:29:30 crc kubenswrapper[4689]: I1210 12:29:30.637550 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zc2pn\" (UniqueName: \"kubernetes.io/projected/c79ff6a1-8c4b-4d40-af83-b6aaa022f1fb-kube-api-access-zc2pn\") pod \"af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83frvcd\" (UID: \"c79ff6a1-8c4b-4d40-af83-b6aaa022f1fb\") " pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83frvcd" Dec 10 12:29:30 crc kubenswrapper[4689]: I1210 12:29:30.637612 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/c79ff6a1-8c4b-4d40-af83-b6aaa022f1fb-util\") pod \"af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83frvcd\" (UID: 
\"c79ff6a1-8c4b-4d40-af83-b6aaa022f1fb\") " pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83frvcd" Dec 10 12:29:30 crc kubenswrapper[4689]: I1210 12:29:30.637694 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/c79ff6a1-8c4b-4d40-af83-b6aaa022f1fb-bundle\") pod \"af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83frvcd\" (UID: \"c79ff6a1-8c4b-4d40-af83-b6aaa022f1fb\") " pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83frvcd" Dec 10 12:29:30 crc kubenswrapper[4689]: I1210 12:29:30.638399 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/c79ff6a1-8c4b-4d40-af83-b6aaa022f1fb-util\") pod \"af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83frvcd\" (UID: \"c79ff6a1-8c4b-4d40-af83-b6aaa022f1fb\") " pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83frvcd" Dec 10 12:29:30 crc kubenswrapper[4689]: I1210 12:29:30.638438 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/c79ff6a1-8c4b-4d40-af83-b6aaa022f1fb-bundle\") pod \"af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83frvcd\" (UID: \"c79ff6a1-8c4b-4d40-af83-b6aaa022f1fb\") " pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83frvcd" Dec 10 12:29:30 crc kubenswrapper[4689]: I1210 12:29:30.654569 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zc2pn\" (UniqueName: \"kubernetes.io/projected/c79ff6a1-8c4b-4d40-af83-b6aaa022f1fb-kube-api-access-zc2pn\") pod \"af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83frvcd\" (UID: \"c79ff6a1-8c4b-4d40-af83-b6aaa022f1fb\") " pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83frvcd" Dec 10 12:29:30 crc kubenswrapper[4689]: I1210 12:29:30.809445 4689 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83frvcd" Dec 10 12:29:31 crc kubenswrapper[4689]: I1210 12:29:31.097473 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83frvcd"] Dec 10 12:29:31 crc kubenswrapper[4689]: W1210 12:29:31.110301 4689 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc79ff6a1_8c4b_4d40_af83_b6aaa022f1fb.slice/crio-4b41ece2ad396d9c6b27c26f2b8334aeef103890a35def802c87e6896c2ecbde WatchSource:0}: Error finding container 4b41ece2ad396d9c6b27c26f2b8334aeef103890a35def802c87e6896c2ecbde: Status 404 returned error can't find the container with id 4b41ece2ad396d9c6b27c26f2b8334aeef103890a35def802c87e6896c2ecbde Dec 10 12:29:31 crc kubenswrapper[4689]: I1210 12:29:31.163987 4689 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-f9d7485db-9s8ps" podUID="ee767cde-d698-4c01-b221-33c158999e60" containerName="console" containerID="cri-o://d6b376f542a0c4b8bf88f46122cb515ce1f7e0abf62053f50a340c26873a392f" gracePeriod=15 Dec 10 12:29:31 crc kubenswrapper[4689]: I1210 12:29:31.195658 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83frvcd" event={"ID":"c79ff6a1-8c4b-4d40-af83-b6aaa022f1fb","Type":"ContainerStarted","Data":"4b41ece2ad396d9c6b27c26f2b8334aeef103890a35def802c87e6896c2ecbde"} Dec 10 12:29:31 crc kubenswrapper[4689]: I1210 12:29:31.522742 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-9s8ps_ee767cde-d698-4c01-b221-33c158999e60/console/0.log" Dec 10 12:29:31 crc kubenswrapper[4689]: I1210 12:29:31.523167 4689 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-9s8ps" Dec 10 12:29:31 crc kubenswrapper[4689]: I1210 12:29:31.547805 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4hvcm\" (UniqueName: \"kubernetes.io/projected/ee767cde-d698-4c01-b221-33c158999e60-kube-api-access-4hvcm\") pod \"ee767cde-d698-4c01-b221-33c158999e60\" (UID: \"ee767cde-d698-4c01-b221-33c158999e60\") " Dec 10 12:29:31 crc kubenswrapper[4689]: I1210 12:29:31.547916 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/ee767cde-d698-4c01-b221-33c158999e60-service-ca\") pod \"ee767cde-d698-4c01-b221-33c158999e60\" (UID: \"ee767cde-d698-4c01-b221-33c158999e60\") " Dec 10 12:29:31 crc kubenswrapper[4689]: I1210 12:29:31.548021 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ee767cde-d698-4c01-b221-33c158999e60-trusted-ca-bundle\") pod \"ee767cde-d698-4c01-b221-33c158999e60\" (UID: \"ee767cde-d698-4c01-b221-33c158999e60\") " Dec 10 12:29:31 crc kubenswrapper[4689]: I1210 12:29:31.548073 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/ee767cde-d698-4c01-b221-33c158999e60-console-serving-cert\") pod \"ee767cde-d698-4c01-b221-33c158999e60\" (UID: \"ee767cde-d698-4c01-b221-33c158999e60\") " Dec 10 12:29:31 crc kubenswrapper[4689]: I1210 12:29:31.548159 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/ee767cde-d698-4c01-b221-33c158999e60-oauth-serving-cert\") pod \"ee767cde-d698-4c01-b221-33c158999e60\" (UID: \"ee767cde-d698-4c01-b221-33c158999e60\") " Dec 10 12:29:31 crc kubenswrapper[4689]: I1210 12:29:31.548219 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/ee767cde-d698-4c01-b221-33c158999e60-console-oauth-config\") pod \"ee767cde-d698-4c01-b221-33c158999e60\" (UID: \"ee767cde-d698-4c01-b221-33c158999e60\") " Dec 10 12:29:31 crc kubenswrapper[4689]: I1210 12:29:31.548304 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/ee767cde-d698-4c01-b221-33c158999e60-console-config\") pod \"ee767cde-d698-4c01-b221-33c158999e60\" (UID: \"ee767cde-d698-4c01-b221-33c158999e60\") " Dec 10 12:29:31 crc kubenswrapper[4689]: I1210 12:29:31.550026 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ee767cde-d698-4c01-b221-33c158999e60-console-config" (OuterVolumeSpecName: "console-config") pod "ee767cde-d698-4c01-b221-33c158999e60" (UID: "ee767cde-d698-4c01-b221-33c158999e60"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 12:29:31 crc kubenswrapper[4689]: I1210 12:29:31.550183 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ee767cde-d698-4c01-b221-33c158999e60-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "ee767cde-d698-4c01-b221-33c158999e60" (UID: "ee767cde-d698-4c01-b221-33c158999e60"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 12:29:31 crc kubenswrapper[4689]: I1210 12:29:31.551513 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ee767cde-d698-4c01-b221-33c158999e60-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "ee767cde-d698-4c01-b221-33c158999e60" (UID: "ee767cde-d698-4c01-b221-33c158999e60"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 12:29:31 crc kubenswrapper[4689]: I1210 12:29:31.552220 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ee767cde-d698-4c01-b221-33c158999e60-service-ca" (OuterVolumeSpecName: "service-ca") pod "ee767cde-d698-4c01-b221-33c158999e60" (UID: "ee767cde-d698-4c01-b221-33c158999e60"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 12:29:31 crc kubenswrapper[4689]: I1210 12:29:31.556744 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ee767cde-d698-4c01-b221-33c158999e60-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "ee767cde-d698-4c01-b221-33c158999e60" (UID: "ee767cde-d698-4c01-b221-33c158999e60"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 12:29:31 crc kubenswrapper[4689]: I1210 12:29:31.557192 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ee767cde-d698-4c01-b221-33c158999e60-kube-api-access-4hvcm" (OuterVolumeSpecName: "kube-api-access-4hvcm") pod "ee767cde-d698-4c01-b221-33c158999e60" (UID: "ee767cde-d698-4c01-b221-33c158999e60"). InnerVolumeSpecName "kube-api-access-4hvcm". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 12:29:31 crc kubenswrapper[4689]: I1210 12:29:31.557253 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ee767cde-d698-4c01-b221-33c158999e60-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "ee767cde-d698-4c01-b221-33c158999e60" (UID: "ee767cde-d698-4c01-b221-33c158999e60"). InnerVolumeSpecName "console-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 12:29:31 crc kubenswrapper[4689]: I1210 12:29:31.649829 4689 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/ee767cde-d698-4c01-b221-33c158999e60-console-config\") on node \"crc\" DevicePath \"\"" Dec 10 12:29:31 crc kubenswrapper[4689]: I1210 12:29:31.649859 4689 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4hvcm\" (UniqueName: \"kubernetes.io/projected/ee767cde-d698-4c01-b221-33c158999e60-kube-api-access-4hvcm\") on node \"crc\" DevicePath \"\"" Dec 10 12:29:31 crc kubenswrapper[4689]: I1210 12:29:31.649870 4689 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/ee767cde-d698-4c01-b221-33c158999e60-service-ca\") on node \"crc\" DevicePath \"\"" Dec 10 12:29:31 crc kubenswrapper[4689]: I1210 12:29:31.649878 4689 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ee767cde-d698-4c01-b221-33c158999e60-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 10 12:29:31 crc kubenswrapper[4689]: I1210 12:29:31.649886 4689 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/ee767cde-d698-4c01-b221-33c158999e60-console-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 10 12:29:31 crc kubenswrapper[4689]: I1210 12:29:31.649894 4689 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/ee767cde-d698-4c01-b221-33c158999e60-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 10 12:29:31 crc kubenswrapper[4689]: I1210 12:29:31.649901 4689 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/ee767cde-d698-4c01-b221-33c158999e60-console-oauth-config\") on node \"crc\" DevicePath \"\"" Dec 10 12:29:32 crc kubenswrapper[4689]: I1210 12:29:32.205134 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-9s8ps_ee767cde-d698-4c01-b221-33c158999e60/console/0.log" Dec 10 12:29:32 crc kubenswrapper[4689]: I1210 12:29:32.205214 4689 generic.go:334] "Generic (PLEG): container finished" podID="ee767cde-d698-4c01-b221-33c158999e60" containerID="d6b376f542a0c4b8bf88f46122cb515ce1f7e0abf62053f50a340c26873a392f" exitCode=2 Dec 10 12:29:32 crc kubenswrapper[4689]: I1210 12:29:32.205266 4689 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-9s8ps" Dec 10 12:29:32 crc kubenswrapper[4689]: I1210 12:29:32.205308 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-9s8ps" event={"ID":"ee767cde-d698-4c01-b221-33c158999e60","Type":"ContainerDied","Data":"d6b376f542a0c4b8bf88f46122cb515ce1f7e0abf62053f50a340c26873a392f"} Dec 10 12:29:32 crc kubenswrapper[4689]: I1210 12:29:32.205345 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-9s8ps" event={"ID":"ee767cde-d698-4c01-b221-33c158999e60","Type":"ContainerDied","Data":"79e906509dbfcb43df4d01e4cfc0cb9a93910ebd6d7c5e41734275fed0a2a442"} Dec 10 12:29:32 crc kubenswrapper[4689]: I1210 12:29:32.205373 4689 scope.go:117] "RemoveContainer" containerID="d6b376f542a0c4b8bf88f46122cb515ce1f7e0abf62053f50a340c26873a392f" Dec 10 12:29:32 crc kubenswrapper[4689]: I1210 12:29:32.209462 4689 generic.go:334] "Generic (PLEG): container finished" podID="c79ff6a1-8c4b-4d40-af83-b6aaa022f1fb" containerID="df26edd320aa3b62e8a3cac5425a6bca8d93e44da9ca6f2ebfe994b57c848609" exitCode=0 Dec 10 12:29:32 crc kubenswrapper[4689]: I1210 12:29:32.209517 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83frvcd" event={"ID":"c79ff6a1-8c4b-4d40-af83-b6aaa022f1fb","Type":"ContainerDied","Data":"df26edd320aa3b62e8a3cac5425a6bca8d93e44da9ca6f2ebfe994b57c848609"} Dec 10 12:29:32 crc kubenswrapper[4689]: I1210 12:29:32.238105 4689 scope.go:117] "RemoveContainer" containerID="d6b376f542a0c4b8bf88f46122cb515ce1f7e0abf62053f50a340c26873a392f" Dec 10 12:29:32 crc kubenswrapper[4689]: E1210 12:29:32.239005 4689 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d6b376f542a0c4b8bf88f46122cb515ce1f7e0abf62053f50a340c26873a392f\": container with ID starting with d6b376f542a0c4b8bf88f46122cb515ce1f7e0abf62053f50a340c26873a392f not found: ID does not exist" containerID="d6b376f542a0c4b8bf88f46122cb515ce1f7e0abf62053f50a340c26873a392f" Dec 10 12:29:32 crc kubenswrapper[4689]: I1210 12:29:32.239057 4689 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d6b376f542a0c4b8bf88f46122cb515ce1f7e0abf62053f50a340c26873a392f"} err="failed to get container status \"d6b376f542a0c4b8bf88f46122cb515ce1f7e0abf62053f50a340c26873a392f\": rpc error: code = NotFound desc = could not find container \"d6b376f542a0c4b8bf88f46122cb515ce1f7e0abf62053f50a340c26873a392f\": container with ID starting with d6b376f542a0c4b8bf88f46122cb515ce1f7e0abf62053f50a340c26873a392f not found: ID does not exist" Dec 10 12:29:32 crc kubenswrapper[4689]: I1210 12:29:32.245049 4689 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-9s8ps"] Dec 10 12:29:32 crc kubenswrapper[4689]: I1210 12:29:32.249333 4689 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-f9d7485db-9s8ps"] Dec 10 12:29:32 crc kubenswrapper[4689]: I1210 12:29:32.508224 4689 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ee767cde-d698-4c01-b221-33c158999e60" path="/var/lib/kubelet/pods/ee767cde-d698-4c01-b221-33c158999e60/volumes" Dec 10 12:29:34 crc kubenswrapper[4689]: I1210 12:29:34.774199 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-8tprt"] Dec 10 12:29:34 crc kubenswrapper[4689]: E1210 12:29:34.774908 
4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ee767cde-d698-4c01-b221-33c158999e60" containerName="console" Dec 10 12:29:34 crc kubenswrapper[4689]: I1210 12:29:34.774929 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="ee767cde-d698-4c01-b221-33c158999e60" containerName="console" Dec 10 12:29:34 crc kubenswrapper[4689]: I1210 12:29:34.775193 4689 memory_manager.go:354] "RemoveStaleState removing state" podUID="ee767cde-d698-4c01-b221-33c158999e60" containerName="console" Dec 10 12:29:34 crc kubenswrapper[4689]: I1210 12:29:34.781939 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8tprt" Dec 10 12:29:34 crc kubenswrapper[4689]: I1210 12:29:34.791239 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-8tprt"] Dec 10 12:29:34 crc kubenswrapper[4689]: I1210 12:29:34.793853 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f5dp7\" (UniqueName: \"kubernetes.io/projected/f6717b2d-79dd-4da5-9f43-12b9ecd31682-kube-api-access-f5dp7\") pod \"community-operators-8tprt\" (UID: \"f6717b2d-79dd-4da5-9f43-12b9ecd31682\") " pod="openshift-marketplace/community-operators-8tprt" Dec 10 12:29:34 crc kubenswrapper[4689]: I1210 12:29:34.793955 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f6717b2d-79dd-4da5-9f43-12b9ecd31682-catalog-content\") pod \"community-operators-8tprt\" (UID: \"f6717b2d-79dd-4da5-9f43-12b9ecd31682\") " pod="openshift-marketplace/community-operators-8tprt" Dec 10 12:29:34 crc kubenswrapper[4689]: I1210 12:29:34.794165 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f6717b2d-79dd-4da5-9f43-12b9ecd31682-utilities\") pod \"community-operators-8tprt\" (UID: \"f6717b2d-79dd-4da5-9f43-12b9ecd31682\") " pod="openshift-marketplace/community-operators-8tprt" Dec 10 12:29:34 crc kubenswrapper[4689]: I1210 12:29:34.895864 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f5dp7\" (UniqueName: \"kubernetes.io/projected/f6717b2d-79dd-4da5-9f43-12b9ecd31682-kube-api-access-f5dp7\") pod \"community-operators-8tprt\" (UID: \"f6717b2d-79dd-4da5-9f43-12b9ecd31682\") " pod="openshift-marketplace/community-operators-8tprt" Dec 10 12:29:34 crc kubenswrapper[4689]: I1210 12:29:34.896089 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f6717b2d-79dd-4da5-9f43-12b9ecd31682-catalog-content\") pod \"community-operators-8tprt\" (UID: \"f6717b2d-79dd-4da5-9f43-12b9ecd31682\") " pod="openshift-marketplace/community-operators-8tprt" Dec 10 12:29:34 crc kubenswrapper[4689]: I1210 12:29:34.896291 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f6717b2d-79dd-4da5-9f43-12b9ecd31682-utilities\") pod \"community-operators-8tprt\" (UID: \"f6717b2d-79dd-4da5-9f43-12b9ecd31682\") " pod="openshift-marketplace/community-operators-8tprt" Dec 10 12:29:34 crc kubenswrapper[4689]: I1210 12:29:34.896909 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/f6717b2d-79dd-4da5-9f43-12b9ecd31682-utilities\") pod \"community-operators-8tprt\" (UID: \"f6717b2d-79dd-4da5-9f43-12b9ecd31682\") " pod="openshift-marketplace/community-operators-8tprt" Dec 10 12:29:34 crc kubenswrapper[4689]: I1210 12:29:34.896907 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f6717b2d-79dd-4da5-9f43-12b9ecd31682-catalog-content\") pod \"community-operators-8tprt\" (UID: \"f6717b2d-79dd-4da5-9f43-12b9ecd31682\") " pod="openshift-marketplace/community-operators-8tprt" Dec 10 12:29:34 crc kubenswrapper[4689]: I1210 12:29:34.934004 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f5dp7\" (UniqueName: \"kubernetes.io/projected/f6717b2d-79dd-4da5-9f43-12b9ecd31682-kube-api-access-f5dp7\") pod \"community-operators-8tprt\" (UID: \"f6717b2d-79dd-4da5-9f43-12b9ecd31682\") " pod="openshift-marketplace/community-operators-8tprt" Dec 10 12:29:35 crc kubenswrapper[4689]: I1210 12:29:35.115626 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8tprt" Dec 10 12:29:35 crc kubenswrapper[4689]: I1210 12:29:35.253792 4689 generic.go:334] "Generic (PLEG): container finished" podID="c79ff6a1-8c4b-4d40-af83-b6aaa022f1fb" containerID="a7a703b9174f2f611aec1afcb35638016e3be27d07147edc2224e03be34543f2" exitCode=0 Dec 10 12:29:35 crc kubenswrapper[4689]: I1210 12:29:35.253835 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83frvcd" event={"ID":"c79ff6a1-8c4b-4d40-af83-b6aaa022f1fb","Type":"ContainerDied","Data":"a7a703b9174f2f611aec1afcb35638016e3be27d07147edc2224e03be34543f2"} Dec 10 12:29:35 crc kubenswrapper[4689]: I1210 12:29:35.382100 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-8tprt"] Dec 10 12:29:36 crc kubenswrapper[4689]: I1210 12:29:36.265410 4689 generic.go:334] "Generic (PLEG): container finished" podID="c79ff6a1-8c4b-4d40-af83-b6aaa022f1fb" containerID="dc86f5fcf8aa32f137e781e6931862339ffe41121ec96cb299448880937f416f" exitCode=0 Dec 10 12:29:36 crc kubenswrapper[4689]: I1210 12:29:36.265501 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83frvcd" event={"ID":"c79ff6a1-8c4b-4d40-af83-b6aaa022f1fb","Type":"ContainerDied","Data":"dc86f5fcf8aa32f137e781e6931862339ffe41121ec96cb299448880937f416f"} Dec 10 12:29:36 crc kubenswrapper[4689]: I1210 12:29:36.268585 4689 generic.go:334] "Generic (PLEG): container finished" podID="f6717b2d-79dd-4da5-9f43-12b9ecd31682" containerID="9a9333146d67c606d1151a69a480a20394aaad42c344bde95b769f23847379b6" exitCode=0 Dec 10 12:29:36 crc kubenswrapper[4689]: I1210 12:29:36.268630 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8tprt" event={"ID":"f6717b2d-79dd-4da5-9f43-12b9ecd31682","Type":"ContainerDied","Data":"9a9333146d67c606d1151a69a480a20394aaad42c344bde95b769f23847379b6"} Dec 10 12:29:36 crc kubenswrapper[4689]: I1210 12:29:36.268656 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8tprt" event={"ID":"f6717b2d-79dd-4da5-9f43-12b9ecd31682","Type":"ContainerStarted","Data":"e34a69366c0b97f62799b83a5b892b543ff6a85001570c258300096661debe3c"} Dec 10 12:29:37 crc kubenswrapper[4689]: I1210 
12:29:37.561784 4689 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83frvcd" Dec 10 12:29:37 crc kubenswrapper[4689]: I1210 12:29:37.736302 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/c79ff6a1-8c4b-4d40-af83-b6aaa022f1fb-util\") pod \"c79ff6a1-8c4b-4d40-af83-b6aaa022f1fb\" (UID: \"c79ff6a1-8c4b-4d40-af83-b6aaa022f1fb\") " Dec 10 12:29:37 crc kubenswrapper[4689]: I1210 12:29:37.736367 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zc2pn\" (UniqueName: \"kubernetes.io/projected/c79ff6a1-8c4b-4d40-af83-b6aaa022f1fb-kube-api-access-zc2pn\") pod \"c79ff6a1-8c4b-4d40-af83-b6aaa022f1fb\" (UID: \"c79ff6a1-8c4b-4d40-af83-b6aaa022f1fb\") " Dec 10 12:29:37 crc kubenswrapper[4689]: I1210 12:29:37.736404 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/c79ff6a1-8c4b-4d40-af83-b6aaa022f1fb-bundle\") pod \"c79ff6a1-8c4b-4d40-af83-b6aaa022f1fb\" (UID: \"c79ff6a1-8c4b-4d40-af83-b6aaa022f1fb\") " Dec 10 12:29:37 crc kubenswrapper[4689]: I1210 12:29:37.737610 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c79ff6a1-8c4b-4d40-af83-b6aaa022f1fb-bundle" (OuterVolumeSpecName: "bundle") pod "c79ff6a1-8c4b-4d40-af83-b6aaa022f1fb" (UID: "c79ff6a1-8c4b-4d40-af83-b6aaa022f1fb"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 10 12:29:37 crc kubenswrapper[4689]: I1210 12:29:37.742161 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c79ff6a1-8c4b-4d40-af83-b6aaa022f1fb-kube-api-access-zc2pn" (OuterVolumeSpecName: "kube-api-access-zc2pn") pod "c79ff6a1-8c4b-4d40-af83-b6aaa022f1fb" (UID: "c79ff6a1-8c4b-4d40-af83-b6aaa022f1fb"). InnerVolumeSpecName "kube-api-access-zc2pn". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 12:29:37 crc kubenswrapper[4689]: I1210 12:29:37.771856 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c79ff6a1-8c4b-4d40-af83-b6aaa022f1fb-util" (OuterVolumeSpecName: "util") pod "c79ff6a1-8c4b-4d40-af83-b6aaa022f1fb" (UID: "c79ff6a1-8c4b-4d40-af83-b6aaa022f1fb"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 10 12:29:37 crc kubenswrapper[4689]: I1210 12:29:37.838277 4689 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/c79ff6a1-8c4b-4d40-af83-b6aaa022f1fb-util\") on node \"crc\" DevicePath \"\"" Dec 10 12:29:37 crc kubenswrapper[4689]: I1210 12:29:37.838328 4689 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zc2pn\" (UniqueName: \"kubernetes.io/projected/c79ff6a1-8c4b-4d40-af83-b6aaa022f1fb-kube-api-access-zc2pn\") on node \"crc\" DevicePath \"\"" Dec 10 12:29:37 crc kubenswrapper[4689]: I1210 12:29:37.838352 4689 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/c79ff6a1-8c4b-4d40-af83-b6aaa022f1fb-bundle\") on node \"crc\" DevicePath \"\"" Dec 10 12:29:38 crc kubenswrapper[4689]: I1210 12:29:38.287282 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83frvcd" event={"ID":"c79ff6a1-8c4b-4d40-af83-b6aaa022f1fb","Type":"ContainerDied","Data":"4b41ece2ad396d9c6b27c26f2b8334aeef103890a35def802c87e6896c2ecbde"} Dec 10 12:29:38 crc kubenswrapper[4689]: I1210 12:29:38.287355 4689 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4b41ece2ad396d9c6b27c26f2b8334aeef103890a35def802c87e6896c2ecbde" Dec 10 12:29:38 crc kubenswrapper[4689]: I1210 12:29:38.287307 4689 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83frvcd" Dec 10 12:29:38 crc kubenswrapper[4689]: I1210 12:29:38.290309 4689 generic.go:334] "Generic (PLEG): container finished" podID="f6717b2d-79dd-4da5-9f43-12b9ecd31682" containerID="74ac3878ba7d58ded8fd8b8dd1afd1c6363eb93e3c90de11ea89af18aadee2c9" exitCode=0 Dec 10 12:29:38 crc kubenswrapper[4689]: I1210 12:29:38.290349 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8tprt" event={"ID":"f6717b2d-79dd-4da5-9f43-12b9ecd31682","Type":"ContainerDied","Data":"74ac3878ba7d58ded8fd8b8dd1afd1c6363eb93e3c90de11ea89af18aadee2c9"} Dec 10 12:29:39 crc kubenswrapper[4689]: I1210 12:29:39.300666 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8tprt" event={"ID":"f6717b2d-79dd-4da5-9f43-12b9ecd31682","Type":"ContainerStarted","Data":"c0d28143c06e8ddc62aaff9df9a0237bd4b375e4c1beda512122f394d60fb2e1"} Dec 10 12:29:39 crc kubenswrapper[4689]: I1210 12:29:39.329186 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-8tprt" podStartSLOduration=2.853486686 podStartE2EDuration="5.329161883s" podCreationTimestamp="2025-12-10 12:29:34 +0000 UTC" firstStartedPulling="2025-12-10 12:29:36.270853956 +0000 UTC m=+844.058935114" lastFinishedPulling="2025-12-10 12:29:38.746529133 +0000 UTC m=+846.534610311" observedRunningTime="2025-12-10 12:29:39.328458655 +0000 UTC m=+847.116539833" watchObservedRunningTime="2025-12-10 12:29:39.329161883 +0000 UTC m=+847.117243061" Dec 10 12:29:45 crc kubenswrapper[4689]: I1210 12:29:45.116146 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-8tprt" Dec 10 12:29:45 crc kubenswrapper[4689]: I1210 12:29:45.117155 4689 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openshift-marketplace/community-operators-8tprt" Dec 10 12:29:45 crc kubenswrapper[4689]: I1210 12:29:45.178013 4689 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-8tprt" Dec 10 12:29:45 crc kubenswrapper[4689]: I1210 12:29:45.395619 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-8tprt" Dec 10 12:29:47 crc kubenswrapper[4689]: I1210 12:29:47.157807 4689 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-8tprt"] Dec 10 12:29:47 crc kubenswrapper[4689]: I1210 12:29:47.362640 4689 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-8tprt" podUID="f6717b2d-79dd-4da5-9f43-12b9ecd31682" containerName="registry-server" containerID="cri-o://c0d28143c06e8ddc62aaff9df9a0237bd4b375e4c1beda512122f394d60fb2e1" gracePeriod=2 Dec 10 12:29:48 crc kubenswrapper[4689]: I1210 12:29:48.369730 4689 generic.go:334] "Generic (PLEG): container finished" podID="f6717b2d-79dd-4da5-9f43-12b9ecd31682" containerID="c0d28143c06e8ddc62aaff9df9a0237bd4b375e4c1beda512122f394d60fb2e1" exitCode=0 Dec 10 12:29:48 crc kubenswrapper[4689]: I1210 12:29:48.369824 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8tprt" event={"ID":"f6717b2d-79dd-4da5-9f43-12b9ecd31682","Type":"ContainerDied","Data":"c0d28143c06e8ddc62aaff9df9a0237bd4b375e4c1beda512122f394d60fb2e1"} Dec 10 12:29:48 crc kubenswrapper[4689]: I1210 12:29:48.425329 4689 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8tprt" Dec 10 12:29:48 crc kubenswrapper[4689]: I1210 12:29:48.518760 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f6717b2d-79dd-4da5-9f43-12b9ecd31682-utilities\") pod \"f6717b2d-79dd-4da5-9f43-12b9ecd31682\" (UID: \"f6717b2d-79dd-4da5-9f43-12b9ecd31682\") " Dec 10 12:29:48 crc kubenswrapper[4689]: I1210 12:29:48.518858 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f6717b2d-79dd-4da5-9f43-12b9ecd31682-catalog-content\") pod \"f6717b2d-79dd-4da5-9f43-12b9ecd31682\" (UID: \"f6717b2d-79dd-4da5-9f43-12b9ecd31682\") " Dec 10 12:29:48 crc kubenswrapper[4689]: I1210 12:29:48.518966 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f5dp7\" (UniqueName: \"kubernetes.io/projected/f6717b2d-79dd-4da5-9f43-12b9ecd31682-kube-api-access-f5dp7\") pod \"f6717b2d-79dd-4da5-9f43-12b9ecd31682\" (UID: \"f6717b2d-79dd-4da5-9f43-12b9ecd31682\") " Dec 10 12:29:48 crc kubenswrapper[4689]: I1210 12:29:48.519650 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f6717b2d-79dd-4da5-9f43-12b9ecd31682-utilities" (OuterVolumeSpecName: "utilities") pod "f6717b2d-79dd-4da5-9f43-12b9ecd31682" (UID: "f6717b2d-79dd-4da5-9f43-12b9ecd31682"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 10 12:29:48 crc kubenswrapper[4689]: I1210 12:29:48.535206 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f6717b2d-79dd-4da5-9f43-12b9ecd31682-kube-api-access-f5dp7" (OuterVolumeSpecName: "kube-api-access-f5dp7") pod "f6717b2d-79dd-4da5-9f43-12b9ecd31682" (UID: "f6717b2d-79dd-4da5-9f43-12b9ecd31682"). InnerVolumeSpecName "kube-api-access-f5dp7". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 12:29:48 crc kubenswrapper[4689]: I1210 12:29:48.574160 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f6717b2d-79dd-4da5-9f43-12b9ecd31682-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f6717b2d-79dd-4da5-9f43-12b9ecd31682" (UID: "f6717b2d-79dd-4da5-9f43-12b9ecd31682"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 10 12:29:48 crc kubenswrapper[4689]: I1210 12:29:48.621099 4689 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f6717b2d-79dd-4da5-9f43-12b9ecd31682-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 10 12:29:48 crc kubenswrapper[4689]: I1210 12:29:48.621128 4689 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f5dp7\" (UniqueName: \"kubernetes.io/projected/f6717b2d-79dd-4da5-9f43-12b9ecd31682-kube-api-access-f5dp7\") on node \"crc\" DevicePath \"\"" Dec 10 12:29:48 crc kubenswrapper[4689]: I1210 12:29:48.621139 4689 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f6717b2d-79dd-4da5-9f43-12b9ecd31682-utilities\") on node \"crc\" DevicePath \"\"" Dec 10 12:29:49 crc kubenswrapper[4689]: I1210 12:29:49.393187 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8tprt" event={"ID":"f6717b2d-79dd-4da5-9f43-12b9ecd31682","Type":"ContainerDied","Data":"e34a69366c0b97f62799b83a5b892b543ff6a85001570c258300096661debe3c"} Dec 10 12:29:49 crc kubenswrapper[4689]: I1210 12:29:49.393234 4689 scope.go:117] "RemoveContainer" containerID="c0d28143c06e8ddc62aaff9df9a0237bd4b375e4c1beda512122f394d60fb2e1" Dec 10 12:29:49 crc kubenswrapper[4689]: I1210 12:29:49.393365 4689 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-8tprt" Dec 10 12:29:49 crc kubenswrapper[4689]: I1210 12:29:49.409661 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-controller-manager-69d5767795-46q9h"] Dec 10 12:29:49 crc kubenswrapper[4689]: E1210 12:29:49.409865 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f6717b2d-79dd-4da5-9f43-12b9ecd31682" containerName="registry-server" Dec 10 12:29:49 crc kubenswrapper[4689]: I1210 12:29:49.409881 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="f6717b2d-79dd-4da5-9f43-12b9ecd31682" containerName="registry-server" Dec 10 12:29:49 crc kubenswrapper[4689]: E1210 12:29:49.409892 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c79ff6a1-8c4b-4d40-af83-b6aaa022f1fb" containerName="extract" Dec 10 12:29:49 crc kubenswrapper[4689]: I1210 12:29:49.409898 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="c79ff6a1-8c4b-4d40-af83-b6aaa022f1fb" containerName="extract" Dec 10 12:29:49 crc kubenswrapper[4689]: E1210 12:29:49.409908 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c79ff6a1-8c4b-4d40-af83-b6aaa022f1fb" containerName="util" Dec 10 12:29:49 crc kubenswrapper[4689]: I1210 12:29:49.409914 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="c79ff6a1-8c4b-4d40-af83-b6aaa022f1fb" containerName="util" Dec 10 12:29:49 crc kubenswrapper[4689]: E1210 12:29:49.409923 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f6717b2d-79dd-4da5-9f43-12b9ecd31682" containerName="extract-utilities" Dec 10 12:29:49 crc kubenswrapper[4689]: I1210 12:29:49.409929 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="f6717b2d-79dd-4da5-9f43-12b9ecd31682" containerName="extract-utilities" Dec 10 12:29:49 crc kubenswrapper[4689]: E1210 12:29:49.409938 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c79ff6a1-8c4b-4d40-af83-b6aaa022f1fb" containerName="pull" Dec 10 12:29:49 crc kubenswrapper[4689]: I1210 12:29:49.409944 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="c79ff6a1-8c4b-4d40-af83-b6aaa022f1fb" containerName="pull" Dec 10 12:29:49 crc kubenswrapper[4689]: E1210 12:29:49.409956 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f6717b2d-79dd-4da5-9f43-12b9ecd31682" containerName="extract-content" Dec 10 12:29:49 crc kubenswrapper[4689]: I1210 12:29:49.409963 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="f6717b2d-79dd-4da5-9f43-12b9ecd31682" containerName="extract-content" Dec 10 12:29:49 crc kubenswrapper[4689]: I1210 12:29:49.410065 4689 memory_manager.go:354] "RemoveStaleState removing state" podUID="f6717b2d-79dd-4da5-9f43-12b9ecd31682" containerName="registry-server" Dec 10 12:29:49 crc kubenswrapper[4689]: I1210 12:29:49.410076 4689 memory_manager.go:354] "RemoveStaleState removing state" podUID="c79ff6a1-8c4b-4d40-af83-b6aaa022f1fb" containerName="extract" Dec 10 12:29:49 crc kubenswrapper[4689]: I1210 12:29:49.410414 4689 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-69d5767795-46q9h" Dec 10 12:29:49 crc kubenswrapper[4689]: I1210 12:29:49.413917 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"kube-root-ca.crt" Dec 10 12:29:49 crc kubenswrapper[4689]: I1210 12:29:49.414029 4689 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-cert" Dec 10 12:29:49 crc kubenswrapper[4689]: I1210 12:29:49.415841 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"openshift-service-ca.crt" Dec 10 12:29:49 crc kubenswrapper[4689]: I1210 12:29:49.416015 4689 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"manager-account-dockercfg-28cgq" Dec 10 12:29:49 crc kubenswrapper[4689]: I1210 12:29:49.418453 4689 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-controller-manager-service-cert" Dec 10 12:29:49 crc kubenswrapper[4689]: I1210 12:29:49.420728 4689 scope.go:117] "RemoveContainer" containerID="74ac3878ba7d58ded8fd8b8dd1afd1c6363eb93e3c90de11ea89af18aadee2c9" Dec 10 12:29:49 crc kubenswrapper[4689]: I1210 12:29:49.429965 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-69d5767795-46q9h"] Dec 10 12:29:49 crc kubenswrapper[4689]: I1210 12:29:49.464214 4689 scope.go:117] "RemoveContainer" containerID="9a9333146d67c606d1151a69a480a20394aaad42c344bde95b769f23847379b6" Dec 10 12:29:49 crc kubenswrapper[4689]: I1210 12:29:49.488444 4689 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-8tprt"] Dec 10 12:29:49 crc kubenswrapper[4689]: I1210 12:29:49.515926 4689 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-8tprt"] Dec 10 12:29:49 crc kubenswrapper[4689]: I1210 12:29:49.531716 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/5c50fea7-15a9-4027-9f2f-14c3744c7533-webhook-cert\") pod \"metallb-operator-controller-manager-69d5767795-46q9h\" (UID: \"5c50fea7-15a9-4027-9f2f-14c3744c7533\") " pod="metallb-system/metallb-operator-controller-manager-69d5767795-46q9h" Dec 10 12:29:49 crc kubenswrapper[4689]: I1210 12:29:49.531786 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kq4ff\" (UniqueName: \"kubernetes.io/projected/5c50fea7-15a9-4027-9f2f-14c3744c7533-kube-api-access-kq4ff\") pod \"metallb-operator-controller-manager-69d5767795-46q9h\" (UID: \"5c50fea7-15a9-4027-9f2f-14c3744c7533\") " pod="metallb-system/metallb-operator-controller-manager-69d5767795-46q9h" Dec 10 12:29:49 crc kubenswrapper[4689]: I1210 12:29:49.531809 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/5c50fea7-15a9-4027-9f2f-14c3744c7533-apiservice-cert\") pod \"metallb-operator-controller-manager-69d5767795-46q9h\" (UID: \"5c50fea7-15a9-4027-9f2f-14c3744c7533\") " pod="metallb-system/metallb-operator-controller-manager-69d5767795-46q9h" Dec 10 12:29:49 crc kubenswrapper[4689]: I1210 12:29:49.632995 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kq4ff\" (UniqueName: 
\"kubernetes.io/projected/5c50fea7-15a9-4027-9f2f-14c3744c7533-kube-api-access-kq4ff\") pod \"metallb-operator-controller-manager-69d5767795-46q9h\" (UID: \"5c50fea7-15a9-4027-9f2f-14c3744c7533\") " pod="metallb-system/metallb-operator-controller-manager-69d5767795-46q9h" Dec 10 12:29:49 crc kubenswrapper[4689]: I1210 12:29:49.633043 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/5c50fea7-15a9-4027-9f2f-14c3744c7533-apiservice-cert\") pod \"metallb-operator-controller-manager-69d5767795-46q9h\" (UID: \"5c50fea7-15a9-4027-9f2f-14c3744c7533\") " pod="metallb-system/metallb-operator-controller-manager-69d5767795-46q9h" Dec 10 12:29:49 crc kubenswrapper[4689]: I1210 12:29:49.633092 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/5c50fea7-15a9-4027-9f2f-14c3744c7533-webhook-cert\") pod \"metallb-operator-controller-manager-69d5767795-46q9h\" (UID: \"5c50fea7-15a9-4027-9f2f-14c3744c7533\") " pod="metallb-system/metallb-operator-controller-manager-69d5767795-46q9h" Dec 10 12:29:49 crc kubenswrapper[4689]: I1210 12:29:49.648866 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/5c50fea7-15a9-4027-9f2f-14c3744c7533-webhook-cert\") pod \"metallb-operator-controller-manager-69d5767795-46q9h\" (UID: \"5c50fea7-15a9-4027-9f2f-14c3744c7533\") " pod="metallb-system/metallb-operator-controller-manager-69d5767795-46q9h" Dec 10 12:29:49 crc kubenswrapper[4689]: I1210 12:29:49.652376 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/5c50fea7-15a9-4027-9f2f-14c3744c7533-apiservice-cert\") pod \"metallb-operator-controller-manager-69d5767795-46q9h\" (UID: \"5c50fea7-15a9-4027-9f2f-14c3744c7533\") " pod="metallb-system/metallb-operator-controller-manager-69d5767795-46q9h" Dec 10 12:29:49 crc kubenswrapper[4689]: I1210 12:29:49.656232 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-webhook-server-5d9dc5cf54-24c7k"] Dec 10 12:29:49 crc kubenswrapper[4689]: I1210 12:29:49.656930 4689 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-5d9dc5cf54-24c7k" Dec 10 12:29:49 crc kubenswrapper[4689]: I1210 12:29:49.663415 4689 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-dockercfg-9b6m5" Dec 10 12:29:49 crc kubenswrapper[4689]: I1210 12:29:49.663646 4689 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Dec 10 12:29:49 crc kubenswrapper[4689]: I1210 12:29:49.663788 4689 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-service-cert" Dec 10 12:29:49 crc kubenswrapper[4689]: I1210 12:29:49.668610 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-5d9dc5cf54-24c7k"] Dec 10 12:29:49 crc kubenswrapper[4689]: I1210 12:29:49.673671 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kq4ff\" (UniqueName: \"kubernetes.io/projected/5c50fea7-15a9-4027-9f2f-14c3744c7533-kube-api-access-kq4ff\") pod \"metallb-operator-controller-manager-69d5767795-46q9h\" (UID: \"5c50fea7-15a9-4027-9f2f-14c3744c7533\") " pod="metallb-system/metallb-operator-controller-manager-69d5767795-46q9h" Dec 10 12:29:49 crc kubenswrapper[4689]: I1210 12:29:49.725641 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-69d5767795-46q9h" Dec 10 12:29:49 crc kubenswrapper[4689]: I1210 12:29:49.734113 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/9d6f0612-90c7-4c71-a751-850ab873444a-apiservice-cert\") pod \"metallb-operator-webhook-server-5d9dc5cf54-24c7k\" (UID: \"9d6f0612-90c7-4c71-a751-850ab873444a\") " pod="metallb-system/metallb-operator-webhook-server-5d9dc5cf54-24c7k" Dec 10 12:29:49 crc kubenswrapper[4689]: I1210 12:29:49.734167 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hg9rz\" (UniqueName: \"kubernetes.io/projected/9d6f0612-90c7-4c71-a751-850ab873444a-kube-api-access-hg9rz\") pod \"metallb-operator-webhook-server-5d9dc5cf54-24c7k\" (UID: \"9d6f0612-90c7-4c71-a751-850ab873444a\") " pod="metallb-system/metallb-operator-webhook-server-5d9dc5cf54-24c7k" Dec 10 12:29:49 crc kubenswrapper[4689]: I1210 12:29:49.734356 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/9d6f0612-90c7-4c71-a751-850ab873444a-webhook-cert\") pod \"metallb-operator-webhook-server-5d9dc5cf54-24c7k\" (UID: \"9d6f0612-90c7-4c71-a751-850ab873444a\") " pod="metallb-system/metallb-operator-webhook-server-5d9dc5cf54-24c7k" Dec 10 12:29:49 crc kubenswrapper[4689]: I1210 12:29:49.835320 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/9d6f0612-90c7-4c71-a751-850ab873444a-webhook-cert\") pod \"metallb-operator-webhook-server-5d9dc5cf54-24c7k\" (UID: \"9d6f0612-90c7-4c71-a751-850ab873444a\") " pod="metallb-system/metallb-operator-webhook-server-5d9dc5cf54-24c7k" Dec 10 12:29:49 crc kubenswrapper[4689]: I1210 12:29:49.835608 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/9d6f0612-90c7-4c71-a751-850ab873444a-apiservice-cert\") pod 
\"metallb-operator-webhook-server-5d9dc5cf54-24c7k\" (UID: \"9d6f0612-90c7-4c71-a751-850ab873444a\") " pod="metallb-system/metallb-operator-webhook-server-5d9dc5cf54-24c7k" Dec 10 12:29:49 crc kubenswrapper[4689]: I1210 12:29:49.835630 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hg9rz\" (UniqueName: \"kubernetes.io/projected/9d6f0612-90c7-4c71-a751-850ab873444a-kube-api-access-hg9rz\") pod \"metallb-operator-webhook-server-5d9dc5cf54-24c7k\" (UID: \"9d6f0612-90c7-4c71-a751-850ab873444a\") " pod="metallb-system/metallb-operator-webhook-server-5d9dc5cf54-24c7k" Dec 10 12:29:49 crc kubenswrapper[4689]: I1210 12:29:49.839137 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/9d6f0612-90c7-4c71-a751-850ab873444a-webhook-cert\") pod \"metallb-operator-webhook-server-5d9dc5cf54-24c7k\" (UID: \"9d6f0612-90c7-4c71-a751-850ab873444a\") " pod="metallb-system/metallb-operator-webhook-server-5d9dc5cf54-24c7k" Dec 10 12:29:49 crc kubenswrapper[4689]: I1210 12:29:49.842607 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/9d6f0612-90c7-4c71-a751-850ab873444a-apiservice-cert\") pod \"metallb-operator-webhook-server-5d9dc5cf54-24c7k\" (UID: \"9d6f0612-90c7-4c71-a751-850ab873444a\") " pod="metallb-system/metallb-operator-webhook-server-5d9dc5cf54-24c7k" Dec 10 12:29:49 crc kubenswrapper[4689]: I1210 12:29:49.861612 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hg9rz\" (UniqueName: \"kubernetes.io/projected/9d6f0612-90c7-4c71-a751-850ab873444a-kube-api-access-hg9rz\") pod \"metallb-operator-webhook-server-5d9dc5cf54-24c7k\" (UID: \"9d6f0612-90c7-4c71-a751-850ab873444a\") " pod="metallb-system/metallb-operator-webhook-server-5d9dc5cf54-24c7k" Dec 10 12:29:49 crc kubenswrapper[4689]: I1210 12:29:49.968723 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-69d5767795-46q9h"] Dec 10 12:29:49 crc kubenswrapper[4689]: I1210 12:29:49.997595 4689 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-5d9dc5cf54-24c7k" Dec 10 12:29:50 crc kubenswrapper[4689]: I1210 12:29:50.231789 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-5d9dc5cf54-24c7k"] Dec 10 12:29:50 crc kubenswrapper[4689]: W1210 12:29:50.237089 4689 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9d6f0612_90c7_4c71_a751_850ab873444a.slice/crio-34a633612c3b3d4dbc5b838f133f7cdeb51678860b4f28ce1d5ae087be1cd620 WatchSource:0}: Error finding container 34a633612c3b3d4dbc5b838f133f7cdeb51678860b4f28ce1d5ae087be1cd620: Status 404 returned error can't find the container with id 34a633612c3b3d4dbc5b838f133f7cdeb51678860b4f28ce1d5ae087be1cd620 Dec 10 12:29:50 crc kubenswrapper[4689]: I1210 12:29:50.399382 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-5d9dc5cf54-24c7k" event={"ID":"9d6f0612-90c7-4c71-a751-850ab873444a","Type":"ContainerStarted","Data":"34a633612c3b3d4dbc5b838f133f7cdeb51678860b4f28ce1d5ae087be1cd620"} Dec 10 12:29:50 crc kubenswrapper[4689]: I1210 12:29:50.402353 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-69d5767795-46q9h" event={"ID":"5c50fea7-15a9-4027-9f2f-14c3744c7533","Type":"ContainerStarted","Data":"bf899a7e5fa9e1e76f5dc4b924c27806da7fc8c2eb567064f939df3dbcc1344c"} Dec 10 12:29:50 crc kubenswrapper[4689]: I1210 12:29:50.508085 4689 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f6717b2d-79dd-4da5-9f43-12b9ecd31682" path="/var/lib/kubelet/pods/f6717b2d-79dd-4da5-9f43-12b9ecd31682/volumes" Dec 10 12:29:55 crc kubenswrapper[4689]: I1210 12:29:55.431674 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-5d9dc5cf54-24c7k" event={"ID":"9d6f0612-90c7-4c71-a751-850ab873444a","Type":"ContainerStarted","Data":"e312ff476fc439c98847422f38b53287c6671c2313f9c55848c0e43b71147976"} Dec 10 12:29:55 crc kubenswrapper[4689]: I1210 12:29:55.432390 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-webhook-server-5d9dc5cf54-24c7k" Dec 10 12:29:55 crc kubenswrapper[4689]: I1210 12:29:55.434489 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-69d5767795-46q9h" event={"ID":"5c50fea7-15a9-4027-9f2f-14c3744c7533","Type":"ContainerStarted","Data":"fbc8ce91958e69d218e12dd33774aefc301c7214c95ed6c38fd1ae9ac9e5a159"} Dec 10 12:29:55 crc kubenswrapper[4689]: I1210 12:29:55.434749 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-69d5767795-46q9h" Dec 10 12:29:55 crc kubenswrapper[4689]: I1210 12:29:55.453541 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-webhook-server-5d9dc5cf54-24c7k" podStartSLOduration=2.317661151 podStartE2EDuration="6.45352149s" podCreationTimestamp="2025-12-10 12:29:49 +0000 UTC" firstStartedPulling="2025-12-10 12:29:50.24056718 +0000 UTC m=+858.028648338" lastFinishedPulling="2025-12-10 12:29:54.376427529 +0000 UTC m=+862.164508677" observedRunningTime="2025-12-10 12:29:55.449656903 +0000 UTC m=+863.237738051" watchObservedRunningTime="2025-12-10 12:29:55.45352149 +0000 UTC m=+863.241602638" Dec 10 12:29:55 crc kubenswrapper[4689]: I1210 
12:29:55.476473 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-controller-manager-69d5767795-46q9h" podStartSLOduration=3.677657402 podStartE2EDuration="6.476446011s" podCreationTimestamp="2025-12-10 12:29:49 +0000 UTC" firstStartedPulling="2025-12-10 12:29:49.985440683 +0000 UTC m=+857.773521821" lastFinishedPulling="2025-12-10 12:29:52.784229272 +0000 UTC m=+860.572310430" observedRunningTime="2025-12-10 12:29:55.471643182 +0000 UTC m=+863.259724350" watchObservedRunningTime="2025-12-10 12:29:55.476446011 +0000 UTC m=+863.264527169" Dec 10 12:30:00 crc kubenswrapper[4689]: I1210 12:30:00.170908 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29422830-hgkfv"] Dec 10 12:30:00 crc kubenswrapper[4689]: I1210 12:30:00.172115 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29422830-hgkfv" Dec 10 12:30:00 crc kubenswrapper[4689]: I1210 12:30:00.175196 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Dec 10 12:30:00 crc kubenswrapper[4689]: I1210 12:30:00.175491 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Dec 10 12:30:00 crc kubenswrapper[4689]: I1210 12:30:00.193586 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29422830-hgkfv"] Dec 10 12:30:00 crc kubenswrapper[4689]: I1210 12:30:00.280659 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/56a5da38-7fbb-464a-91ee-1d0293d3f656-secret-volume\") pod \"collect-profiles-29422830-hgkfv\" (UID: \"56a5da38-7fbb-464a-91ee-1d0293d3f656\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29422830-hgkfv" Dec 10 12:30:00 crc kubenswrapper[4689]: I1210 12:30:00.280917 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/56a5da38-7fbb-464a-91ee-1d0293d3f656-config-volume\") pod \"collect-profiles-29422830-hgkfv\" (UID: \"56a5da38-7fbb-464a-91ee-1d0293d3f656\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29422830-hgkfv" Dec 10 12:30:00 crc kubenswrapper[4689]: I1210 12:30:00.281025 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xzhjz\" (UniqueName: \"kubernetes.io/projected/56a5da38-7fbb-464a-91ee-1d0293d3f656-kube-api-access-xzhjz\") pod \"collect-profiles-29422830-hgkfv\" (UID: \"56a5da38-7fbb-464a-91ee-1d0293d3f656\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29422830-hgkfv" Dec 10 12:30:00 crc kubenswrapper[4689]: I1210 12:30:00.381923 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/56a5da38-7fbb-464a-91ee-1d0293d3f656-secret-volume\") pod \"collect-profiles-29422830-hgkfv\" (UID: \"56a5da38-7fbb-464a-91ee-1d0293d3f656\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29422830-hgkfv" Dec 10 12:30:00 crc kubenswrapper[4689]: I1210 12:30:00.382252 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: 
\"kubernetes.io/configmap/56a5da38-7fbb-464a-91ee-1d0293d3f656-config-volume\") pod \"collect-profiles-29422830-hgkfv\" (UID: \"56a5da38-7fbb-464a-91ee-1d0293d3f656\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29422830-hgkfv" Dec 10 12:30:00 crc kubenswrapper[4689]: I1210 12:30:00.382378 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xzhjz\" (UniqueName: \"kubernetes.io/projected/56a5da38-7fbb-464a-91ee-1d0293d3f656-kube-api-access-xzhjz\") pod \"collect-profiles-29422830-hgkfv\" (UID: \"56a5da38-7fbb-464a-91ee-1d0293d3f656\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29422830-hgkfv" Dec 10 12:30:00 crc kubenswrapper[4689]: I1210 12:30:00.383195 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/56a5da38-7fbb-464a-91ee-1d0293d3f656-config-volume\") pod \"collect-profiles-29422830-hgkfv\" (UID: \"56a5da38-7fbb-464a-91ee-1d0293d3f656\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29422830-hgkfv" Dec 10 12:30:00 crc kubenswrapper[4689]: I1210 12:30:00.387923 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/56a5da38-7fbb-464a-91ee-1d0293d3f656-secret-volume\") pod \"collect-profiles-29422830-hgkfv\" (UID: \"56a5da38-7fbb-464a-91ee-1d0293d3f656\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29422830-hgkfv" Dec 10 12:30:00 crc kubenswrapper[4689]: I1210 12:30:00.406850 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xzhjz\" (UniqueName: \"kubernetes.io/projected/56a5da38-7fbb-464a-91ee-1d0293d3f656-kube-api-access-xzhjz\") pod \"collect-profiles-29422830-hgkfv\" (UID: \"56a5da38-7fbb-464a-91ee-1d0293d3f656\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29422830-hgkfv" Dec 10 12:30:00 crc kubenswrapper[4689]: I1210 12:30:00.487762 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29422830-hgkfv" Dec 10 12:30:00 crc kubenswrapper[4689]: I1210 12:30:00.742372 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29422830-hgkfv"] Dec 10 12:30:01 crc kubenswrapper[4689]: I1210 12:30:01.470401 4689 generic.go:334] "Generic (PLEG): container finished" podID="56a5da38-7fbb-464a-91ee-1d0293d3f656" containerID="6bc34bfa0f429f3f50da933ea953f1b4bb2bcdf2c3a5925f8b7e2b3cfeccc7e1" exitCode=0 Dec 10 12:30:01 crc kubenswrapper[4689]: I1210 12:30:01.470439 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29422830-hgkfv" event={"ID":"56a5da38-7fbb-464a-91ee-1d0293d3f656","Type":"ContainerDied","Data":"6bc34bfa0f429f3f50da933ea953f1b4bb2bcdf2c3a5925f8b7e2b3cfeccc7e1"} Dec 10 12:30:01 crc kubenswrapper[4689]: I1210 12:30:01.470461 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29422830-hgkfv" event={"ID":"56a5da38-7fbb-464a-91ee-1d0293d3f656","Type":"ContainerStarted","Data":"5cc45cb0e8d18ccc72b727d038ae7d2e50d3aa0b4605052833e96bfb6c8a4463"} Dec 10 12:30:02 crc kubenswrapper[4689]: I1210 12:30:02.743825 4689 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29422830-hgkfv" Dec 10 12:30:02 crc kubenswrapper[4689]: I1210 12:30:02.824519 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xzhjz\" (UniqueName: \"kubernetes.io/projected/56a5da38-7fbb-464a-91ee-1d0293d3f656-kube-api-access-xzhjz\") pod \"56a5da38-7fbb-464a-91ee-1d0293d3f656\" (UID: \"56a5da38-7fbb-464a-91ee-1d0293d3f656\") " Dec 10 12:30:02 crc kubenswrapper[4689]: I1210 12:30:02.824576 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/56a5da38-7fbb-464a-91ee-1d0293d3f656-config-volume\") pod \"56a5da38-7fbb-464a-91ee-1d0293d3f656\" (UID: \"56a5da38-7fbb-464a-91ee-1d0293d3f656\") " Dec 10 12:30:02 crc kubenswrapper[4689]: I1210 12:30:02.825294 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/56a5da38-7fbb-464a-91ee-1d0293d3f656-config-volume" (OuterVolumeSpecName: "config-volume") pod "56a5da38-7fbb-464a-91ee-1d0293d3f656" (UID: "56a5da38-7fbb-464a-91ee-1d0293d3f656"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 12:30:02 crc kubenswrapper[4689]: I1210 12:30:02.825366 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/56a5da38-7fbb-464a-91ee-1d0293d3f656-secret-volume\") pod \"56a5da38-7fbb-464a-91ee-1d0293d3f656\" (UID: \"56a5da38-7fbb-464a-91ee-1d0293d3f656\") " Dec 10 12:30:02 crc kubenswrapper[4689]: I1210 12:30:02.825571 4689 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/56a5da38-7fbb-464a-91ee-1d0293d3f656-config-volume\") on node \"crc\" DevicePath \"\"" Dec 10 12:30:02 crc kubenswrapper[4689]: I1210 12:30:02.829842 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/56a5da38-7fbb-464a-91ee-1d0293d3f656-kube-api-access-xzhjz" (OuterVolumeSpecName: "kube-api-access-xzhjz") pod "56a5da38-7fbb-464a-91ee-1d0293d3f656" (UID: "56a5da38-7fbb-464a-91ee-1d0293d3f656"). InnerVolumeSpecName "kube-api-access-xzhjz". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 12:30:02 crc kubenswrapper[4689]: I1210 12:30:02.835165 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/56a5da38-7fbb-464a-91ee-1d0293d3f656-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "56a5da38-7fbb-464a-91ee-1d0293d3f656" (UID: "56a5da38-7fbb-464a-91ee-1d0293d3f656"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 12:30:02 crc kubenswrapper[4689]: I1210 12:30:02.927494 4689 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xzhjz\" (UniqueName: \"kubernetes.io/projected/56a5da38-7fbb-464a-91ee-1d0293d3f656-kube-api-access-xzhjz\") on node \"crc\" DevicePath \"\"" Dec 10 12:30:02 crc kubenswrapper[4689]: I1210 12:30:02.927533 4689 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/56a5da38-7fbb-464a-91ee-1d0293d3f656-secret-volume\") on node \"crc\" DevicePath \"\"" Dec 10 12:30:03 crc kubenswrapper[4689]: I1210 12:30:03.481439 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29422830-hgkfv" event={"ID":"56a5da38-7fbb-464a-91ee-1d0293d3f656","Type":"ContainerDied","Data":"5cc45cb0e8d18ccc72b727d038ae7d2e50d3aa0b4605052833e96bfb6c8a4463"} Dec 10 12:30:03 crc kubenswrapper[4689]: I1210 12:30:03.481473 4689 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5cc45cb0e8d18ccc72b727d038ae7d2e50d3aa0b4605052833e96bfb6c8a4463" Dec 10 12:30:03 crc kubenswrapper[4689]: I1210 12:30:03.481482 4689 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29422830-hgkfv" Dec 10 12:30:10 crc kubenswrapper[4689]: I1210 12:30:10.003456 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-webhook-server-5d9dc5cf54-24c7k" Dec 10 12:30:29 crc kubenswrapper[4689]: I1210 12:30:29.728948 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-69d5767795-46q9h" Dec 10 12:30:30 crc kubenswrapper[4689]: I1210 12:30:30.476427 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-webhook-server-7fcb986d4-9pghn"] Dec 10 12:30:30 crc kubenswrapper[4689]: E1210 12:30:30.477104 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="56a5da38-7fbb-464a-91ee-1d0293d3f656" containerName="collect-profiles" Dec 10 12:30:30 crc kubenswrapper[4689]: I1210 12:30:30.477141 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="56a5da38-7fbb-464a-91ee-1d0293d3f656" containerName="collect-profiles" Dec 10 12:30:30 crc kubenswrapper[4689]: I1210 12:30:30.477441 4689 memory_manager.go:354] "RemoveStaleState removing state" podUID="56a5da38-7fbb-464a-91ee-1d0293d3f656" containerName="collect-profiles" Dec 10 12:30:30 crc kubenswrapper[4689]: I1210 12:30:30.478150 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-9pghn" Dec 10 12:30:30 crc kubenswrapper[4689]: I1210 12:30:30.480159 4689 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-daemon-dockercfg-w4kl8" Dec 10 12:30:30 crc kubenswrapper[4689]: I1210 12:30:30.482148 4689 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-webhook-server-cert" Dec 10 12:30:30 crc kubenswrapper[4689]: I1210 12:30:30.490776 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-9t496"] Dec 10 12:30:30 crc kubenswrapper[4689]: I1210 12:30:30.494635 4689 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-9t496" Dec 10 12:30:30 crc kubenswrapper[4689]: I1210 12:30:30.496601 4689 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-certs-secret" Dec 10 12:30:30 crc kubenswrapper[4689]: I1210 12:30:30.499444 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"frr-startup" Dec 10 12:30:30 crc kubenswrapper[4689]: I1210 12:30:30.510203 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7fcb986d4-9pghn"] Dec 10 12:30:30 crc kubenswrapper[4689]: I1210 12:30:30.561814 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/speaker-xx5ts"] Dec 10 12:30:30 crc kubenswrapper[4689]: I1210 12:30:30.562616 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-xx5ts" Dec 10 12:30:30 crc kubenswrapper[4689]: I1210 12:30:30.564429 4689 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-memberlist" Dec 10 12:30:30 crc kubenswrapper[4689]: I1210 12:30:30.564662 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"metallb-excludel2" Dec 10 12:30:30 crc kubenswrapper[4689]: I1210 12:30:30.564751 4689 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-dockercfg-s57ww" Dec 10 12:30:30 crc kubenswrapper[4689]: I1210 12:30:30.565768 4689 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-certs-secret" Dec 10 12:30:30 crc kubenswrapper[4689]: I1210 12:30:30.588740 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/3b6f82d6-ef74-4c0e-99ab-426f3b10334f-frr-sockets\") pod \"frr-k8s-9t496\" (UID: \"3b6f82d6-ef74-4c0e-99ab-426f3b10334f\") " pod="metallb-system/frr-k8s-9t496" Dec 10 12:30:30 crc kubenswrapper[4689]: I1210 12:30:30.588780 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/da40a320-7203-4d3b-bc0e-c9eb09a07898-cert\") pod \"frr-k8s-webhook-server-7fcb986d4-9pghn\" (UID: \"da40a320-7203-4d3b-bc0e-c9eb09a07898\") " pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-9pghn" Dec 10 12:30:30 crc kubenswrapper[4689]: I1210 12:30:30.588797 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kg6v6\" (UniqueName: \"kubernetes.io/projected/3b6f82d6-ef74-4c0e-99ab-426f3b10334f-kube-api-access-kg6v6\") pod \"frr-k8s-9t496\" (UID: \"3b6f82d6-ef74-4c0e-99ab-426f3b10334f\") " pod="metallb-system/frr-k8s-9t496" Dec 10 12:30:30 crc kubenswrapper[4689]: I1210 12:30:30.589421 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/3b6f82d6-ef74-4c0e-99ab-426f3b10334f-reloader\") pod \"frr-k8s-9t496\" (UID: \"3b6f82d6-ef74-4c0e-99ab-426f3b10334f\") " pod="metallb-system/frr-k8s-9t496" Dec 10 12:30:30 crc kubenswrapper[4689]: I1210 12:30:30.589474 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/3b6f82d6-ef74-4c0e-99ab-426f3b10334f-metrics-certs\") pod \"frr-k8s-9t496\" (UID: \"3b6f82d6-ef74-4c0e-99ab-426f3b10334f\") " pod="metallb-system/frr-k8s-9t496" Dec 10 12:30:30 crc kubenswrapper[4689]: 
I1210 12:30:30.589490 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/3b6f82d6-ef74-4c0e-99ab-426f3b10334f-frr-startup\") pod \"frr-k8s-9t496\" (UID: \"3b6f82d6-ef74-4c0e-99ab-426f3b10334f\") " pod="metallb-system/frr-k8s-9t496" Dec 10 12:30:30 crc kubenswrapper[4689]: I1210 12:30:30.589508 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/3b6f82d6-ef74-4c0e-99ab-426f3b10334f-metrics\") pod \"frr-k8s-9t496\" (UID: \"3b6f82d6-ef74-4c0e-99ab-426f3b10334f\") " pod="metallb-system/frr-k8s-9t496" Dec 10 12:30:30 crc kubenswrapper[4689]: I1210 12:30:30.589522 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/3b6f82d6-ef74-4c0e-99ab-426f3b10334f-frr-conf\") pod \"frr-k8s-9t496\" (UID: \"3b6f82d6-ef74-4c0e-99ab-426f3b10334f\") " pod="metallb-system/frr-k8s-9t496" Dec 10 12:30:30 crc kubenswrapper[4689]: I1210 12:30:30.589550 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gkft8\" (UniqueName: \"kubernetes.io/projected/da40a320-7203-4d3b-bc0e-c9eb09a07898-kube-api-access-gkft8\") pod \"frr-k8s-webhook-server-7fcb986d4-9pghn\" (UID: \"da40a320-7203-4d3b-bc0e-c9eb09a07898\") " pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-9pghn" Dec 10 12:30:30 crc kubenswrapper[4689]: I1210 12:30:30.590126 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/controller-f8648f98b-fw4x8"] Dec 10 12:30:30 crc kubenswrapper[4689]: I1210 12:30:30.590921 4689 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/controller-f8648f98b-fw4x8" Dec 10 12:30:30 crc kubenswrapper[4689]: I1210 12:30:30.601459 4689 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-certs-secret" Dec 10 12:30:30 crc kubenswrapper[4689]: I1210 12:30:30.606034 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-f8648f98b-fw4x8"] Dec 10 12:30:30 crc kubenswrapper[4689]: I1210 12:30:30.690334 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b8a0c114-08fb-4583-a9eb-09333023b0ed-metrics-certs\") pod \"speaker-xx5ts\" (UID: \"b8a0c114-08fb-4583-a9eb-09333023b0ed\") " pod="metallb-system/speaker-xx5ts" Dec 10 12:30:30 crc kubenswrapper[4689]: I1210 12:30:30.690400 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fxklr\" (UniqueName: \"kubernetes.io/projected/b8a0c114-08fb-4583-a9eb-09333023b0ed-kube-api-access-fxklr\") pod \"speaker-xx5ts\" (UID: \"b8a0c114-08fb-4583-a9eb-09333023b0ed\") " pod="metallb-system/speaker-xx5ts" Dec 10 12:30:30 crc kubenswrapper[4689]: I1210 12:30:30.690561 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/3b6f82d6-ef74-4c0e-99ab-426f3b10334f-reloader\") pod \"frr-k8s-9t496\" (UID: \"3b6f82d6-ef74-4c0e-99ab-426f3b10334f\") " pod="metallb-system/frr-k8s-9t496" Dec 10 12:30:30 crc kubenswrapper[4689]: I1210 12:30:30.690658 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/b8a0c114-08fb-4583-a9eb-09333023b0ed-metallb-excludel2\") pod \"speaker-xx5ts\" (UID: \"b8a0c114-08fb-4583-a9eb-09333023b0ed\") " pod="metallb-system/speaker-xx5ts" Dec 10 12:30:30 crc kubenswrapper[4689]: I1210 12:30:30.690725 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/3b6f82d6-ef74-4c0e-99ab-426f3b10334f-metrics-certs\") pod \"frr-k8s-9t496\" (UID: \"3b6f82d6-ef74-4c0e-99ab-426f3b10334f\") " pod="metallb-system/frr-k8s-9t496" Dec 10 12:30:30 crc kubenswrapper[4689]: I1210 12:30:30.690809 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/3b6f82d6-ef74-4c0e-99ab-426f3b10334f-frr-startup\") pod \"frr-k8s-9t496\" (UID: \"3b6f82d6-ef74-4c0e-99ab-426f3b10334f\") " pod="metallb-system/frr-k8s-9t496" Dec 10 12:30:30 crc kubenswrapper[4689]: I1210 12:30:30.690846 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g5b97\" (UniqueName: \"kubernetes.io/projected/1f684039-0afb-44b1-a206-83d818ab3f9b-kube-api-access-g5b97\") pod \"controller-f8648f98b-fw4x8\" (UID: \"1f684039-0afb-44b1-a206-83d818ab3f9b\") " pod="metallb-system/controller-f8648f98b-fw4x8" Dec 10 12:30:30 crc kubenswrapper[4689]: I1210 12:30:30.690877 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/1f684039-0afb-44b1-a206-83d818ab3f9b-cert\") pod \"controller-f8648f98b-fw4x8\" (UID: \"1f684039-0afb-44b1-a206-83d818ab3f9b\") " pod="metallb-system/controller-f8648f98b-fw4x8" Dec 10 12:30:30 crc kubenswrapper[4689]: I1210 12:30:30.690908 4689 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/b8a0c114-08fb-4583-a9eb-09333023b0ed-memberlist\") pod \"speaker-xx5ts\" (UID: \"b8a0c114-08fb-4583-a9eb-09333023b0ed\") " pod="metallb-system/speaker-xx5ts" Dec 10 12:30:30 crc kubenswrapper[4689]: E1210 12:30:30.690932 4689 secret.go:188] Couldn't get secret metallb-system/frr-k8s-certs-secret: secret "frr-k8s-certs-secret" not found Dec 10 12:30:30 crc kubenswrapper[4689]: E1210 12:30:30.691019 4689 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3b6f82d6-ef74-4c0e-99ab-426f3b10334f-metrics-certs podName:3b6f82d6-ef74-4c0e-99ab-426f3b10334f nodeName:}" failed. No retries permitted until 2025-12-10 12:30:31.190994835 +0000 UTC m=+898.979075973 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/3b6f82d6-ef74-4c0e-99ab-426f3b10334f-metrics-certs") pod "frr-k8s-9t496" (UID: "3b6f82d6-ef74-4c0e-99ab-426f3b10334f") : secret "frr-k8s-certs-secret" not found Dec 10 12:30:30 crc kubenswrapper[4689]: I1210 12:30:30.690936 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/3b6f82d6-ef74-4c0e-99ab-426f3b10334f-frr-conf\") pod \"frr-k8s-9t496\" (UID: \"3b6f82d6-ef74-4c0e-99ab-426f3b10334f\") " pod="metallb-system/frr-k8s-9t496" Dec 10 12:30:30 crc kubenswrapper[4689]: I1210 12:30:30.691058 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/3b6f82d6-ef74-4c0e-99ab-426f3b10334f-reloader\") pod \"frr-k8s-9t496\" (UID: \"3b6f82d6-ef74-4c0e-99ab-426f3b10334f\") " pod="metallb-system/frr-k8s-9t496" Dec 10 12:30:30 crc kubenswrapper[4689]: I1210 12:30:30.691074 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/3b6f82d6-ef74-4c0e-99ab-426f3b10334f-metrics\") pod \"frr-k8s-9t496\" (UID: \"3b6f82d6-ef74-4c0e-99ab-426f3b10334f\") " pod="metallb-system/frr-k8s-9t496" Dec 10 12:30:30 crc kubenswrapper[4689]: I1210 12:30:30.691140 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gkft8\" (UniqueName: \"kubernetes.io/projected/da40a320-7203-4d3b-bc0e-c9eb09a07898-kube-api-access-gkft8\") pod \"frr-k8s-webhook-server-7fcb986d4-9pghn\" (UID: \"da40a320-7203-4d3b-bc0e-c9eb09a07898\") " pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-9pghn" Dec 10 12:30:30 crc kubenswrapper[4689]: I1210 12:30:30.691195 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/3b6f82d6-ef74-4c0e-99ab-426f3b10334f-frr-sockets\") pod \"frr-k8s-9t496\" (UID: \"3b6f82d6-ef74-4c0e-99ab-426f3b10334f\") " pod="metallb-system/frr-k8s-9t496" Dec 10 12:30:30 crc kubenswrapper[4689]: I1210 12:30:30.691237 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/3b6f82d6-ef74-4c0e-99ab-426f3b10334f-frr-conf\") pod \"frr-k8s-9t496\" (UID: \"3b6f82d6-ef74-4c0e-99ab-426f3b10334f\") " pod="metallb-system/frr-k8s-9t496" Dec 10 12:30:30 crc kubenswrapper[4689]: I1210 12:30:30.691238 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/da40a320-7203-4d3b-bc0e-c9eb09a07898-cert\") pod \"frr-k8s-webhook-server-7fcb986d4-9pghn\" (UID: 
\"da40a320-7203-4d3b-bc0e-c9eb09a07898\") " pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-9pghn" Dec 10 12:30:30 crc kubenswrapper[4689]: E1210 12:30:30.691285 4689 secret.go:188] Couldn't get secret metallb-system/frr-k8s-webhook-server-cert: secret "frr-k8s-webhook-server-cert" not found Dec 10 12:30:30 crc kubenswrapper[4689]: I1210 12:30:30.691291 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kg6v6\" (UniqueName: \"kubernetes.io/projected/3b6f82d6-ef74-4c0e-99ab-426f3b10334f-kube-api-access-kg6v6\") pod \"frr-k8s-9t496\" (UID: \"3b6f82d6-ef74-4c0e-99ab-426f3b10334f\") " pod="metallb-system/frr-k8s-9t496" Dec 10 12:30:30 crc kubenswrapper[4689]: E1210 12:30:30.691311 4689 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/da40a320-7203-4d3b-bc0e-c9eb09a07898-cert podName:da40a320-7203-4d3b-bc0e-c9eb09a07898 nodeName:}" failed. No retries permitted until 2025-12-10 12:30:31.191302392 +0000 UTC m=+898.979383530 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/da40a320-7203-4d3b-bc0e-c9eb09a07898-cert") pod "frr-k8s-webhook-server-7fcb986d4-9pghn" (UID: "da40a320-7203-4d3b-bc0e-c9eb09a07898") : secret "frr-k8s-webhook-server-cert" not found Dec 10 12:30:30 crc kubenswrapper[4689]: I1210 12:30:30.691349 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1f684039-0afb-44b1-a206-83d818ab3f9b-metrics-certs\") pod \"controller-f8648f98b-fw4x8\" (UID: \"1f684039-0afb-44b1-a206-83d818ab3f9b\") " pod="metallb-system/controller-f8648f98b-fw4x8" Dec 10 12:30:30 crc kubenswrapper[4689]: I1210 12:30:30.691750 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/3b6f82d6-ef74-4c0e-99ab-426f3b10334f-frr-sockets\") pod \"frr-k8s-9t496\" (UID: \"3b6f82d6-ef74-4c0e-99ab-426f3b10334f\") " pod="metallb-system/frr-k8s-9t496" Dec 10 12:30:30 crc kubenswrapper[4689]: I1210 12:30:30.691861 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/3b6f82d6-ef74-4c0e-99ab-426f3b10334f-metrics\") pod \"frr-k8s-9t496\" (UID: \"3b6f82d6-ef74-4c0e-99ab-426f3b10334f\") " pod="metallb-system/frr-k8s-9t496" Dec 10 12:30:30 crc kubenswrapper[4689]: I1210 12:30:30.692258 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/3b6f82d6-ef74-4c0e-99ab-426f3b10334f-frr-startup\") pod \"frr-k8s-9t496\" (UID: \"3b6f82d6-ef74-4c0e-99ab-426f3b10334f\") " pod="metallb-system/frr-k8s-9t496" Dec 10 12:30:30 crc kubenswrapper[4689]: I1210 12:30:30.715578 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gkft8\" (UniqueName: \"kubernetes.io/projected/da40a320-7203-4d3b-bc0e-c9eb09a07898-kube-api-access-gkft8\") pod \"frr-k8s-webhook-server-7fcb986d4-9pghn\" (UID: \"da40a320-7203-4d3b-bc0e-c9eb09a07898\") " pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-9pghn" Dec 10 12:30:30 crc kubenswrapper[4689]: I1210 12:30:30.716114 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kg6v6\" (UniqueName: \"kubernetes.io/projected/3b6f82d6-ef74-4c0e-99ab-426f3b10334f-kube-api-access-kg6v6\") pod \"frr-k8s-9t496\" (UID: \"3b6f82d6-ef74-4c0e-99ab-426f3b10334f\") " pod="metallb-system/frr-k8s-9t496" Dec 
10 12:30:30 crc kubenswrapper[4689]: I1210 12:30:30.792531 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/b8a0c114-08fb-4583-a9eb-09333023b0ed-metallb-excludel2\") pod \"speaker-xx5ts\" (UID: \"b8a0c114-08fb-4583-a9eb-09333023b0ed\") " pod="metallb-system/speaker-xx5ts" Dec 10 12:30:30 crc kubenswrapper[4689]: I1210 12:30:30.792594 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g5b97\" (UniqueName: \"kubernetes.io/projected/1f684039-0afb-44b1-a206-83d818ab3f9b-kube-api-access-g5b97\") pod \"controller-f8648f98b-fw4x8\" (UID: \"1f684039-0afb-44b1-a206-83d818ab3f9b\") " pod="metallb-system/controller-f8648f98b-fw4x8" Dec 10 12:30:30 crc kubenswrapper[4689]: I1210 12:30:30.792612 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/1f684039-0afb-44b1-a206-83d818ab3f9b-cert\") pod \"controller-f8648f98b-fw4x8\" (UID: \"1f684039-0afb-44b1-a206-83d818ab3f9b\") " pod="metallb-system/controller-f8648f98b-fw4x8" Dec 10 12:30:30 crc kubenswrapper[4689]: I1210 12:30:30.792628 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/b8a0c114-08fb-4583-a9eb-09333023b0ed-memberlist\") pod \"speaker-xx5ts\" (UID: \"b8a0c114-08fb-4583-a9eb-09333023b0ed\") " pod="metallb-system/speaker-xx5ts" Dec 10 12:30:30 crc kubenswrapper[4689]: I1210 12:30:30.792683 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1f684039-0afb-44b1-a206-83d818ab3f9b-metrics-certs\") pod \"controller-f8648f98b-fw4x8\" (UID: \"1f684039-0afb-44b1-a206-83d818ab3f9b\") " pod="metallb-system/controller-f8648f98b-fw4x8" Dec 10 12:30:30 crc kubenswrapper[4689]: I1210 12:30:30.792705 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b8a0c114-08fb-4583-a9eb-09333023b0ed-metrics-certs\") pod \"speaker-xx5ts\" (UID: \"b8a0c114-08fb-4583-a9eb-09333023b0ed\") " pod="metallb-system/speaker-xx5ts" Dec 10 12:30:30 crc kubenswrapper[4689]: I1210 12:30:30.792723 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fxklr\" (UniqueName: \"kubernetes.io/projected/b8a0c114-08fb-4583-a9eb-09333023b0ed-kube-api-access-fxklr\") pod \"speaker-xx5ts\" (UID: \"b8a0c114-08fb-4583-a9eb-09333023b0ed\") " pod="metallb-system/speaker-xx5ts" Dec 10 12:30:30 crc kubenswrapper[4689]: E1210 12:30:30.793413 4689 secret.go:188] Couldn't get secret metallb-system/speaker-certs-secret: secret "speaker-certs-secret" not found Dec 10 12:30:30 crc kubenswrapper[4689]: I1210 12:30:30.793448 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/b8a0c114-08fb-4583-a9eb-09333023b0ed-metallb-excludel2\") pod \"speaker-xx5ts\" (UID: \"b8a0c114-08fb-4583-a9eb-09333023b0ed\") " pod="metallb-system/speaker-xx5ts" Dec 10 12:30:30 crc kubenswrapper[4689]: E1210 12:30:30.793510 4689 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b8a0c114-08fb-4583-a9eb-09333023b0ed-metrics-certs podName:b8a0c114-08fb-4583-a9eb-09333023b0ed nodeName:}" failed. No retries permitted until 2025-12-10 12:30:31.293474702 +0000 UTC m=+899.081555860 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/b8a0c114-08fb-4583-a9eb-09333023b0ed-metrics-certs") pod "speaker-xx5ts" (UID: "b8a0c114-08fb-4583-a9eb-09333023b0ed") : secret "speaker-certs-secret" not found Dec 10 12:30:30 crc kubenswrapper[4689]: E1210 12:30:30.793549 4689 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Dec 10 12:30:30 crc kubenswrapper[4689]: E1210 12:30:30.793593 4689 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b8a0c114-08fb-4583-a9eb-09333023b0ed-memberlist podName:b8a0c114-08fb-4583-a9eb-09333023b0ed nodeName:}" failed. No retries permitted until 2025-12-10 12:30:31.293577225 +0000 UTC m=+899.081658363 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/b8a0c114-08fb-4583-a9eb-09333023b0ed-memberlist") pod "speaker-xx5ts" (UID: "b8a0c114-08fb-4583-a9eb-09333023b0ed") : secret "metallb-memberlist" not found Dec 10 12:30:30 crc kubenswrapper[4689]: I1210 12:30:30.796089 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/1f684039-0afb-44b1-a206-83d818ab3f9b-cert\") pod \"controller-f8648f98b-fw4x8\" (UID: \"1f684039-0afb-44b1-a206-83d818ab3f9b\") " pod="metallb-system/controller-f8648f98b-fw4x8" Dec 10 12:30:30 crc kubenswrapper[4689]: I1210 12:30:30.799515 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1f684039-0afb-44b1-a206-83d818ab3f9b-metrics-certs\") pod \"controller-f8648f98b-fw4x8\" (UID: \"1f684039-0afb-44b1-a206-83d818ab3f9b\") " pod="metallb-system/controller-f8648f98b-fw4x8" Dec 10 12:30:30 crc kubenswrapper[4689]: I1210 12:30:30.812543 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fxklr\" (UniqueName: \"kubernetes.io/projected/b8a0c114-08fb-4583-a9eb-09333023b0ed-kube-api-access-fxklr\") pod \"speaker-xx5ts\" (UID: \"b8a0c114-08fb-4583-a9eb-09333023b0ed\") " pod="metallb-system/speaker-xx5ts" Dec 10 12:30:30 crc kubenswrapper[4689]: I1210 12:30:30.818842 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g5b97\" (UniqueName: \"kubernetes.io/projected/1f684039-0afb-44b1-a206-83d818ab3f9b-kube-api-access-g5b97\") pod \"controller-f8648f98b-fw4x8\" (UID: \"1f684039-0afb-44b1-a206-83d818ab3f9b\") " pod="metallb-system/controller-f8648f98b-fw4x8" Dec 10 12:30:30 crc kubenswrapper[4689]: I1210 12:30:30.911451 4689 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/controller-f8648f98b-fw4x8" Dec 10 12:30:31 crc kubenswrapper[4689]: I1210 12:30:31.197430 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/3b6f82d6-ef74-4c0e-99ab-426f3b10334f-metrics-certs\") pod \"frr-k8s-9t496\" (UID: \"3b6f82d6-ef74-4c0e-99ab-426f3b10334f\") " pod="metallb-system/frr-k8s-9t496" Dec 10 12:30:31 crc kubenswrapper[4689]: I1210 12:30:31.197844 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/da40a320-7203-4d3b-bc0e-c9eb09a07898-cert\") pod \"frr-k8s-webhook-server-7fcb986d4-9pghn\" (UID: \"da40a320-7203-4d3b-bc0e-c9eb09a07898\") " pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-9pghn" Dec 10 12:30:31 crc kubenswrapper[4689]: I1210 12:30:31.202104 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/3b6f82d6-ef74-4c0e-99ab-426f3b10334f-metrics-certs\") pod \"frr-k8s-9t496\" (UID: \"3b6f82d6-ef74-4c0e-99ab-426f3b10334f\") " pod="metallb-system/frr-k8s-9t496" Dec 10 12:30:31 crc kubenswrapper[4689]: I1210 12:30:31.202577 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/da40a320-7203-4d3b-bc0e-c9eb09a07898-cert\") pod \"frr-k8s-webhook-server-7fcb986d4-9pghn\" (UID: \"da40a320-7203-4d3b-bc0e-c9eb09a07898\") " pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-9pghn" Dec 10 12:30:31 crc kubenswrapper[4689]: I1210 12:30:31.299577 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b8a0c114-08fb-4583-a9eb-09333023b0ed-metrics-certs\") pod \"speaker-xx5ts\" (UID: \"b8a0c114-08fb-4583-a9eb-09333023b0ed\") " pod="metallb-system/speaker-xx5ts" Dec 10 12:30:31 crc kubenswrapper[4689]: I1210 12:30:31.299741 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/b8a0c114-08fb-4583-a9eb-09333023b0ed-memberlist\") pod \"speaker-xx5ts\" (UID: \"b8a0c114-08fb-4583-a9eb-09333023b0ed\") " pod="metallb-system/speaker-xx5ts" Dec 10 12:30:31 crc kubenswrapper[4689]: E1210 12:30:31.299959 4689 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Dec 10 12:30:31 crc kubenswrapper[4689]: E1210 12:30:31.300069 4689 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b8a0c114-08fb-4583-a9eb-09333023b0ed-memberlist podName:b8a0c114-08fb-4583-a9eb-09333023b0ed nodeName:}" failed. No retries permitted until 2025-12-10 12:30:32.300046645 +0000 UTC m=+900.088127813 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/b8a0c114-08fb-4583-a9eb-09333023b0ed-memberlist") pod "speaker-xx5ts" (UID: "b8a0c114-08fb-4583-a9eb-09333023b0ed") : secret "metallb-memberlist" not found
Dec 10 12:30:31 crc kubenswrapper[4689]: I1210 12:30:31.306483 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b8a0c114-08fb-4583-a9eb-09333023b0ed-metrics-certs\") pod \"speaker-xx5ts\" (UID: \"b8a0c114-08fb-4583-a9eb-09333023b0ed\") " pod="metallb-system/speaker-xx5ts"
Dec 10 12:30:31 crc kubenswrapper[4689]: I1210 12:30:31.322313 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-f8648f98b-fw4x8"]
Dec 10 12:30:31 crc kubenswrapper[4689]: I1210 12:30:31.392958 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-9pghn"
Dec 10 12:30:31 crc kubenswrapper[4689]: I1210 12:30:31.411297 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-9t496"
Dec 10 12:30:31 crc kubenswrapper[4689]: I1210 12:30:31.659685 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-f8648f98b-fw4x8" event={"ID":"1f684039-0afb-44b1-a206-83d818ab3f9b","Type":"ContainerStarted","Data":"5b13728b6752fecc38bb0d393b6d1277354405e63dca0c40b12e8b81122629da"}
Dec 10 12:30:31 crc kubenswrapper[4689]: I1210 12:30:31.660360 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-f8648f98b-fw4x8" event={"ID":"1f684039-0afb-44b1-a206-83d818ab3f9b","Type":"ContainerStarted","Data":"2a31a78ca83bf6ccd9e55d595fe3ec3cb28b6ef09d1b127406e929ab2f7de5ac"}
Dec 10 12:30:31 crc kubenswrapper[4689]: I1210 12:30:31.661336 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-9t496" event={"ID":"3b6f82d6-ef74-4c0e-99ab-426f3b10334f","Type":"ContainerStarted","Data":"b723fa26d9567f41fb396370ab5b12d0874e254a2e49319b39ca1690282d4e5f"}
Dec 10 12:30:31 crc kubenswrapper[4689]: I1210 12:30:31.698748 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7fcb986d4-9pghn"]
Dec 10 12:30:31 crc kubenswrapper[4689]: W1210 12:30:31.703295 4689 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podda40a320_7203_4d3b_bc0e_c9eb09a07898.slice/crio-740b9e361f93939f796961a598b1720dddeb622f96ffc6ec81ab05f1fa37320a WatchSource:0}: Error finding container 740b9e361f93939f796961a598b1720dddeb622f96ffc6ec81ab05f1fa37320a: Status 404 returned error can't find the container with id 740b9e361f93939f796961a598b1720dddeb622f96ffc6ec81ab05f1fa37320a
Dec 10 12:30:32 crc kubenswrapper[4689]: I1210 12:30:32.319419 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/b8a0c114-08fb-4583-a9eb-09333023b0ed-memberlist\") pod \"speaker-xx5ts\" (UID: \"b8a0c114-08fb-4583-a9eb-09333023b0ed\") " pod="metallb-system/speaker-xx5ts"
Dec 10 12:30:32 crc kubenswrapper[4689]: I1210 12:30:32.330949 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/b8a0c114-08fb-4583-a9eb-09333023b0ed-memberlist\") pod \"speaker-xx5ts\" (UID: \"b8a0c114-08fb-4583-a9eb-09333023b0ed\") " pod="metallb-system/speaker-xx5ts"
Dec 10 12:30:32 crc kubenswrapper[4689]: I1210 12:30:32.373613 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-xx5ts"
Dec 10 12:30:32 crc kubenswrapper[4689]: I1210 12:30:32.672033 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-xx5ts" event={"ID":"b8a0c114-08fb-4583-a9eb-09333023b0ed","Type":"ContainerStarted","Data":"fc9182ebee8039c9cfd2f2456548e4b14b87ddd417eaf32ba35d15a912d03aae"}
Dec 10 12:30:32 crc kubenswrapper[4689]: I1210 12:30:32.672501 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-xx5ts" event={"ID":"b8a0c114-08fb-4583-a9eb-09333023b0ed","Type":"ContainerStarted","Data":"b8ef7a1684d033d43c0acc018e44c7c160e337d6f201b6dbc74bf30aed0d9b1d"}
Dec 10 12:30:32 crc kubenswrapper[4689]: I1210 12:30:32.676070 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-f8648f98b-fw4x8" event={"ID":"1f684039-0afb-44b1-a206-83d818ab3f9b","Type":"ContainerStarted","Data":"6b5d6d0b7051be5cb09d168050459cd4c463fac684c3800b9ed9335821a49567"}
Dec 10 12:30:32 crc kubenswrapper[4689]: I1210 12:30:32.676193 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/controller-f8648f98b-fw4x8"
Dec 10 12:30:32 crc kubenswrapper[4689]: I1210 12:30:32.677285 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-9pghn" event={"ID":"da40a320-7203-4d3b-bc0e-c9eb09a07898","Type":"ContainerStarted","Data":"740b9e361f93939f796961a598b1720dddeb622f96ffc6ec81ab05f1fa37320a"}
Dec 10 12:30:32 crc kubenswrapper[4689]: I1210 12:30:32.690649 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/controller-f8648f98b-fw4x8" podStartSLOduration=2.690631549 podStartE2EDuration="2.690631549s" podCreationTimestamp="2025-12-10 12:30:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 12:30:32.689173653 +0000 UTC m=+900.477254791" watchObservedRunningTime="2025-12-10 12:30:32.690631549 +0000 UTC m=+900.478712687"
Dec 10 12:30:33 crc kubenswrapper[4689]: I1210 12:30:33.694256 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-xx5ts" event={"ID":"b8a0c114-08fb-4583-a9eb-09333023b0ed","Type":"ContainerStarted","Data":"135e97be69c2677b0213cfb88d0e038f3441aa216a99e28c757d42d471f98383"}
Dec 10 12:30:33 crc kubenswrapper[4689]: I1210 12:30:33.694450 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-xx5ts"
Dec 10 12:30:37 crc kubenswrapper[4689]: I1210 12:30:37.177313 4689 patch_prober.go:28] interesting pod/machine-config-daemon-db6zk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 10 12:30:37 crc kubenswrapper[4689]: I1210 12:30:37.177865 4689 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-db6zk" podUID="a41ebdcd-910f-4669-992d-296e1a92162b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 10 12:30:40 crc kubenswrapper[4689]: I1210 12:30:40.748235 4689 generic.go:334] "Generic (PLEG): container finished" podID="3b6f82d6-ef74-4c0e-99ab-426f3b10334f" containerID="b9f5330f39ce93cd8778946670c660cf89b70ee11944aba085dc745605c58e27" exitCode=0
Dec 10 12:30:40 crc kubenswrapper[4689]: I1210 12:30:40.748335 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-9t496" event={"ID":"3b6f82d6-ef74-4c0e-99ab-426f3b10334f","Type":"ContainerDied","Data":"b9f5330f39ce93cd8778946670c660cf89b70ee11944aba085dc745605c58e27"}
Dec 10 12:30:40 crc kubenswrapper[4689]: I1210 12:30:40.750739 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-9pghn" event={"ID":"da40a320-7203-4d3b-bc0e-c9eb09a07898","Type":"ContainerStarted","Data":"daef7d272186a38b2b59352295e9d9a946c508b6eaa2f733cded7e98e4fa0d2d"}
Dec 10 12:30:40 crc kubenswrapper[4689]: I1210 12:30:40.750964 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-9pghn"
Dec 10 12:30:40 crc kubenswrapper[4689]: I1210 12:30:40.792367 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/speaker-xx5ts" podStartSLOduration=10.792343535 podStartE2EDuration="10.792343535s" podCreationTimestamp="2025-12-10 12:30:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 12:30:33.713495567 +0000 UTC m=+901.501576725" watchObservedRunningTime="2025-12-10 12:30:40.792343535 +0000 UTC m=+908.580424703"
Dec 10 12:30:40 crc kubenswrapper[4689]: I1210 12:30:40.819873 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-9pghn" podStartSLOduration=2.578389648 podStartE2EDuration="10.81984407s" podCreationTimestamp="2025-12-10 12:30:30 +0000 UTC" firstStartedPulling="2025-12-10 12:30:31.707019192 +0000 UTC m=+899.495100320" lastFinishedPulling="2025-12-10 12:30:39.948473564 +0000 UTC m=+907.736554742" observedRunningTime="2025-12-10 12:30:40.816807865 +0000 UTC m=+908.604889033" watchObservedRunningTime="2025-12-10 12:30:40.81984407 +0000 UTC m=+908.607925248"
Dec 10 12:30:41 crc kubenswrapper[4689]: I1210 12:30:41.772936 4689 generic.go:334] "Generic (PLEG): container finished" podID="3b6f82d6-ef74-4c0e-99ab-426f3b10334f" containerID="0854e63a1845b8297f420156f207ff88326361faa95773721fccf6b251a0340d" exitCode=0
Dec 10 12:30:41 crc kubenswrapper[4689]: I1210 12:30:41.773231 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-9t496" event={"ID":"3b6f82d6-ef74-4c0e-99ab-426f3b10334f","Type":"ContainerDied","Data":"0854e63a1845b8297f420156f207ff88326361faa95773721fccf6b251a0340d"}
Dec 10 12:30:42 crc kubenswrapper[4689]: I1210 12:30:42.378476 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-xx5ts"
Dec 10 12:30:42 crc kubenswrapper[4689]: I1210 12:30:42.782017 4689 generic.go:334] "Generic (PLEG): container finished" podID="3b6f82d6-ef74-4c0e-99ab-426f3b10334f" containerID="0f5d5381772b454563077d6e15fe5831f68a21406d4548837e573843cfd18dd7" exitCode=0
Dec 10 12:30:42 crc kubenswrapper[4689]: I1210 12:30:42.782085 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-9t496" event={"ID":"3b6f82d6-ef74-4c0e-99ab-426f3b10334f","Type":"ContainerDied","Data":"0f5d5381772b454563077d6e15fe5831f68a21406d4548837e573843cfd18dd7"}
Dec 10 12:30:43 crc kubenswrapper[4689]: I1210 12:30:43.799129 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-9t496" event={"ID":"3b6f82d6-ef74-4c0e-99ab-426f3b10334f","Type":"ContainerStarted","Data":"f47aaa1234f71dfbdb87d842eb0da421ec6c417b45f469fe9ffdf9852dc40ac1"}
Dec 10 12:30:43 crc kubenswrapper[4689]: I1210 12:30:43.799433 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-9t496" event={"ID":"3b6f82d6-ef74-4c0e-99ab-426f3b10334f","Type":"ContainerStarted","Data":"cb9953197c6a1ece0314cc2bc18dd7f6ac874ddd0f010cab10ec1fa4b300c25c"}
Dec 10 12:30:43 crc kubenswrapper[4689]: I1210 12:30:43.799444 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-9t496" event={"ID":"3b6f82d6-ef74-4c0e-99ab-426f3b10334f","Type":"ContainerStarted","Data":"c9b639056f2a2f97afa506c4e3a6d8edb77b0daa0298f2ae0e5b37ddf41dc93f"}
Dec 10 12:30:43 crc kubenswrapper[4689]: I1210 12:30:43.799455 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-9t496" event={"ID":"3b6f82d6-ef74-4c0e-99ab-426f3b10334f","Type":"ContainerStarted","Data":"8d339ec2d00b833b66cbbae4a1c9e8a8b296a3b81d334edbd78238dbd5569efb"}
Dec 10 12:30:43 crc kubenswrapper[4689]: I1210 12:30:43.799466 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-9t496" event={"ID":"3b6f82d6-ef74-4c0e-99ab-426f3b10334f","Type":"ContainerStarted","Data":"9c1554d016037fbc4fe36915c5ea7ff9b627cae1a5b3670db5638b999362af29"}
Dec 10 12:30:45 crc kubenswrapper[4689]: I1210 12:30:45.268580 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-4n9s5"]
Dec 10 12:30:45 crc kubenswrapper[4689]: I1210 12:30:45.269785 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-4n9s5"
Dec 10 12:30:45 crc kubenswrapper[4689]: I1210 12:30:45.272088 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-index-dockercfg-vhvgw"
Dec 10 12:30:45 crc kubenswrapper[4689]: I1210 12:30:45.272256 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"kube-root-ca.crt"
Dec 10 12:30:45 crc kubenswrapper[4689]: I1210 12:30:45.273862 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"openshift-service-ca.crt"
Dec 10 12:30:45 crc kubenswrapper[4689]: I1210 12:30:45.280654 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-4n9s5"]
Dec 10 12:30:45 crc kubenswrapper[4689]: I1210 12:30:45.424396 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7t9sl\" (UniqueName: \"kubernetes.io/projected/67581cb5-0363-4db7-b9e8-b912f5449346-kube-api-access-7t9sl\") pod \"openstack-operator-index-4n9s5\" (UID: \"67581cb5-0363-4db7-b9e8-b912f5449346\") " pod="openstack-operators/openstack-operator-index-4n9s5"
Dec 10 12:30:45 crc kubenswrapper[4689]: I1210 12:30:45.526006 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7t9sl\" (UniqueName: \"kubernetes.io/projected/67581cb5-0363-4db7-b9e8-b912f5449346-kube-api-access-7t9sl\") pod \"openstack-operator-index-4n9s5\" (UID: \"67581cb5-0363-4db7-b9e8-b912f5449346\") " pod="openstack-operators/openstack-operator-index-4n9s5"
Dec 10 12:30:45 crc kubenswrapper[4689]: I1210 12:30:45.544384 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7t9sl\" (UniqueName: \"kubernetes.io/projected/67581cb5-0363-4db7-b9e8-b912f5449346-kube-api-access-7t9sl\") pod \"openstack-operator-index-4n9s5\" (UID: \"67581cb5-0363-4db7-b9e8-b912f5449346\") " pod="openstack-operators/openstack-operator-index-4n9s5"
Dec 10 12:30:45 crc kubenswrapper[4689]: I1210 12:30:45.594745 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-4n9s5"
Dec 10 12:30:45 crc kubenswrapper[4689]: I1210 12:30:45.832687 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-9t496" event={"ID":"3b6f82d6-ef74-4c0e-99ab-426f3b10334f","Type":"ContainerStarted","Data":"d692a8a421500e70027f52bdd928aea86157ff4b2753942b8ffd64c493d75daf"}
Dec 10 12:30:45 crc kubenswrapper[4689]: I1210 12:30:45.832892 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-9t496"
Dec 10 12:30:45 crc kubenswrapper[4689]: I1210 12:30:45.875064 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-9t496" podStartSLOduration=7.492187745 podStartE2EDuration="15.875044876s" podCreationTimestamp="2025-12-10 12:30:30 +0000 UTC" firstStartedPulling="2025-12-10 12:30:31.570070894 +0000 UTC m=+899.358152032" lastFinishedPulling="2025-12-10 12:30:39.952927985 +0000 UTC m=+907.741009163" observedRunningTime="2025-12-10 12:30:45.868454571 +0000 UTC m=+913.656535749" watchObservedRunningTime="2025-12-10 12:30:45.875044876 +0000 UTC m=+913.663126034"
Dec 10 12:30:46 crc kubenswrapper[4689]: I1210 12:30:46.017385 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-4n9s5"]
Dec 10 12:30:46 crc kubenswrapper[4689]: W1210 12:30:46.022611 4689 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod67581cb5_0363_4db7_b9e8_b912f5449346.slice/crio-d39cfa57af0469adab39bd4c25da2dce5029504c8c01b416ecf923602085274d WatchSource:0}: Error finding container d39cfa57af0469adab39bd4c25da2dce5029504c8c01b416ecf923602085274d: Status 404 returned error can't find the container with id d39cfa57af0469adab39bd4c25da2dce5029504c8c01b416ecf923602085274d
Dec 10 12:30:46 crc kubenswrapper[4689]: I1210 12:30:46.411951 4689 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="metallb-system/frr-k8s-9t496"
Dec 10 12:30:46 crc kubenswrapper[4689]: I1210 12:30:46.446697 4689 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="metallb-system/frr-k8s-9t496"
Dec 10 12:30:46 crc kubenswrapper[4689]: I1210 12:30:46.839360 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-4n9s5" event={"ID":"67581cb5-0363-4db7-b9e8-b912f5449346","Type":"ContainerStarted","Data":"d39cfa57af0469adab39bd4c25da2dce5029504c8c01b416ecf923602085274d"}
Dec 10 12:30:48 crc kubenswrapper[4689]: I1210 12:30:48.635779 4689 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-4n9s5"]
Dec 10 12:30:48 crc kubenswrapper[4689]: I1210 12:30:48.855048 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-4n9s5" event={"ID":"67581cb5-0363-4db7-b9e8-b912f5449346","Type":"ContainerStarted","Data":"c74516fa1aab17b9d24393d3b4662f9ffb2c4ccb2cfb92c70cea119595a8b2d4"}
Dec 10 12:30:48 crc kubenswrapper[4689]: I1210 12:30:48.876451 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-4n9s5" podStartSLOduration=1.611257529 podStartE2EDuration="3.876429521s" podCreationTimestamp="2025-12-10 12:30:45 +0000 UTC" firstStartedPulling="2025-12-10 12:30:46.024266239 +0000 UTC m=+913.812347377" lastFinishedPulling="2025-12-10 12:30:48.289438231 +0000 UTC m=+916.077519369" observedRunningTime="2025-12-10 12:30:48.874170115 +0000 UTC m=+916.662251283" watchObservedRunningTime="2025-12-10 12:30:48.876429521 +0000 UTC m=+916.664510669"
Dec 10 12:30:49 crc kubenswrapper[4689]: I1210 12:30:49.245021 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-4crnk"]
Dec 10 12:30:49 crc kubenswrapper[4689]: I1210 12:30:49.246535 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-4crnk"
Dec 10 12:30:49 crc kubenswrapper[4689]: I1210 12:30:49.262228 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-4crnk"]
Dec 10 12:30:49 crc kubenswrapper[4689]: I1210 12:30:49.385338 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7mrs5\" (UniqueName: \"kubernetes.io/projected/baa246ec-275c-44ad-9e71-8aace1bf29b0-kube-api-access-7mrs5\") pod \"openstack-operator-index-4crnk\" (UID: \"baa246ec-275c-44ad-9e71-8aace1bf29b0\") " pod="openstack-operators/openstack-operator-index-4crnk"
Dec 10 12:30:49 crc kubenswrapper[4689]: I1210 12:30:49.486983 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7mrs5\" (UniqueName: \"kubernetes.io/projected/baa246ec-275c-44ad-9e71-8aace1bf29b0-kube-api-access-7mrs5\") pod \"openstack-operator-index-4crnk\" (UID: \"baa246ec-275c-44ad-9e71-8aace1bf29b0\") " pod="openstack-operators/openstack-operator-index-4crnk"
Dec 10 12:30:49 crc kubenswrapper[4689]: I1210 12:30:49.514504 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7mrs5\" (UniqueName: \"kubernetes.io/projected/baa246ec-275c-44ad-9e71-8aace1bf29b0-kube-api-access-7mrs5\") pod \"openstack-operator-index-4crnk\" (UID: \"baa246ec-275c-44ad-9e71-8aace1bf29b0\") " pod="openstack-operators/openstack-operator-index-4crnk"
Dec 10 12:30:49 crc kubenswrapper[4689]: I1210 12:30:49.590902 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-4crnk"
Dec 10 12:30:49 crc kubenswrapper[4689]: I1210 12:30:49.862520 4689 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/openstack-operator-index-4n9s5" podUID="67581cb5-0363-4db7-b9e8-b912f5449346" containerName="registry-server" containerID="cri-o://c74516fa1aab17b9d24393d3b4662f9ffb2c4ccb2cfb92c70cea119595a8b2d4" gracePeriod=2
Dec 10 12:30:50 crc kubenswrapper[4689]: I1210 12:30:50.037562 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-4crnk"]
Dec 10 12:30:50 crc kubenswrapper[4689]: W1210 12:30:50.048632 4689 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbaa246ec_275c_44ad_9e71_8aace1bf29b0.slice/crio-d3fc2d917dff3502a01118711eb5c98170bd590e93cf096e64073e0abb8c755f WatchSource:0}: Error finding container d3fc2d917dff3502a01118711eb5c98170bd590e93cf096e64073e0abb8c755f: Status 404 returned error can't find the container with id d3fc2d917dff3502a01118711eb5c98170bd590e93cf096e64073e0abb8c755f
Dec 10 12:30:50 crc kubenswrapper[4689]: I1210 12:30:50.231390 4689 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-4n9s5"
Dec 10 12:30:50 crc kubenswrapper[4689]: I1210 12:30:50.412644 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7t9sl\" (UniqueName: \"kubernetes.io/projected/67581cb5-0363-4db7-b9e8-b912f5449346-kube-api-access-7t9sl\") pod \"67581cb5-0363-4db7-b9e8-b912f5449346\" (UID: \"67581cb5-0363-4db7-b9e8-b912f5449346\") "
Dec 10 12:30:50 crc kubenswrapper[4689]: I1210 12:30:50.417032 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/67581cb5-0363-4db7-b9e8-b912f5449346-kube-api-access-7t9sl" (OuterVolumeSpecName: "kube-api-access-7t9sl") pod "67581cb5-0363-4db7-b9e8-b912f5449346" (UID: "67581cb5-0363-4db7-b9e8-b912f5449346"). InnerVolumeSpecName "kube-api-access-7t9sl". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 10 12:30:50 crc kubenswrapper[4689]: I1210 12:30:50.515826 4689 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7t9sl\" (UniqueName: \"kubernetes.io/projected/67581cb5-0363-4db7-b9e8-b912f5449346-kube-api-access-7t9sl\") on node \"crc\" DevicePath \"\""
Dec 10 12:30:50 crc kubenswrapper[4689]: I1210 12:30:50.873517 4689 generic.go:334] "Generic (PLEG): container finished" podID="67581cb5-0363-4db7-b9e8-b912f5449346" containerID="c74516fa1aab17b9d24393d3b4662f9ffb2c4ccb2cfb92c70cea119595a8b2d4" exitCode=0
Dec 10 12:30:50 crc kubenswrapper[4689]: I1210 12:30:50.873649 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-4n9s5" event={"ID":"67581cb5-0363-4db7-b9e8-b912f5449346","Type":"ContainerDied","Data":"c74516fa1aab17b9d24393d3b4662f9ffb2c4ccb2cfb92c70cea119595a8b2d4"}
Dec 10 12:30:50 crc kubenswrapper[4689]: I1210 12:30:50.873719 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-4n9s5" event={"ID":"67581cb5-0363-4db7-b9e8-b912f5449346","Type":"ContainerDied","Data":"d39cfa57af0469adab39bd4c25da2dce5029504c8c01b416ecf923602085274d"}
Dec 10 12:30:50 crc kubenswrapper[4689]: I1210 12:30:50.873749 4689 scope.go:117] "RemoveContainer" containerID="c74516fa1aab17b9d24393d3b4662f9ffb2c4ccb2cfb92c70cea119595a8b2d4"
Dec 10 12:30:50 crc kubenswrapper[4689]: I1210 12:30:50.874391 4689 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-4n9s5"
Dec 10 12:30:50 crc kubenswrapper[4689]: I1210 12:30:50.877142 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-4crnk" event={"ID":"baa246ec-275c-44ad-9e71-8aace1bf29b0","Type":"ContainerStarted","Data":"15c7f091725a05f523bbc22c4c250cad5d27434d08adaded5c4ec447ae7baeb9"}
Dec 10 12:30:50 crc kubenswrapper[4689]: I1210 12:30:50.877255 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-4crnk" event={"ID":"baa246ec-275c-44ad-9e71-8aace1bf29b0","Type":"ContainerStarted","Data":"d3fc2d917dff3502a01118711eb5c98170bd590e93cf096e64073e0abb8c755f"}
Dec 10 12:30:50 crc kubenswrapper[4689]: I1210 12:30:50.905959 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-4crnk" podStartSLOduration=1.848817447 podStartE2EDuration="1.905930052s" podCreationTimestamp="2025-12-10 12:30:49 +0000 UTC" firstStartedPulling="2025-12-10 12:30:50.053077767 +0000 UTC m=+917.841158915" lastFinishedPulling="2025-12-10 12:30:50.110190382 +0000 UTC m=+917.898271520" observedRunningTime="2025-12-10 12:30:50.902944948 +0000 UTC m=+918.691026146" watchObservedRunningTime="2025-12-10 12:30:50.905930052 +0000 UTC m=+918.694011230"
Dec 10 12:30:50 crc kubenswrapper[4689]: I1210 12:30:50.914762 4689 scope.go:117] "RemoveContainer" containerID="c74516fa1aab17b9d24393d3b4662f9ffb2c4ccb2cfb92c70cea119595a8b2d4"
Dec 10 12:30:50 crc kubenswrapper[4689]: E1210 12:30:50.915478 4689 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c74516fa1aab17b9d24393d3b4662f9ffb2c4ccb2cfb92c70cea119595a8b2d4\": container with ID starting with c74516fa1aab17b9d24393d3b4662f9ffb2c4ccb2cfb92c70cea119595a8b2d4 not found: ID does not exist" containerID="c74516fa1aab17b9d24393d3b4662f9ffb2c4ccb2cfb92c70cea119595a8b2d4"
Dec 10 12:30:50 crc kubenswrapper[4689]: I1210 12:30:50.915558 4689 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c74516fa1aab17b9d24393d3b4662f9ffb2c4ccb2cfb92c70cea119595a8b2d4"} err="failed to get container status \"c74516fa1aab17b9d24393d3b4662f9ffb2c4ccb2cfb92c70cea119595a8b2d4\": rpc error: code = NotFound desc = could not find container \"c74516fa1aab17b9d24393d3b4662f9ffb2c4ccb2cfb92c70cea119595a8b2d4\": container with ID starting with c74516fa1aab17b9d24393d3b4662f9ffb2c4ccb2cfb92c70cea119595a8b2d4 not found: ID does not exist"
Dec 10 12:30:50 crc kubenswrapper[4689]: I1210 12:30:50.916723 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/controller-f8648f98b-fw4x8"
Dec 10 12:30:50 crc kubenswrapper[4689]: I1210 12:30:50.923023 4689 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-4n9s5"]
Dec 10 12:30:50 crc kubenswrapper[4689]: I1210 12:30:50.928077 4689 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/openstack-operator-index-4n9s5"]
Dec 10 12:30:51 crc kubenswrapper[4689]: I1210 12:30:51.400370 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-9pghn"
Dec 10 12:30:52 crc kubenswrapper[4689]: I1210 12:30:52.509940 4689 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="67581cb5-0363-4db7-b9e8-b912f5449346" path="/var/lib/kubelet/pods/67581cb5-0363-4db7-b9e8-b912f5449346/volumes"
Dec 10 12:30:59 crc kubenswrapper[4689]: I1210 12:30:59.591564 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-index-4crnk"
Dec 10 12:30:59 crc kubenswrapper[4689]: I1210 12:30:59.592088 4689 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/openstack-operator-index-4crnk"
Dec 10 12:30:59 crc kubenswrapper[4689]: I1210 12:30:59.631276 4689 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/openstack-operator-index-4crnk"
Dec 10 12:30:59 crc kubenswrapper[4689]: I1210 12:30:59.978406 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-index-4crnk"
Dec 10 12:31:00 crc kubenswrapper[4689]: I1210 12:31:00.444327 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-xr8zc"]
Dec 10 12:31:00 crc kubenswrapper[4689]: E1210 12:31:00.445003 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="67581cb5-0363-4db7-b9e8-b912f5449346" containerName="registry-server"
Dec 10 12:31:00 crc kubenswrapper[4689]: I1210 12:31:00.445134 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="67581cb5-0363-4db7-b9e8-b912f5449346" containerName="registry-server"
Dec 10 12:31:00 crc kubenswrapper[4689]: I1210 12:31:00.445415 4689 memory_manager.go:354] "RemoveStaleState removing state" podUID="67581cb5-0363-4db7-b9e8-b912f5449346" containerName="registry-server"
Dec 10 12:31:00 crc kubenswrapper[4689]: I1210 12:31:00.446859 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-xr8zc"
Dec 10 12:31:00 crc kubenswrapper[4689]: I1210 12:31:00.453912 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-xr8zc"]
Dec 10 12:31:00 crc kubenswrapper[4689]: I1210 12:31:00.562088 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/635c2d5a-2f12-46cf-86e2-ec4f689039e8-catalog-content\") pod \"certified-operators-xr8zc\" (UID: \"635c2d5a-2f12-46cf-86e2-ec4f689039e8\") " pod="openshift-marketplace/certified-operators-xr8zc"
Dec 10 12:31:00 crc kubenswrapper[4689]: I1210 12:31:00.562399 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m7n74\" (UniqueName: \"kubernetes.io/projected/635c2d5a-2f12-46cf-86e2-ec4f689039e8-kube-api-access-m7n74\") pod \"certified-operators-xr8zc\" (UID: \"635c2d5a-2f12-46cf-86e2-ec4f689039e8\") " pod="openshift-marketplace/certified-operators-xr8zc"
Dec 10 12:31:00 crc kubenswrapper[4689]: I1210 12:31:00.562548 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/635c2d5a-2f12-46cf-86e2-ec4f689039e8-utilities\") pod \"certified-operators-xr8zc\" (UID: \"635c2d5a-2f12-46cf-86e2-ec4f689039e8\") " pod="openshift-marketplace/certified-operators-xr8zc"
Dec 10 12:31:00 crc kubenswrapper[4689]: I1210 12:31:00.664227 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/635c2d5a-2f12-46cf-86e2-ec4f689039e8-catalog-content\") pod \"certified-operators-xr8zc\" (UID: \"635c2d5a-2f12-46cf-86e2-ec4f689039e8\") " pod="openshift-marketplace/certified-operators-xr8zc"
Dec 10 12:31:00 crc kubenswrapper[4689]: I1210 12:31:00.664302 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m7n74\" (UniqueName: \"kubernetes.io/projected/635c2d5a-2f12-46cf-86e2-ec4f689039e8-kube-api-access-m7n74\") pod \"certified-operators-xr8zc\" (UID: \"635c2d5a-2f12-46cf-86e2-ec4f689039e8\") " pod="openshift-marketplace/certified-operators-xr8zc"
Dec 10 12:31:00 crc kubenswrapper[4689]: I1210 12:31:00.664331 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/635c2d5a-2f12-46cf-86e2-ec4f689039e8-utilities\") pod \"certified-operators-xr8zc\" (UID: \"635c2d5a-2f12-46cf-86e2-ec4f689039e8\") " pod="openshift-marketplace/certified-operators-xr8zc"
Dec 10 12:31:00 crc kubenswrapper[4689]: I1210 12:31:00.664819 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/635c2d5a-2f12-46cf-86e2-ec4f689039e8-catalog-content\") pod \"certified-operators-xr8zc\" (UID: \"635c2d5a-2f12-46cf-86e2-ec4f689039e8\") " pod="openshift-marketplace/certified-operators-xr8zc"
Dec 10 12:31:00 crc kubenswrapper[4689]: I1210 12:31:00.664953 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/635c2d5a-2f12-46cf-86e2-ec4f689039e8-utilities\") pod \"certified-operators-xr8zc\" (UID: \"635c2d5a-2f12-46cf-86e2-ec4f689039e8\") " pod="openshift-marketplace/certified-operators-xr8zc"
Dec 10 12:31:00 crc kubenswrapper[4689]: I1210 12:31:00.692570 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m7n74\" (UniqueName: \"kubernetes.io/projected/635c2d5a-2f12-46cf-86e2-ec4f689039e8-kube-api-access-m7n74\") pod \"certified-operators-xr8zc\" (UID: \"635c2d5a-2f12-46cf-86e2-ec4f689039e8\") " pod="openshift-marketplace/certified-operators-xr8zc"
Dec 10 12:31:00 crc kubenswrapper[4689]: I1210 12:31:00.785386 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-xr8zc"
Dec 10 12:31:01 crc kubenswrapper[4689]: I1210 12:31:01.303939 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-xr8zc"]
Dec 10 12:31:01 crc kubenswrapper[4689]: W1210 12:31:01.308599 4689 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod635c2d5a_2f12_46cf_86e2_ec4f689039e8.slice/crio-e02c3b207b6db756fd2f287417f609df042d1c689ec8acc6735827c8ef563ada WatchSource:0}: Error finding container e02c3b207b6db756fd2f287417f609df042d1c689ec8acc6735827c8ef563ada: Status 404 returned error can't find the container with id e02c3b207b6db756fd2f287417f609df042d1c689ec8acc6735827c8ef563ada
Dec 10 12:31:01 crc kubenswrapper[4689]: I1210 12:31:01.414556 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-9t496"
Dec 10 12:31:01 crc kubenswrapper[4689]: I1210 12:31:01.957131 4689 generic.go:334] "Generic (PLEG): container finished" podID="635c2d5a-2f12-46cf-86e2-ec4f689039e8" containerID="03302c78147c5d38086982fd9d21a4e814355e5c72656260819d0782a19557cc" exitCode=0
Dec 10 12:31:01 crc kubenswrapper[4689]: I1210 12:31:01.957222 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xr8zc" event={"ID":"635c2d5a-2f12-46cf-86e2-ec4f689039e8","Type":"ContainerDied","Data":"03302c78147c5d38086982fd9d21a4e814355e5c72656260819d0782a19557cc"}
Dec 10 12:31:01 crc kubenswrapper[4689]: I1210 12:31:01.957294 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xr8zc" event={"ID":"635c2d5a-2f12-46cf-86e2-ec4f689039e8","Type":"ContainerStarted","Data":"e02c3b207b6db756fd2f287417f609df042d1c689ec8acc6735827c8ef563ada"}
Dec 10 12:31:03 crc kubenswrapper[4689]: I1210 12:31:03.975317 4689 generic.go:334] "Generic (PLEG): container finished" podID="635c2d5a-2f12-46cf-86e2-ec4f689039e8" containerID="5733d1666a70659aafc423d534a21779afedc3e0c66841f5e087dc9410669e3f" exitCode=0
Dec 10 12:31:03 crc kubenswrapper[4689]: I1210 12:31:03.975403 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xr8zc" event={"ID":"635c2d5a-2f12-46cf-86e2-ec4f689039e8","Type":"ContainerDied","Data":"5733d1666a70659aafc423d534a21779afedc3e0c66841f5e087dc9410669e3f"}
Dec 10 12:31:04 crc kubenswrapper[4689]: I1210 12:31:04.983495 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xr8zc" event={"ID":"635c2d5a-2f12-46cf-86e2-ec4f689039e8","Type":"ContainerStarted","Data":"5d0f4b11999205b5fcdf73d7eacd418042aad9f86eff93803280c1f041a81bcc"}
Dec 10 12:31:05 crc kubenswrapper[4689]: I1210 12:31:05.010461 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-xr8zc" podStartSLOduration=2.332372052 podStartE2EDuration="5.010446479s" podCreationTimestamp="2025-12-10 12:31:00 +0000 UTC" firstStartedPulling="2025-12-10 12:31:01.959457375 +0000 UTC m=+929.747538543" lastFinishedPulling="2025-12-10 12:31:04.637531792 +0000 UTC m=+932.425612970" observedRunningTime="2025-12-10 12:31:05.009607468 +0000 UTC m=+932.797688626" watchObservedRunningTime="2025-12-10 12:31:05.010446479 +0000 UTC m=+932.798527617"
Dec 10 12:31:06 crc kubenswrapper[4689]: I1210 12:31:06.086043 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/941ec9b01b7a72127ba64306553d6766ab71ba59f32f9ca01c76932a62hvf45"]
Dec 10 12:31:06 crc kubenswrapper[4689]: I1210 12:31:06.087561 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/941ec9b01b7a72127ba64306553d6766ab71ba59f32f9ca01c76932a62hvf45"
Dec 10 12:31:06 crc kubenswrapper[4689]: I1210 12:31:06.089826 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-pflf8"
Dec 10 12:31:06 crc kubenswrapper[4689]: I1210 12:31:06.101752 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/941ec9b01b7a72127ba64306553d6766ab71ba59f32f9ca01c76932a62hvf45"]
Dec 10 12:31:06 crc kubenswrapper[4689]: I1210 12:31:06.146641 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/c5985c32-65e5-453a-a56e-a95411e80db0-bundle\") pod \"941ec9b01b7a72127ba64306553d6766ab71ba59f32f9ca01c76932a62hvf45\" (UID: \"c5985c32-65e5-453a-a56e-a95411e80db0\") " pod="openstack-operators/941ec9b01b7a72127ba64306553d6766ab71ba59f32f9ca01c76932a62hvf45"
Dec 10 12:31:06 crc kubenswrapper[4689]: I1210 12:31:06.146697 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/c5985c32-65e5-453a-a56e-a95411e80db0-util\") pod \"941ec9b01b7a72127ba64306553d6766ab71ba59f32f9ca01c76932a62hvf45\" (UID: \"c5985c32-65e5-453a-a56e-a95411e80db0\") " pod="openstack-operators/941ec9b01b7a72127ba64306553d6766ab71ba59f32f9ca01c76932a62hvf45"
Dec 10 12:31:06 crc kubenswrapper[4689]: I1210 12:31:06.146896 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zmj2j\" (UniqueName: \"kubernetes.io/projected/c5985c32-65e5-453a-a56e-a95411e80db0-kube-api-access-zmj2j\") pod \"941ec9b01b7a72127ba64306553d6766ab71ba59f32f9ca01c76932a62hvf45\" (UID: \"c5985c32-65e5-453a-a56e-a95411e80db0\") " pod="openstack-operators/941ec9b01b7a72127ba64306553d6766ab71ba59f32f9ca01c76932a62hvf45"
Dec 10 12:31:06 crc kubenswrapper[4689]: I1210 12:31:06.248601 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/c5985c32-65e5-453a-a56e-a95411e80db0-util\") pod \"941ec9b01b7a72127ba64306553d6766ab71ba59f32f9ca01c76932a62hvf45\" (UID: \"c5985c32-65e5-453a-a56e-a95411e80db0\") " pod="openstack-operators/941ec9b01b7a72127ba64306553d6766ab71ba59f32f9ca01c76932a62hvf45"
Dec 10 12:31:06 crc kubenswrapper[4689]: I1210 12:31:06.248773 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zmj2j\" (UniqueName: \"kubernetes.io/projected/c5985c32-65e5-453a-a56e-a95411e80db0-kube-api-access-zmj2j\") pod \"941ec9b01b7a72127ba64306553d6766ab71ba59f32f9ca01c76932a62hvf45\" (UID: \"c5985c32-65e5-453a-a56e-a95411e80db0\") " pod="openstack-operators/941ec9b01b7a72127ba64306553d6766ab71ba59f32f9ca01c76932a62hvf45"
Dec 10 12:31:06 crc kubenswrapper[4689]: I1210 12:31:06.248914 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/c5985c32-65e5-453a-a56e-a95411e80db0-bundle\") pod \"941ec9b01b7a72127ba64306553d6766ab71ba59f32f9ca01c76932a62hvf45\" (UID: \"c5985c32-65e5-453a-a56e-a95411e80db0\") " pod="openstack-operators/941ec9b01b7a72127ba64306553d6766ab71ba59f32f9ca01c76932a62hvf45"
Dec 10 12:31:06 crc kubenswrapper[4689]: I1210 12:31:06.249443 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/c5985c32-65e5-453a-a56e-a95411e80db0-util\") pod \"941ec9b01b7a72127ba64306553d6766ab71ba59f32f9ca01c76932a62hvf45\" (UID: \"c5985c32-65e5-453a-a56e-a95411e80db0\") " pod="openstack-operators/941ec9b01b7a72127ba64306553d6766ab71ba59f32f9ca01c76932a62hvf45"
Dec 10 12:31:06 crc kubenswrapper[4689]: I1210 12:31:06.249676 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/c5985c32-65e5-453a-a56e-a95411e80db0-bundle\") pod \"941ec9b01b7a72127ba64306553d6766ab71ba59f32f9ca01c76932a62hvf45\" (UID: \"c5985c32-65e5-453a-a56e-a95411e80db0\") " pod="openstack-operators/941ec9b01b7a72127ba64306553d6766ab71ba59f32f9ca01c76932a62hvf45"
Dec 10 12:31:06 crc kubenswrapper[4689]: I1210 12:31:06.281111 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zmj2j\" (UniqueName: \"kubernetes.io/projected/c5985c32-65e5-453a-a56e-a95411e80db0-kube-api-access-zmj2j\") pod \"941ec9b01b7a72127ba64306553d6766ab71ba59f32f9ca01c76932a62hvf45\" (UID: \"c5985c32-65e5-453a-a56e-a95411e80db0\") " pod="openstack-operators/941ec9b01b7a72127ba64306553d6766ab71ba59f32f9ca01c76932a62hvf45"
Dec 10 12:31:06 crc kubenswrapper[4689]: I1210 12:31:06.406983 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/941ec9b01b7a72127ba64306553d6766ab71ba59f32f9ca01c76932a62hvf45"
Dec 10 12:31:06 crc kubenswrapper[4689]: I1210 12:31:06.617418 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/941ec9b01b7a72127ba64306553d6766ab71ba59f32f9ca01c76932a62hvf45"]
Dec 10 12:31:06 crc kubenswrapper[4689]: W1210 12:31:06.624423 4689 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc5985c32_65e5_453a_a56e_a95411e80db0.slice/crio-efa164461f2eb5f00afdd0c33ebf2479ca421a93201e66a8a99e42e12f1529a9 WatchSource:0}: Error finding container efa164461f2eb5f00afdd0c33ebf2479ca421a93201e66a8a99e42e12f1529a9: Status 404 returned error can't find the container with id efa164461f2eb5f00afdd0c33ebf2479ca421a93201e66a8a99e42e12f1529a9
Dec 10 12:31:07 crc kubenswrapper[4689]: I1210 12:31:07.015262 4689 generic.go:334] "Generic (PLEG): container finished" podID="c5985c32-65e5-453a-a56e-a95411e80db0" containerID="9e272875e6bcba7f91cdb32ad9911097efb14b86c23a1dfa3a339c4431fc84b9" exitCode=0
Dec 10 12:31:07 crc kubenswrapper[4689]: I1210 12:31:07.015336 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/941ec9b01b7a72127ba64306553d6766ab71ba59f32f9ca01c76932a62hvf45" event={"ID":"c5985c32-65e5-453a-a56e-a95411e80db0","Type":"ContainerDied","Data":"9e272875e6bcba7f91cdb32ad9911097efb14b86c23a1dfa3a339c4431fc84b9"}
Dec 10 12:31:07 crc kubenswrapper[4689]: I1210 12:31:07.015412 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/941ec9b01b7a72127ba64306553d6766ab71ba59f32f9ca01c76932a62hvf45" event={"ID":"c5985c32-65e5-453a-a56e-a95411e80db0","Type":"ContainerStarted","Data":"efa164461f2eb5f00afdd0c33ebf2479ca421a93201e66a8a99e42e12f1529a9"}
Dec 10 12:31:07 crc kubenswrapper[4689]: I1210 12:31:07.166431 4689 patch_prober.go:28] interesting pod/machine-config-daemon-db6zk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 10 12:31:07 crc kubenswrapper[4689]: I1210 12:31:07.166736 4689 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-db6zk" podUID="a41ebdcd-910f-4669-992d-296e1a92162b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 10 12:31:08 crc kubenswrapper[4689]: I1210 12:31:08.029123 4689 generic.go:334] "Generic (PLEG): container finished" podID="c5985c32-65e5-453a-a56e-a95411e80db0" containerID="db7e747543b4d42e03b13679c0239cc595e497c9316a5f382726fad6ecc8c957" exitCode=0
Dec 10 12:31:08 crc kubenswrapper[4689]: I1210 12:31:08.029189 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/941ec9b01b7a72127ba64306553d6766ab71ba59f32f9ca01c76932a62hvf45" event={"ID":"c5985c32-65e5-453a-a56e-a95411e80db0","Type":"ContainerDied","Data":"db7e747543b4d42e03b13679c0239cc595e497c9316a5f382726fad6ecc8c957"}
Dec 10 12:31:09 crc kubenswrapper[4689]: I1210 12:31:09.042016 4689 generic.go:334] "Generic (PLEG): container finished" podID="c5985c32-65e5-453a-a56e-a95411e80db0" containerID="96aabdc0f3ea55033bd08d2c937a3e939373bde059f4a46564edeaca23cb2e0e" exitCode=0
Dec 10 12:31:09 crc kubenswrapper[4689]: I1210 12:31:09.042148 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/941ec9b01b7a72127ba64306553d6766ab71ba59f32f9ca01c76932a62hvf45" event={"ID":"c5985c32-65e5-453a-a56e-a95411e80db0","Type":"ContainerDied","Data":"96aabdc0f3ea55033bd08d2c937a3e939373bde059f4a46564edeaca23cb2e0e"}
Dec 10 12:31:10 crc kubenswrapper[4689]: I1210 12:31:10.387939 4689 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/941ec9b01b7a72127ba64306553d6766ab71ba59f32f9ca01c76932a62hvf45"
Dec 10 12:31:10 crc kubenswrapper[4689]: I1210 12:31:10.510761 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/c5985c32-65e5-453a-a56e-a95411e80db0-util\") pod \"c5985c32-65e5-453a-a56e-a95411e80db0\" (UID: \"c5985c32-65e5-453a-a56e-a95411e80db0\") "
Dec 10 12:31:10 crc kubenswrapper[4689]: I1210 12:31:10.510862 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zmj2j\" (UniqueName: \"kubernetes.io/projected/c5985c32-65e5-453a-a56e-a95411e80db0-kube-api-access-zmj2j\") pod \"c5985c32-65e5-453a-a56e-a95411e80db0\" (UID: \"c5985c32-65e5-453a-a56e-a95411e80db0\") "
Dec 10 12:31:10 crc kubenswrapper[4689]: I1210 12:31:10.511345 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/c5985c32-65e5-453a-a56e-a95411e80db0-bundle\") pod \"c5985c32-65e5-453a-a56e-a95411e80db0\" (UID: \"c5985c32-65e5-453a-a56e-a95411e80db0\") "
Dec 10 12:31:10 crc kubenswrapper[4689]: I1210 12:31:10.513051 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c5985c32-65e5-453a-a56e-a95411e80db0-bundle" (OuterVolumeSpecName: "bundle") pod "c5985c32-65e5-453a-a56e-a95411e80db0" (UID: "c5985c32-65e5-453a-a56e-a95411e80db0"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 10 12:31:10 crc kubenswrapper[4689]: I1210 12:31:10.520514 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c5985c32-65e5-453a-a56e-a95411e80db0-kube-api-access-zmj2j" (OuterVolumeSpecName: "kube-api-access-zmj2j") pod "c5985c32-65e5-453a-a56e-a95411e80db0" (UID: "c5985c32-65e5-453a-a56e-a95411e80db0"). InnerVolumeSpecName "kube-api-access-zmj2j". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 10 12:31:10 crc kubenswrapper[4689]: I1210 12:31:10.527573 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c5985c32-65e5-453a-a56e-a95411e80db0-util" (OuterVolumeSpecName: "util") pod "c5985c32-65e5-453a-a56e-a95411e80db0" (UID: "c5985c32-65e5-453a-a56e-a95411e80db0"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 10 12:31:10 crc kubenswrapper[4689]: I1210 12:31:10.613525 4689 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/c5985c32-65e5-453a-a56e-a95411e80db0-bundle\") on node \"crc\" DevicePath \"\""
Dec 10 12:31:10 crc kubenswrapper[4689]: I1210 12:31:10.613565 4689 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/c5985c32-65e5-453a-a56e-a95411e80db0-util\") on node \"crc\" DevicePath \"\""
Dec 10 12:31:10 crc kubenswrapper[4689]: I1210 12:31:10.613575 4689 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zmj2j\" (UniqueName: \"kubernetes.io/projected/c5985c32-65e5-453a-a56e-a95411e80db0-kube-api-access-zmj2j\") on node \"crc\" DevicePath \"\""
Dec 10 12:31:10 crc kubenswrapper[4689]: I1210 12:31:10.785862 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-xr8zc"
Dec 10 12:31:10 crc kubenswrapper[4689]: I1210 12:31:10.786211 4689 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-xr8zc"
Dec 10 12:31:10 crc kubenswrapper[4689]: I1210 12:31:10.856375 4689 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-xr8zc"
Dec 10 12:31:11 crc kubenswrapper[4689]: I1210 12:31:11.057175 4689 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/941ec9b01b7a72127ba64306553d6766ab71ba59f32f9ca01c76932a62hvf45"
Dec 10 12:31:11 crc kubenswrapper[4689]: I1210 12:31:11.057218 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/941ec9b01b7a72127ba64306553d6766ab71ba59f32f9ca01c76932a62hvf45" event={"ID":"c5985c32-65e5-453a-a56e-a95411e80db0","Type":"ContainerDied","Data":"efa164461f2eb5f00afdd0c33ebf2479ca421a93201e66a8a99e42e12f1529a9"}
Dec 10 12:31:11 crc kubenswrapper[4689]: I1210 12:31:11.057260 4689 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="efa164461f2eb5f00afdd0c33ebf2479ca421a93201e66a8a99e42e12f1529a9"
Dec 10 12:31:11 crc kubenswrapper[4689]: I1210 12:31:11.105075 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-xr8zc"
Dec 10 12:31:13 crc kubenswrapper[4689]: I1210 12:31:13.235403 4689 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-xr8zc"]
Dec 10 12:31:13 crc kubenswrapper[4689]: I1210 12:31:13.610147 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-operator-6476f95cff-ktc7c"]
Dec 10 12:31:13 crc kubenswrapper[4689]: E1210 12:31:13.610423 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c5985c32-65e5-453a-a56e-a95411e80db0" containerName="pull"
Dec 10 12:31:13 crc kubenswrapper[4689]: I1210 12:31:13.610438 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="c5985c32-65e5-453a-a56e-a95411e80db0" containerName="pull"
Dec 10 12:31:13 crc kubenswrapper[4689]: E1210 12:31:13.610449 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c5985c32-65e5-453a-a56e-a95411e80db0" containerName="extract"
Dec 10 12:31:13 crc kubenswrapper[4689]: I1210 12:31:13.610457 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="c5985c32-65e5-453a-a56e-a95411e80db0" containerName="extract"
Dec 10 12:31:13 crc kubenswrapper[4689]: E1210 12:31:13.610486 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c5985c32-65e5-453a-a56e-a95411e80db0" containerName="util"
Dec 10 12:31:13 crc kubenswrapper[4689]: I1210 12:31:13.610495 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="c5985c32-65e5-453a-a56e-a95411e80db0" containerName="util"
Dec 10 12:31:13 crc kubenswrapper[4689]: I1210 12:31:13.610637 4689 memory_manager.go:354] "RemoveStaleState removing state" podUID="c5985c32-65e5-453a-a56e-a95411e80db0" containerName="extract"
Dec 10 12:31:13 crc kubenswrapper[4689]: I1210 12:31:13.611155 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-operator-6476f95cff-ktc7c"
Dec 10 12:31:13 crc kubenswrapper[4689]: I1210 12:31:13.613764 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-operator-dockercfg-cw6qp"
Dec 10 12:31:13 crc kubenswrapper[4689]: I1210 12:31:13.630080 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-operator-6476f95cff-ktc7c"]
Dec 10 12:31:13 crc kubenswrapper[4689]: I1210 12:31:13.751901 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7prrn\" (UniqueName: \"kubernetes.io/projected/a138024c-6885-4a4b-abc6-e4cec00348d6-kube-api-access-7prrn\") pod \"openstack-operator-controller-operator-6476f95cff-ktc7c\" (UID: \"a138024c-6885-4a4b-abc6-e4cec00348d6\") " pod="openstack-operators/openstack-operator-controller-operator-6476f95cff-ktc7c"
Dec 10 12:31:13 crc kubenswrapper[4689]: I1210 12:31:13.853177 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7prrn\" (UniqueName: \"kubernetes.io/projected/a138024c-6885-4a4b-abc6-e4cec00348d6-kube-api-access-7prrn\") pod \"openstack-operator-controller-operator-6476f95cff-ktc7c\" (UID: \"a138024c-6885-4a4b-abc6-e4cec00348d6\") " pod="openstack-operators/openstack-operator-controller-operator-6476f95cff-ktc7c"
Dec 10 12:31:13 crc kubenswrapper[4689]: I1210 12:31:13.877754 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7prrn\" (UniqueName: \"kubernetes.io/projected/a138024c-6885-4a4b-abc6-e4cec00348d6-kube-api-access-7prrn\") pod \"openstack-operator-controller-operator-6476f95cff-ktc7c\" (UID: \"a138024c-6885-4a4b-abc6-e4cec00348d6\") " pod="openstack-operators/openstack-operator-controller-operator-6476f95cff-ktc7c"
Dec 10 12:31:13 crc kubenswrapper[4689]: I1210 12:31:13.936612 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-operator-6476f95cff-ktc7c"
Dec 10 12:31:14 crc kubenswrapper[4689]: I1210 12:31:14.084440 4689 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-xr8zc" podUID="635c2d5a-2f12-46cf-86e2-ec4f689039e8" containerName="registry-server" containerID="cri-o://5d0f4b11999205b5fcdf73d7eacd418042aad9f86eff93803280c1f041a81bcc" gracePeriod=2
Dec 10 12:31:14 crc kubenswrapper[4689]: I1210 12:31:14.408259 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-operator-6476f95cff-ktc7c"]
Dec 10 12:31:15 crc kubenswrapper[4689]: I1210 12:31:15.096444 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-operator-6476f95cff-ktc7c" event={"ID":"a138024c-6885-4a4b-abc6-e4cec00348d6","Type":"ContainerStarted","Data":"19036648516cd1f9b5725f1ff898a41ee7f93f602e91b6c5b1884971465da185"}
Dec 10 12:31:15 crc kubenswrapper[4689]: I1210 12:31:15.596129 4689 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-xr8zc"
Dec 10 12:31:15 crc kubenswrapper[4689]: I1210 12:31:15.676764 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/635c2d5a-2f12-46cf-86e2-ec4f689039e8-catalog-content\") pod \"635c2d5a-2f12-46cf-86e2-ec4f689039e8\" (UID: \"635c2d5a-2f12-46cf-86e2-ec4f689039e8\") "
Dec 10 12:31:15 crc kubenswrapper[4689]: I1210 12:31:15.676866 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m7n74\" (UniqueName: \"kubernetes.io/projected/635c2d5a-2f12-46cf-86e2-ec4f689039e8-kube-api-access-m7n74\") pod \"635c2d5a-2f12-46cf-86e2-ec4f689039e8\" (UID: \"635c2d5a-2f12-46cf-86e2-ec4f689039e8\") "
Dec 10 12:31:15 crc kubenswrapper[4689]: I1210 12:31:15.676887 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/635c2d5a-2f12-46cf-86e2-ec4f689039e8-utilities\") pod \"635c2d5a-2f12-46cf-86e2-ec4f689039e8\" (UID: \"635c2d5a-2f12-46cf-86e2-ec4f689039e8\") "
Dec 10 12:31:15 crc kubenswrapper[4689]: I1210 12:31:15.677897 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/635c2d5a-2f12-46cf-86e2-ec4f689039e8-utilities" (OuterVolumeSpecName: "utilities") pod "635c2d5a-2f12-46cf-86e2-ec4f689039e8" (UID: "635c2d5a-2f12-46cf-86e2-ec4f689039e8"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 10 12:31:15 crc kubenswrapper[4689]: I1210 12:31:15.683199 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/635c2d5a-2f12-46cf-86e2-ec4f689039e8-kube-api-access-m7n74" (OuterVolumeSpecName: "kube-api-access-m7n74") pod "635c2d5a-2f12-46cf-86e2-ec4f689039e8" (UID: "635c2d5a-2f12-46cf-86e2-ec4f689039e8"). InnerVolumeSpecName "kube-api-access-m7n74". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 10 12:31:15 crc kubenswrapper[4689]: I1210 12:31:15.723053 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/635c2d5a-2f12-46cf-86e2-ec4f689039e8-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "635c2d5a-2f12-46cf-86e2-ec4f689039e8" (UID: "635c2d5a-2f12-46cf-86e2-ec4f689039e8"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 10 12:31:15 crc kubenswrapper[4689]: I1210 12:31:15.778260 4689 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m7n74\" (UniqueName: \"kubernetes.io/projected/635c2d5a-2f12-46cf-86e2-ec4f689039e8-kube-api-access-m7n74\") on node \"crc\" DevicePath \"\""
Dec 10 12:31:15 crc kubenswrapper[4689]: I1210 12:31:15.778292 4689 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/635c2d5a-2f12-46cf-86e2-ec4f689039e8-utilities\") on node \"crc\" DevicePath \"\""
Dec 10 12:31:15 crc kubenswrapper[4689]: I1210 12:31:15.778339 4689 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/635c2d5a-2f12-46cf-86e2-ec4f689039e8-catalog-content\") on node \"crc\" DevicePath \"\""
Dec 10 12:31:16 crc kubenswrapper[4689]: I1210 12:31:16.166218 4689 generic.go:334] "Generic (PLEG): container finished" podID="635c2d5a-2f12-46cf-86e2-ec4f689039e8" containerID="5d0f4b11999205b5fcdf73d7eacd418042aad9f86eff93803280c1f041a81bcc" exitCode=0
Dec 10 12:31:16 crc kubenswrapper[4689]: I1210 12:31:16.166265 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xr8zc" event={"ID":"635c2d5a-2f12-46cf-86e2-ec4f689039e8","Type":"ContainerDied","Data":"5d0f4b11999205b5fcdf73d7eacd418042aad9f86eff93803280c1f041a81bcc"}
Dec 10 12:31:16 crc kubenswrapper[4689]: I1210 12:31:16.166294 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xr8zc" event={"ID":"635c2d5a-2f12-46cf-86e2-ec4f689039e8","Type":"ContainerDied","Data":"e02c3b207b6db756fd2f287417f609df042d1c689ec8acc6735827c8ef563ada"}
Dec 10 12:31:16 crc kubenswrapper[4689]: I1210 12:31:16.166312 4689 scope.go:117] "RemoveContainer" containerID="5d0f4b11999205b5fcdf73d7eacd418042aad9f86eff93803280c1f041a81bcc"
Dec 10 12:31:16 crc kubenswrapper[4689]: I1210 12:31:16.166448 4689 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-xr8zc"
Dec 10 12:31:16 crc kubenswrapper[4689]: I1210 12:31:16.201375 4689 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-xr8zc"]
Dec 10 12:31:16 crc kubenswrapper[4689]: I1210 12:31:16.208556 4689 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-xr8zc"]
Dec 10 12:31:16 crc kubenswrapper[4689]: I1210 12:31:16.210284 4689 scope.go:117] "RemoveContainer" containerID="5733d1666a70659aafc423d534a21779afedc3e0c66841f5e087dc9410669e3f"
Dec 10 12:31:16 crc kubenswrapper[4689]: I1210 12:31:16.234130 4689 scope.go:117] "RemoveContainer" containerID="03302c78147c5d38086982fd9d21a4e814355e5c72656260819d0782a19557cc"
Dec 10 12:31:16 crc kubenswrapper[4689]: I1210 12:31:16.258246 4689 scope.go:117] "RemoveContainer" containerID="5d0f4b11999205b5fcdf73d7eacd418042aad9f86eff93803280c1f041a81bcc"
Dec 10 12:31:16 crc kubenswrapper[4689]: E1210 12:31:16.259368 4689 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5d0f4b11999205b5fcdf73d7eacd418042aad9f86eff93803280c1f041a81bcc\": container with ID starting with 5d0f4b11999205b5fcdf73d7eacd418042aad9f86eff93803280c1f041a81bcc not found: ID does not exist" containerID="5d0f4b11999205b5fcdf73d7eacd418042aad9f86eff93803280c1f041a81bcc"
Dec 10 12:31:16 crc kubenswrapper[4689]: I1210 12:31:16.259421 4689 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5d0f4b11999205b5fcdf73d7eacd418042aad9f86eff93803280c1f041a81bcc"} err="failed to get container status \"5d0f4b11999205b5fcdf73d7eacd418042aad9f86eff93803280c1f041a81bcc\": rpc error: code = NotFound desc = could not find container \"5d0f4b11999205b5fcdf73d7eacd418042aad9f86eff93803280c1f041a81bcc\": container with ID starting with 5d0f4b11999205b5fcdf73d7eacd418042aad9f86eff93803280c1f041a81bcc not found: ID does not exist"
Dec 10 12:31:16 crc kubenswrapper[4689]: I1210 12:31:16.259454 4689 scope.go:117] "RemoveContainer" containerID="5733d1666a70659aafc423d534a21779afedc3e0c66841f5e087dc9410669e3f"
Dec 10 12:31:16 crc kubenswrapper[4689]: E1210 12:31:16.262132 4689 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5733d1666a70659aafc423d534a21779afedc3e0c66841f5e087dc9410669e3f\": container with ID starting with 5733d1666a70659aafc423d534a21779afedc3e0c66841f5e087dc9410669e3f not found: ID does not exist" containerID="5733d1666a70659aafc423d534a21779afedc3e0c66841f5e087dc9410669e3f"
Dec 10 12:31:16 crc kubenswrapper[4689]: I1210 12:31:16.265118 4689 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5733d1666a70659aafc423d534a21779afedc3e0c66841f5e087dc9410669e3f"} err="failed to get container status \"5733d1666a70659aafc423d534a21779afedc3e0c66841f5e087dc9410669e3f\": rpc error: code = NotFound desc = could not find container \"5733d1666a70659aafc423d534a21779afedc3e0c66841f5e087dc9410669e3f\": container with ID starting with 5733d1666a70659aafc423d534a21779afedc3e0c66841f5e087dc9410669e3f not found: ID does not exist"
Dec 10 12:31:16 crc kubenswrapper[4689]: I1210 12:31:16.265182 4689 scope.go:117] "RemoveContainer" containerID="03302c78147c5d38086982fd9d21a4e814355e5c72656260819d0782a19557cc"
Dec 10 12:31:16 crc kubenswrapper[4689]: E1210 12:31:16.265869 4689 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"03302c78147c5d38086982fd9d21a4e814355e5c72656260819d0782a19557cc\": container with ID starting with 03302c78147c5d38086982fd9d21a4e814355e5c72656260819d0782a19557cc not found: ID does not exist" containerID="03302c78147c5d38086982fd9d21a4e814355e5c72656260819d0782a19557cc"
Dec 10 12:31:16 crc kubenswrapper[4689]: I1210 12:31:16.265910 4689 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"03302c78147c5d38086982fd9d21a4e814355e5c72656260819d0782a19557cc"} err="failed to get container status \"03302c78147c5d38086982fd9d21a4e814355e5c72656260819d0782a19557cc\": rpc error: code = NotFound desc = could not find container \"03302c78147c5d38086982fd9d21a4e814355e5c72656260819d0782a19557cc\": container with ID starting with 03302c78147c5d38086982fd9d21a4e814355e5c72656260819d0782a19557cc not found: ID does not exist"
Dec 10 12:31:16 crc kubenswrapper[4689]: I1210 12:31:16.504395 4689 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="635c2d5a-2f12-46cf-86e2-ec4f689039e8" path="/var/lib/kubelet/pods/635c2d5a-2f12-46cf-86e2-ec4f689039e8/volumes"
Dec 10 12:31:20 crc kubenswrapper[4689]: I1210 12:31:20.197055 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-operator-6476f95cff-ktc7c" event={"ID":"a138024c-6885-4a4b-abc6-e4cec00348d6","Type":"ContainerStarted","Data":"1102bc02bdd83b0e1bc2452d5335caeb080af92c4341a127c485a7bcf7448bf7"}
Dec 10 12:31:20 crc kubenswrapper[4689]: I1210 12:31:20.197214 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-operator-6476f95cff-ktc7c"
Dec 10 12:31:20 crc kubenswrapper[4689]: I1210 12:31:20.244502 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-operator-6476f95cff-ktc7c" podStartSLOduration=2.529889814 podStartE2EDuration="7.244478476s" podCreationTimestamp="2025-12-10 12:31:13 +0000 UTC" firstStartedPulling="2025-12-10 12:31:14.4366815 +0000 UTC m=+942.224762638" lastFinishedPulling="2025-12-10 12:31:19.151270152 +0000 UTC m=+946.939351300" observedRunningTime="2025-12-10 12:31:20.236430144 +0000 UTC m=+948.024511312" watchObservedRunningTime="2025-12-10 12:31:20.244478476 +0000 UTC m=+948.032559644"
Dec 10 12:31:23 crc kubenswrapper[4689]: I1210 12:31:23.050969 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-tsbrb"]
Dec 10 12:31:23 crc kubenswrapper[4689]: E1210 12:31:23.051640 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="635c2d5a-2f12-46cf-86e2-ec4f689039e8" containerName="extract-content"
Dec 10 12:31:23 crc kubenswrapper[4689]: I1210 12:31:23.051661 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="635c2d5a-2f12-46cf-86e2-ec4f689039e8" containerName="extract-content"
Dec 10 12:31:23 crc kubenswrapper[4689]: E1210 12:31:23.051699 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="635c2d5a-2f12-46cf-86e2-ec4f689039e8" containerName="extract-utilities"
Dec 10 12:31:23 crc kubenswrapper[4689]: I1210 12:31:23.051713 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="635c2d5a-2f12-46cf-86e2-ec4f689039e8" containerName="extract-utilities"
Dec 10 12:31:23 crc kubenswrapper[4689]: E1210 12:31:23.051737 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="635c2d5a-2f12-46cf-86e2-ec4f689039e8" containerName="registry-server"
Dec 10 12:31:23 crc kubenswrapper[4689]: I1210 12:31:23.051750 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="635c2d5a-2f12-46cf-86e2-ec4f689039e8" containerName="registry-server"
Dec 10 12:31:23 crc kubenswrapper[4689]: I1210 12:31:23.051953 4689 memory_manager.go:354] "RemoveStaleState removing state" podUID="635c2d5a-2f12-46cf-86e2-ec4f689039e8" containerName="registry-server"
Dec 10 12:31:23 crc kubenswrapper[4689]: I1210 12:31:23.053702 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-tsbrb"
Dec 10 12:31:23 crc kubenswrapper[4689]: I1210 12:31:23.065073 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-tsbrb"]
Dec 10 12:31:23 crc kubenswrapper[4689]: I1210 12:31:23.172277 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1f39a741-058f-4eeb-96e1-fb5fed6aea30-catalog-content\") pod \"redhat-marketplace-tsbrb\" (UID: \"1f39a741-058f-4eeb-96e1-fb5fed6aea30\") " pod="openshift-marketplace/redhat-marketplace-tsbrb"
Dec 10 12:31:23 crc kubenswrapper[4689]: I1210 12:31:23.172354 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cw5f4\" (UniqueName: \"kubernetes.io/projected/1f39a741-058f-4eeb-96e1-fb5fed6aea30-kube-api-access-cw5f4\") pod \"redhat-marketplace-tsbrb\" (UID: \"1f39a741-058f-4eeb-96e1-fb5fed6aea30\") " pod="openshift-marketplace/redhat-marketplace-tsbrb"
Dec 10 12:31:23 crc kubenswrapper[4689]: I1210 12:31:23.172547 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1f39a741-058f-4eeb-96e1-fb5fed6aea30-utilities\") pod \"redhat-marketplace-tsbrb\" (UID: \"1f39a741-058f-4eeb-96e1-fb5fed6aea30\") " pod="openshift-marketplace/redhat-marketplace-tsbrb"
Dec 10 12:31:23 crc kubenswrapper[4689]: I1210 12:31:23.274219 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1f39a741-058f-4eeb-96e1-fb5fed6aea30-utilities\") pod \"redhat-marketplace-tsbrb\" (UID: \"1f39a741-058f-4eeb-96e1-fb5fed6aea30\") " pod="openshift-marketplace/redhat-marketplace-tsbrb"
Dec 10 12:31:23 crc kubenswrapper[4689]: I1210 12:31:23.274272 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1f39a741-058f-4eeb-96e1-fb5fed6aea30-catalog-content\") pod \"redhat-marketplace-tsbrb\" (UID: \"1f39a741-058f-4eeb-96e1-fb5fed6aea30\") " pod="openshift-marketplace/redhat-marketplace-tsbrb"
Dec 10 12:31:23 crc kubenswrapper[4689]: I1210 12:31:23.274306 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cw5f4\" (UniqueName: \"kubernetes.io/projected/1f39a741-058f-4eeb-96e1-fb5fed6aea30-kube-api-access-cw5f4\") pod \"redhat-marketplace-tsbrb\" (UID: \"1f39a741-058f-4eeb-96e1-fb5fed6aea30\") " pod="openshift-marketplace/redhat-marketplace-tsbrb"
Dec 10 12:31:23 crc kubenswrapper[4689]: I1210 12:31:23.274848 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1f39a741-058f-4eeb-96e1-fb5fed6aea30-utilities\") pod \"redhat-marketplace-tsbrb\" (UID: \"1f39a741-058f-4eeb-96e1-fb5fed6aea30\") " pod="openshift-marketplace/redhat-marketplace-tsbrb"
Dec 10 12:31:23 crc kubenswrapper[4689]: I1210 12:31:23.274942 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1f39a741-058f-4eeb-96e1-fb5fed6aea30-catalog-content\") pod \"redhat-marketplace-tsbrb\" (UID: \"1f39a741-058f-4eeb-96e1-fb5fed6aea30\") " pod="openshift-marketplace/redhat-marketplace-tsbrb"
Dec 10 12:31:23 crc kubenswrapper[4689]: I1210 12:31:23.299135 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cw5f4\" (UniqueName: \"kubernetes.io/projected/1f39a741-058f-4eeb-96e1-fb5fed6aea30-kube-api-access-cw5f4\") pod \"redhat-marketplace-tsbrb\" (UID: \"1f39a741-058f-4eeb-96e1-fb5fed6aea30\") " pod="openshift-marketplace/redhat-marketplace-tsbrb"
Dec 10 12:31:23 crc kubenswrapper[4689]: I1210 12:31:23.385073 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-tsbrb"
Dec 10 12:31:23 crc kubenswrapper[4689]: I1210 12:31:23.811673 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-tsbrb"]
Dec 10 12:31:23 crc kubenswrapper[4689]: W1210 12:31:23.836273 4689 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1f39a741_058f_4eeb_96e1_fb5fed6aea30.slice/crio-217f893569d45109b4baba76f5fdea899352623552f5423310b9c80ac6257611 WatchSource:0}: Error finding container 217f893569d45109b4baba76f5fdea899352623552f5423310b9c80ac6257611: Status 404 returned error can't find the container with id 217f893569d45109b4baba76f5fdea899352623552f5423310b9c80ac6257611
Dec 10 12:31:24 crc kubenswrapper[4689]: I1210 12:31:24.222632 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tsbrb" event={"ID":"1f39a741-058f-4eeb-96e1-fb5fed6aea30","Type":"ContainerStarted","Data":"217f893569d45109b4baba76f5fdea899352623552f5423310b9c80ac6257611"}
Dec 10 12:31:25 crc kubenswrapper[4689]: I1210 12:31:25.254115 4689 generic.go:334] "Generic (PLEG): container finished" podID="1f39a741-058f-4eeb-96e1-fb5fed6aea30" containerID="2c23415cbcbba7243808944ff05ee9e1312e1c78e0729e442151599d8075f226" exitCode=0
Dec 10 12:31:25 crc kubenswrapper[4689]: I1210 12:31:25.254173 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tsbrb" event={"ID":"1f39a741-058f-4eeb-96e1-fb5fed6aea30","Type":"ContainerDied","Data":"2c23415cbcbba7243808944ff05ee9e1312e1c78e0729e442151599d8075f226"}
Dec 10 12:31:26 crc kubenswrapper[4689]: I1210 12:31:26.262024 4689 generic.go:334] "Generic (PLEG): container finished" podID="1f39a741-058f-4eeb-96e1-fb5fed6aea30" containerID="3013337c5bcabdc15d9c234d0fe39ff61a5ec17bbc26af800f2fe1ad7cb9f641" exitCode=0
Dec 10 12:31:26 crc kubenswrapper[4689]: I1210 12:31:26.262087 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tsbrb" event={"ID":"1f39a741-058f-4eeb-96e1-fb5fed6aea30","Type":"ContainerDied","Data":"3013337c5bcabdc15d9c234d0fe39ff61a5ec17bbc26af800f2fe1ad7cb9f641"}
Dec 10 12:31:27 crc kubenswrapper[4689]: I1210 12:31:27.271845 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tsbrb" event={"ID":"1f39a741-058f-4eeb-96e1-fb5fed6aea30","Type":"ContainerStarted","Data":"eab2a361aa3e513de5a672973c71ac5de8a6893109fe8eb8f43e20fb4e99fe61"}
Dec 10
12:31:27 crc kubenswrapper[4689]: I1210 12:31:27.295212 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-tsbrb" podStartSLOduration=2.745101645 podStartE2EDuration="4.295195281s" podCreationTimestamp="2025-12-10 12:31:23 +0000 UTC" firstStartedPulling="2025-12-10 12:31:25.256380677 +0000 UTC m=+953.044461815" lastFinishedPulling="2025-12-10 12:31:26.806474303 +0000 UTC m=+954.594555451" observedRunningTime="2025-12-10 12:31:27.290480232 +0000 UTC m=+955.078561390" watchObservedRunningTime="2025-12-10 12:31:27.295195281 +0000 UTC m=+955.083276419" Dec 10 12:31:33 crc kubenswrapper[4689]: I1210 12:31:33.385893 4689 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-tsbrb" Dec 10 12:31:33 crc kubenswrapper[4689]: I1210 12:31:33.386438 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-tsbrb" Dec 10 12:31:33 crc kubenswrapper[4689]: I1210 12:31:33.439004 4689 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-tsbrb" Dec 10 12:31:33 crc kubenswrapper[4689]: I1210 12:31:33.939611 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-operator-6476f95cff-ktc7c" Dec 10 12:31:34 crc kubenswrapper[4689]: I1210 12:31:34.415233 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-tsbrb" Dec 10 12:31:34 crc kubenswrapper[4689]: I1210 12:31:34.470518 4689 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-tsbrb"] Dec 10 12:31:36 crc kubenswrapper[4689]: I1210 12:31:36.367572 4689 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-tsbrb" podUID="1f39a741-058f-4eeb-96e1-fb5fed6aea30" containerName="registry-server" containerID="cri-o://eab2a361aa3e513de5a672973c71ac5de8a6893109fe8eb8f43e20fb4e99fe61" gracePeriod=2 Dec 10 12:31:37 crc kubenswrapper[4689]: E1210 12:31:37.106133 4689 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1f39a741_058f_4eeb_96e1_fb5fed6aea30.slice/crio-conmon-eab2a361aa3e513de5a672973c71ac5de8a6893109fe8eb8f43e20fb4e99fe61.scope\": RecentStats: unable to find data in memory cache]" Dec 10 12:31:37 crc kubenswrapper[4689]: I1210 12:31:37.166687 4689 patch_prober.go:28] interesting pod/machine-config-daemon-db6zk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 10 12:31:37 crc kubenswrapper[4689]: I1210 12:31:37.167042 4689 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-db6zk" podUID="a41ebdcd-910f-4669-992d-296e1a92162b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 10 12:31:37 crc kubenswrapper[4689]: I1210 12:31:37.167288 4689 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-db6zk" Dec 10 12:31:37 crc 
kubenswrapper[4689]: I1210 12:31:37.168207 4689 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"6e5c15a4c10b86079bc45e52f2bd74ade92056d772116ec22ae9d0a1a5a11fd9"} pod="openshift-machine-config-operator/machine-config-daemon-db6zk" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 10 12:31:37 crc kubenswrapper[4689]: I1210 12:31:37.168425 4689 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-db6zk" podUID="a41ebdcd-910f-4669-992d-296e1a92162b" containerName="machine-config-daemon" containerID="cri-o://6e5c15a4c10b86079bc45e52f2bd74ade92056d772116ec22ae9d0a1a5a11fd9" gracePeriod=600 Dec 10 12:31:37 crc kubenswrapper[4689]: I1210 12:31:37.378348 4689 generic.go:334] "Generic (PLEG): container finished" podID="a41ebdcd-910f-4669-992d-296e1a92162b" containerID="6e5c15a4c10b86079bc45e52f2bd74ade92056d772116ec22ae9d0a1a5a11fd9" exitCode=0 Dec 10 12:31:37 crc kubenswrapper[4689]: I1210 12:31:37.379345 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-db6zk" event={"ID":"a41ebdcd-910f-4669-992d-296e1a92162b","Type":"ContainerDied","Data":"6e5c15a4c10b86079bc45e52f2bd74ade92056d772116ec22ae9d0a1a5a11fd9"} Dec 10 12:31:37 crc kubenswrapper[4689]: I1210 12:31:37.379389 4689 scope.go:117] "RemoveContainer" containerID="1d20271d773850f4d5b2fbccca8a9391a64d881b36edb8636961a3fdb4367ab8" Dec 10 12:31:37 crc kubenswrapper[4689]: I1210 12:31:37.385908 4689 generic.go:334] "Generic (PLEG): container finished" podID="1f39a741-058f-4eeb-96e1-fb5fed6aea30" containerID="eab2a361aa3e513de5a672973c71ac5de8a6893109fe8eb8f43e20fb4e99fe61" exitCode=0 Dec 10 12:31:37 crc kubenswrapper[4689]: I1210 12:31:37.385936 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tsbrb" event={"ID":"1f39a741-058f-4eeb-96e1-fb5fed6aea30","Type":"ContainerDied","Data":"eab2a361aa3e513de5a672973c71ac5de8a6893109fe8eb8f43e20fb4e99fe61"} Dec 10 12:31:37 crc kubenswrapper[4689]: I1210 12:31:37.913241 4689 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-tsbrb" Dec 10 12:31:38 crc kubenswrapper[4689]: I1210 12:31:38.014221 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1f39a741-058f-4eeb-96e1-fb5fed6aea30-utilities\") pod \"1f39a741-058f-4eeb-96e1-fb5fed6aea30\" (UID: \"1f39a741-058f-4eeb-96e1-fb5fed6aea30\") " Dec 10 12:31:38 crc kubenswrapper[4689]: I1210 12:31:38.014295 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cw5f4\" (UniqueName: \"kubernetes.io/projected/1f39a741-058f-4eeb-96e1-fb5fed6aea30-kube-api-access-cw5f4\") pod \"1f39a741-058f-4eeb-96e1-fb5fed6aea30\" (UID: \"1f39a741-058f-4eeb-96e1-fb5fed6aea30\") " Dec 10 12:31:38 crc kubenswrapper[4689]: I1210 12:31:38.014350 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1f39a741-058f-4eeb-96e1-fb5fed6aea30-catalog-content\") pod \"1f39a741-058f-4eeb-96e1-fb5fed6aea30\" (UID: \"1f39a741-058f-4eeb-96e1-fb5fed6aea30\") " Dec 10 12:31:38 crc kubenswrapper[4689]: I1210 12:31:38.015326 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1f39a741-058f-4eeb-96e1-fb5fed6aea30-utilities" (OuterVolumeSpecName: "utilities") pod "1f39a741-058f-4eeb-96e1-fb5fed6aea30" (UID: "1f39a741-058f-4eeb-96e1-fb5fed6aea30"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 10 12:31:38 crc kubenswrapper[4689]: I1210 12:31:38.020695 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1f39a741-058f-4eeb-96e1-fb5fed6aea30-kube-api-access-cw5f4" (OuterVolumeSpecName: "kube-api-access-cw5f4") pod "1f39a741-058f-4eeb-96e1-fb5fed6aea30" (UID: "1f39a741-058f-4eeb-96e1-fb5fed6aea30"). InnerVolumeSpecName "kube-api-access-cw5f4". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 12:31:38 crc kubenswrapper[4689]: I1210 12:31:38.044903 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1f39a741-058f-4eeb-96e1-fb5fed6aea30-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1f39a741-058f-4eeb-96e1-fb5fed6aea30" (UID: "1f39a741-058f-4eeb-96e1-fb5fed6aea30"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 10 12:31:38 crc kubenswrapper[4689]: I1210 12:31:38.116054 4689 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cw5f4\" (UniqueName: \"kubernetes.io/projected/1f39a741-058f-4eeb-96e1-fb5fed6aea30-kube-api-access-cw5f4\") on node \"crc\" DevicePath \"\"" Dec 10 12:31:38 crc kubenswrapper[4689]: I1210 12:31:38.116125 4689 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1f39a741-058f-4eeb-96e1-fb5fed6aea30-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 10 12:31:38 crc kubenswrapper[4689]: I1210 12:31:38.116882 4689 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1f39a741-058f-4eeb-96e1-fb5fed6aea30-utilities\") on node \"crc\" DevicePath \"\"" Dec 10 12:31:38 crc kubenswrapper[4689]: I1210 12:31:38.395303 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tsbrb" event={"ID":"1f39a741-058f-4eeb-96e1-fb5fed6aea30","Type":"ContainerDied","Data":"217f893569d45109b4baba76f5fdea899352623552f5423310b9c80ac6257611"} Dec 10 12:31:38 crc kubenswrapper[4689]: I1210 12:31:38.395362 4689 scope.go:117] "RemoveContainer" containerID="eab2a361aa3e513de5a672973c71ac5de8a6893109fe8eb8f43e20fb4e99fe61" Dec 10 12:31:38 crc kubenswrapper[4689]: I1210 12:31:38.395656 4689 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-tsbrb" Dec 10 12:31:38 crc kubenswrapper[4689]: I1210 12:31:38.398206 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-db6zk" event={"ID":"a41ebdcd-910f-4669-992d-296e1a92162b","Type":"ContainerStarted","Data":"bf7bbf9875a5b9cc37e5a62ace29b6dd6e4de888067fb82c65a8956ea2149bad"} Dec 10 12:31:38 crc kubenswrapper[4689]: I1210 12:31:38.422125 4689 scope.go:117] "RemoveContainer" containerID="3013337c5bcabdc15d9c234d0fe39ff61a5ec17bbc26af800f2fe1ad7cb9f641" Dec 10 12:31:38 crc kubenswrapper[4689]: I1210 12:31:38.445723 4689 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-tsbrb"] Dec 10 12:31:38 crc kubenswrapper[4689]: I1210 12:31:38.449401 4689 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-tsbrb"] Dec 10 12:31:38 crc kubenswrapper[4689]: I1210 12:31:38.463146 4689 scope.go:117] "RemoveContainer" containerID="2c23415cbcbba7243808944ff05ee9e1312e1c78e0729e442151599d8075f226" Dec 10 12:31:38 crc kubenswrapper[4689]: I1210 12:31:38.515877 4689 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1f39a741-058f-4eeb-96e1-fb5fed6aea30" path="/var/lib/kubelet/pods/1f39a741-058f-4eeb-96e1-fb5fed6aea30/volumes" Dec 10 12:31:53 crc kubenswrapper[4689]: I1210 12:31:53.461427 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7d9dfd778-46w9f"] Dec 10 12:31:53 crc kubenswrapper[4689]: E1210 12:31:53.462262 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1f39a741-058f-4eeb-96e1-fb5fed6aea30" containerName="extract-content" Dec 10 12:31:53 crc kubenswrapper[4689]: I1210 12:31:53.462278 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="1f39a741-058f-4eeb-96e1-fb5fed6aea30" containerName="extract-content" Dec 10 12:31:53 crc kubenswrapper[4689]: E1210 12:31:53.462295 4689 cpu_manager.go:410] "RemoveStaleState: 
removing container" podUID="1f39a741-058f-4eeb-96e1-fb5fed6aea30" containerName="registry-server" Dec 10 12:31:53 crc kubenswrapper[4689]: I1210 12:31:53.462305 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="1f39a741-058f-4eeb-96e1-fb5fed6aea30" containerName="registry-server" Dec 10 12:31:53 crc kubenswrapper[4689]: E1210 12:31:53.462318 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1f39a741-058f-4eeb-96e1-fb5fed6aea30" containerName="extract-utilities" Dec 10 12:31:53 crc kubenswrapper[4689]: I1210 12:31:53.462328 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="1f39a741-058f-4eeb-96e1-fb5fed6aea30" containerName="extract-utilities" Dec 10 12:31:53 crc kubenswrapper[4689]: I1210 12:31:53.462481 4689 memory_manager.go:354] "RemoveStaleState removing state" podUID="1f39a741-058f-4eeb-96e1-fb5fed6aea30" containerName="registry-server" Dec 10 12:31:53 crc kubenswrapper[4689]: I1210 12:31:53.463225 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-46w9f" Dec 10 12:31:53 crc kubenswrapper[4689]: I1210 12:31:53.465518 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"barbican-operator-controller-manager-dockercfg-pbthr" Dec 10 12:31:53 crc kubenswrapper[4689]: I1210 12:31:53.472327 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/cinder-operator-controller-manager-6c677c69b-4hdrw"] Dec 10 12:31:53 crc kubenswrapper[4689]: I1210 12:31:53.473345 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-6c677c69b-4hdrw" Dec 10 12:31:53 crc kubenswrapper[4689]: I1210 12:31:53.475555 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"cinder-operator-controller-manager-dockercfg-wglv2" Dec 10 12:31:53 crc kubenswrapper[4689]: I1210 12:31:53.481204 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7d9dfd778-46w9f"] Dec 10 12:31:53 crc kubenswrapper[4689]: I1210 12:31:53.493863 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/designate-operator-controller-manager-697fb699cf-dtclt"] Dec 10 12:31:53 crc kubenswrapper[4689]: I1210 12:31:53.495515 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-697fb699cf-dtclt" Dec 10 12:31:53 crc kubenswrapper[4689]: I1210 12:31:53.504221 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"designate-operator-controller-manager-dockercfg-4d2rf" Dec 10 12:31:53 crc kubenswrapper[4689]: I1210 12:31:53.506467 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-6c677c69b-4hdrw"] Dec 10 12:31:53 crc kubenswrapper[4689]: I1210 12:31:53.526920 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-697fb699cf-dtclt"] Dec 10 12:31:53 crc kubenswrapper[4689]: I1210 12:31:53.536055 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/glance-operator-controller-manager-5697bb5779-tzbn5"] Dec 10 12:31:53 crc kubenswrapper[4689]: I1210 12:31:53.537080 4689 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-5697bb5779-tzbn5" Dec 10 12:31:53 crc kubenswrapper[4689]: I1210 12:31:53.540502 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"glance-operator-controller-manager-dockercfg-6qs2n" Dec 10 12:31:53 crc kubenswrapper[4689]: I1210 12:31:53.555502 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/heat-operator-controller-manager-5f64f6f8bb-6fmnw"] Dec 10 12:31:53 crc kubenswrapper[4689]: I1210 12:31:53.556755 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-6fmnw" Dec 10 12:31:53 crc kubenswrapper[4689]: I1210 12:31:53.564500 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"heat-operator-controller-manager-dockercfg-72hrc" Dec 10 12:31:53 crc kubenswrapper[4689]: I1210 12:31:53.566854 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-5697bb5779-tzbn5"] Dec 10 12:31:53 crc kubenswrapper[4689]: I1210 12:31:53.572404 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-5f64f6f8bb-6fmnw"] Dec 10 12:31:53 crc kubenswrapper[4689]: I1210 12:31:53.589015 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/horizon-operator-controller-manager-68c6d99b8f-gk2zb"] Dec 10 12:31:53 crc kubenswrapper[4689]: I1210 12:31:53.589942 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-gk2zb" Dec 10 12:31:53 crc kubenswrapper[4689]: I1210 12:31:53.594346 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"horizon-operator-controller-manager-dockercfg-zj942" Dec 10 12:31:53 crc kubenswrapper[4689]: I1210 12:31:53.601789 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-68c6d99b8f-gk2zb"] Dec 10 12:31:53 crc kubenswrapper[4689]: I1210 12:31:53.608593 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/infra-operator-controller-manager-78d48bff9d-lkvwb"] Dec 10 12:31:53 crc kubenswrapper[4689]: I1210 12:31:53.609603 4689 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-78d48bff9d-lkvwb" Dec 10 12:31:53 crc kubenswrapper[4689]: I1210 12:31:53.617225 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-g7jz9" Dec 10 12:31:53 crc kubenswrapper[4689]: I1210 12:31:53.617377 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-webhook-server-cert" Dec 10 12:31:53 crc kubenswrapper[4689]: I1210 12:31:53.621365 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2pm92\" (UniqueName: \"kubernetes.io/projected/e6db2b03-cb28-4161-bae6-6eecce28c871-kube-api-access-2pm92\") pod \"cinder-operator-controller-manager-6c677c69b-4hdrw\" (UID: \"e6db2b03-cb28-4161-bae6-6eecce28c871\") " pod="openstack-operators/cinder-operator-controller-manager-6c677c69b-4hdrw" Dec 10 12:31:53 crc kubenswrapper[4689]: I1210 12:31:53.621418 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qd526\" (UniqueName: \"kubernetes.io/projected/ec9c74bb-c8dc-409b-817c-74963a395df8-kube-api-access-qd526\") pod \"barbican-operator-controller-manager-7d9dfd778-46w9f\" (UID: \"ec9c74bb-c8dc-409b-817c-74963a395df8\") " pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-46w9f" Dec 10 12:31:53 crc kubenswrapper[4689]: I1210 12:31:53.621457 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wb4jg\" (UniqueName: \"kubernetes.io/projected/0b413153-162d-46ad-9b9a-b44869127ee7-kube-api-access-wb4jg\") pod \"designate-operator-controller-manager-697fb699cf-dtclt\" (UID: \"0b413153-162d-46ad-9b9a-b44869127ee7\") " pod="openstack-operators/designate-operator-controller-manager-697fb699cf-dtclt" Dec 10 12:31:53 crc kubenswrapper[4689]: I1210 12:31:53.626753 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-78d48bff9d-lkvwb"] Dec 10 12:31:53 crc kubenswrapper[4689]: I1210 12:31:53.634813 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ironic-operator-controller-manager-69f4484999-lb8n4"] Dec 10 12:31:53 crc kubenswrapper[4689]: I1210 12:31:53.636020 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-69f4484999-lb8n4" Dec 10 12:31:53 crc kubenswrapper[4689]: I1210 12:31:53.640365 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/keystone-operator-controller-manager-59fd99cc6f-gdgcj"] Dec 10 12:31:53 crc kubenswrapper[4689]: I1210 12:31:53.642439 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-59fd99cc6f-gdgcj" Dec 10 12:31:53 crc kubenswrapper[4689]: I1210 12:31:53.643481 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/manila-operator-controller-manager-5b5fd79c9c-n69ks"] Dec 10 12:31:53 crc kubenswrapper[4689]: I1210 12:31:53.644586 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ironic-operator-controller-manager-dockercfg-f8746" Dec 10 12:31:53 crc kubenswrapper[4689]: I1210 12:31:53.644898 4689 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-5b5fd79c9c-n69ks" Dec 10 12:31:53 crc kubenswrapper[4689]: I1210 12:31:53.646824 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-controller-manager-dockercfg-784td" Dec 10 12:31:53 crc kubenswrapper[4689]: I1210 12:31:53.647326 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"manila-operator-controller-manager-dockercfg-2hfpl" Dec 10 12:31:53 crc kubenswrapper[4689]: I1210 12:31:53.647775 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-69f4484999-lb8n4"] Dec 10 12:31:53 crc kubenswrapper[4689]: I1210 12:31:53.657786 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-59fd99cc6f-gdgcj"] Dec 10 12:31:53 crc kubenswrapper[4689]: I1210 12:31:53.662599 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-5b5fd79c9c-n69ks"] Dec 10 12:31:53 crc kubenswrapper[4689]: I1210 12:31:53.671941 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-79c8c4686c-rwg47"] Dec 10 12:31:53 crc kubenswrapper[4689]: I1210 12:31:53.673177 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-79c8c4686c-rwg47" Dec 10 12:31:53 crc kubenswrapper[4689]: I1210 12:31:53.679563 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-controller-manager-dockercfg-9g828" Dec 10 12:31:53 crc kubenswrapper[4689]: I1210 12:31:53.696365 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-79c8c4686c-rwg47"] Dec 10 12:31:53 crc kubenswrapper[4689]: I1210 12:31:53.726571 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2pm92\" (UniqueName: \"kubernetes.io/projected/e6db2b03-cb28-4161-bae6-6eecce28c871-kube-api-access-2pm92\") pod \"cinder-operator-controller-manager-6c677c69b-4hdrw\" (UID: \"e6db2b03-cb28-4161-bae6-6eecce28c871\") " pod="openstack-operators/cinder-operator-controller-manager-6c677c69b-4hdrw" Dec 10 12:31:53 crc kubenswrapper[4689]: I1210 12:31:53.728333 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6kqld\" (UniqueName: \"kubernetes.io/projected/13ec50ac-3e46-4615-88f9-070c7a647158-kube-api-access-6kqld\") pod \"infra-operator-controller-manager-78d48bff9d-lkvwb\" (UID: \"13ec50ac-3e46-4615-88f9-070c7a647158\") " pod="openstack-operators/infra-operator-controller-manager-78d48bff9d-lkvwb" Dec 10 12:31:53 crc kubenswrapper[4689]: I1210 12:31:53.728385 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lg2pn\" (UniqueName: \"kubernetes.io/projected/e1c471e3-8ecc-4db9-95c0-a4a13e287aba-kube-api-access-lg2pn\") pod \"ironic-operator-controller-manager-69f4484999-lb8n4\" (UID: \"e1c471e3-8ecc-4db9-95c0-a4a13e287aba\") " pod="openstack-operators/ironic-operator-controller-manager-69f4484999-lb8n4" Dec 10 12:31:53 crc kubenswrapper[4689]: I1210 12:31:53.728419 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rmhhs\" (UniqueName: 
\"kubernetes.io/projected/c346bbde-239c-4f76-91da-c4116ad0a487-kube-api-access-rmhhs\") pod \"keystone-operator-controller-manager-59fd99cc6f-gdgcj\" (UID: \"c346bbde-239c-4f76-91da-c4116ad0a487\") " pod="openstack-operators/keystone-operator-controller-manager-59fd99cc6f-gdgcj" Dec 10 12:31:53 crc kubenswrapper[4689]: I1210 12:31:53.728450 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/13ec50ac-3e46-4615-88f9-070c7a647158-cert\") pod \"infra-operator-controller-manager-78d48bff9d-lkvwb\" (UID: \"13ec50ac-3e46-4615-88f9-070c7a647158\") " pod="openstack-operators/infra-operator-controller-manager-78d48bff9d-lkvwb" Dec 10 12:31:53 crc kubenswrapper[4689]: I1210 12:31:53.728474 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hzp4z\" (UniqueName: \"kubernetes.io/projected/670989a1-8b21-473a-8624-862930a7d70b-kube-api-access-hzp4z\") pod \"glance-operator-controller-manager-5697bb5779-tzbn5\" (UID: \"670989a1-8b21-473a-8624-862930a7d70b\") " pod="openstack-operators/glance-operator-controller-manager-5697bb5779-tzbn5" Dec 10 12:31:53 crc kubenswrapper[4689]: I1210 12:31:53.728494 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qd526\" (UniqueName: \"kubernetes.io/projected/ec9c74bb-c8dc-409b-817c-74963a395df8-kube-api-access-qd526\") pod \"barbican-operator-controller-manager-7d9dfd778-46w9f\" (UID: \"ec9c74bb-c8dc-409b-817c-74963a395df8\") " pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-46w9f" Dec 10 12:31:53 crc kubenswrapper[4689]: I1210 12:31:53.728530 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8bvsx\" (UniqueName: \"kubernetes.io/projected/e9f4ae72-b49e-4144-a49b-72c2bbd1b77c-kube-api-access-8bvsx\") pod \"heat-operator-controller-manager-5f64f6f8bb-6fmnw\" (UID: \"e9f4ae72-b49e-4144-a49b-72c2bbd1b77c\") " pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-6fmnw" Dec 10 12:31:53 crc kubenswrapper[4689]: I1210 12:31:53.728581 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wb4jg\" (UniqueName: \"kubernetes.io/projected/0b413153-162d-46ad-9b9a-b44869127ee7-kube-api-access-wb4jg\") pod \"designate-operator-controller-manager-697fb699cf-dtclt\" (UID: \"0b413153-162d-46ad-9b9a-b44869127ee7\") " pod="openstack-operators/designate-operator-controller-manager-697fb699cf-dtclt" Dec 10 12:31:53 crc kubenswrapper[4689]: I1210 12:31:53.728618 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wz7c6\" (UniqueName: \"kubernetes.io/projected/8209377b-970c-4faf-ac5b-1e429d2bdccd-kube-api-access-wz7c6\") pod \"mariadb-operator-controller-manager-79c8c4686c-rwg47\" (UID: \"8209377b-970c-4faf-ac5b-1e429d2bdccd\") " pod="openstack-operators/mariadb-operator-controller-manager-79c8c4686c-rwg47" Dec 10 12:31:53 crc kubenswrapper[4689]: I1210 12:31:53.728663 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-td8pt\" (UniqueName: \"kubernetes.io/projected/4dc52122-5456-453a-9d5f-d2fce910bb61-kube-api-access-td8pt\") pod \"manila-operator-controller-manager-5b5fd79c9c-n69ks\" (UID: \"4dc52122-5456-453a-9d5f-d2fce910bb61\") " pod="openstack-operators/manila-operator-controller-manager-5b5fd79c9c-n69ks" 
Dec 10 12:31:53 crc kubenswrapper[4689]: I1210 12:31:53.728701 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pj9pb\" (UniqueName: \"kubernetes.io/projected/c70e0866-b017-4945-9e4b-c69eec327948-kube-api-access-pj9pb\") pod \"horizon-operator-controller-manager-68c6d99b8f-gk2zb\" (UID: \"c70e0866-b017-4945-9e4b-c69eec327948\") " pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-gk2zb" Dec 10 12:31:53 crc kubenswrapper[4689]: I1210 12:31:53.762260 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wb4jg\" (UniqueName: \"kubernetes.io/projected/0b413153-162d-46ad-9b9a-b44869127ee7-kube-api-access-wb4jg\") pod \"designate-operator-controller-manager-697fb699cf-dtclt\" (UID: \"0b413153-162d-46ad-9b9a-b44869127ee7\") " pod="openstack-operators/designate-operator-controller-manager-697fb699cf-dtclt" Dec 10 12:31:53 crc kubenswrapper[4689]: I1210 12:31:53.770846 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qd526\" (UniqueName: \"kubernetes.io/projected/ec9c74bb-c8dc-409b-817c-74963a395df8-kube-api-access-qd526\") pod \"barbican-operator-controller-manager-7d9dfd778-46w9f\" (UID: \"ec9c74bb-c8dc-409b-817c-74963a395df8\") " pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-46w9f" Dec 10 12:31:53 crc kubenswrapper[4689]: I1210 12:31:53.773345 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2pm92\" (UniqueName: \"kubernetes.io/projected/e6db2b03-cb28-4161-bae6-6eecce28c871-kube-api-access-2pm92\") pod \"cinder-operator-controller-manager-6c677c69b-4hdrw\" (UID: \"e6db2b03-cb28-4161-bae6-6eecce28c871\") " pod="openstack-operators/cinder-operator-controller-manager-6c677c69b-4hdrw" Dec 10 12:31:53 crc kubenswrapper[4689]: I1210 12:31:53.793553 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-46w9f" Dec 10 12:31:53 crc kubenswrapper[4689]: I1210 12:31:53.799294 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-6c677c69b-4hdrw" Dec 10 12:31:53 crc kubenswrapper[4689]: I1210 12:31:53.813622 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-66xh9"] Dec 10 12:31:53 crc kubenswrapper[4689]: I1210 12:31:53.814656 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-66xh9" Dec 10 12:31:53 crc kubenswrapper[4689]: I1210 12:31:53.825128 4689 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-697fb699cf-dtclt" Dec 10 12:31:53 crc kubenswrapper[4689]: I1210 12:31:53.828142 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"neutron-operator-controller-manager-dockercfg-ptcp4" Dec 10 12:31:53 crc kubenswrapper[4689]: I1210 12:31:53.829475 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-td8pt\" (UniqueName: \"kubernetes.io/projected/4dc52122-5456-453a-9d5f-d2fce910bb61-kube-api-access-td8pt\") pod \"manila-operator-controller-manager-5b5fd79c9c-n69ks\" (UID: \"4dc52122-5456-453a-9d5f-d2fce910bb61\") " pod="openstack-operators/manila-operator-controller-manager-5b5fd79c9c-n69ks" Dec 10 12:31:53 crc kubenswrapper[4689]: I1210 12:31:53.829866 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pj9pb\" (UniqueName: \"kubernetes.io/projected/c70e0866-b017-4945-9e4b-c69eec327948-kube-api-access-pj9pb\") pod \"horizon-operator-controller-manager-68c6d99b8f-gk2zb\" (UID: \"c70e0866-b017-4945-9e4b-c69eec327948\") " pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-gk2zb" Dec 10 12:31:53 crc kubenswrapper[4689]: I1210 12:31:53.829914 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6kqld\" (UniqueName: \"kubernetes.io/projected/13ec50ac-3e46-4615-88f9-070c7a647158-kube-api-access-6kqld\") pod \"infra-operator-controller-manager-78d48bff9d-lkvwb\" (UID: \"13ec50ac-3e46-4615-88f9-070c7a647158\") " pod="openstack-operators/infra-operator-controller-manager-78d48bff9d-lkvwb" Dec 10 12:31:53 crc kubenswrapper[4689]: I1210 12:31:53.829939 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lg2pn\" (UniqueName: \"kubernetes.io/projected/e1c471e3-8ecc-4db9-95c0-a4a13e287aba-kube-api-access-lg2pn\") pod \"ironic-operator-controller-manager-69f4484999-lb8n4\" (UID: \"e1c471e3-8ecc-4db9-95c0-a4a13e287aba\") " pod="openstack-operators/ironic-operator-controller-manager-69f4484999-lb8n4" Dec 10 12:31:53 crc kubenswrapper[4689]: I1210 12:31:53.829967 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rmhhs\" (UniqueName: \"kubernetes.io/projected/c346bbde-239c-4f76-91da-c4116ad0a487-kube-api-access-rmhhs\") pod \"keystone-operator-controller-manager-59fd99cc6f-gdgcj\" (UID: \"c346bbde-239c-4f76-91da-c4116ad0a487\") " pod="openstack-operators/keystone-operator-controller-manager-59fd99cc6f-gdgcj" Dec 10 12:31:53 crc kubenswrapper[4689]: I1210 12:31:53.830016 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/13ec50ac-3e46-4615-88f9-070c7a647158-cert\") pod \"infra-operator-controller-manager-78d48bff9d-lkvwb\" (UID: \"13ec50ac-3e46-4615-88f9-070c7a647158\") " pod="openstack-operators/infra-operator-controller-manager-78d48bff9d-lkvwb" Dec 10 12:31:53 crc kubenswrapper[4689]: I1210 12:31:53.830044 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hzp4z\" (UniqueName: \"kubernetes.io/projected/670989a1-8b21-473a-8624-862930a7d70b-kube-api-access-hzp4z\") pod \"glance-operator-controller-manager-5697bb5779-tzbn5\" (UID: \"670989a1-8b21-473a-8624-862930a7d70b\") " pod="openstack-operators/glance-operator-controller-manager-5697bb5779-tzbn5" Dec 10 12:31:53 crc kubenswrapper[4689]: I1210 12:31:53.830077 
4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bpt6x\" (UniqueName: \"kubernetes.io/projected/7ca7fc40-0bb8-402f-9e73-d1d267340b28-kube-api-access-bpt6x\") pod \"neutron-operator-controller-manager-5fdfd5b6b5-66xh9\" (UID: \"7ca7fc40-0bb8-402f-9e73-d1d267340b28\") " pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-66xh9" Dec 10 12:31:53 crc kubenswrapper[4689]: I1210 12:31:53.830109 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8bvsx\" (UniqueName: \"kubernetes.io/projected/e9f4ae72-b49e-4144-a49b-72c2bbd1b77c-kube-api-access-8bvsx\") pod \"heat-operator-controller-manager-5f64f6f8bb-6fmnw\" (UID: \"e9f4ae72-b49e-4144-a49b-72c2bbd1b77c\") " pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-6fmnw" Dec 10 12:31:53 crc kubenswrapper[4689]: I1210 12:31:53.830162 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wz7c6\" (UniqueName: \"kubernetes.io/projected/8209377b-970c-4faf-ac5b-1e429d2bdccd-kube-api-access-wz7c6\") pod \"mariadb-operator-controller-manager-79c8c4686c-rwg47\" (UID: \"8209377b-970c-4faf-ac5b-1e429d2bdccd\") " pod="openstack-operators/mariadb-operator-controller-manager-79c8c4686c-rwg47" Dec 10 12:31:53 crc kubenswrapper[4689]: E1210 12:31:53.831265 4689 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Dec 10 12:31:53 crc kubenswrapper[4689]: E1210 12:31:53.831326 4689 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/13ec50ac-3e46-4615-88f9-070c7a647158-cert podName:13ec50ac-3e46-4615-88f9-070c7a647158 nodeName:}" failed. No retries permitted until 2025-12-10 12:31:54.331307229 +0000 UTC m=+982.119388367 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/13ec50ac-3e46-4615-88f9-070c7a647158-cert") pod "infra-operator-controller-manager-78d48bff9d-lkvwb" (UID: "13ec50ac-3e46-4615-88f9-070c7a647158") : secret "infra-operator-webhook-server-cert" not found Dec 10 12:31:53 crc kubenswrapper[4689]: I1210 12:31:53.839891 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/nova-operator-controller-manager-697bc559fc-qlxtq"] Dec 10 12:31:53 crc kubenswrapper[4689]: I1210 12:31:53.841221 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-697bc559fc-qlxtq" Dec 10 12:31:53 crc kubenswrapper[4689]: I1210 12:31:53.848064 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/octavia-operator-controller-manager-998648c74-pwvc2"] Dec 10 12:31:53 crc kubenswrapper[4689]: I1210 12:31:53.849418 4689 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-998648c74-pwvc2" Dec 10 12:31:53 crc kubenswrapper[4689]: I1210 12:31:53.853395 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-66xh9"] Dec 10 12:31:53 crc kubenswrapper[4689]: I1210 12:31:53.856302 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6kqld\" (UniqueName: \"kubernetes.io/projected/13ec50ac-3e46-4615-88f9-070c7a647158-kube-api-access-6kqld\") pod \"infra-operator-controller-manager-78d48bff9d-lkvwb\" (UID: \"13ec50ac-3e46-4615-88f9-070c7a647158\") " pod="openstack-operators/infra-operator-controller-manager-78d48bff9d-lkvwb" Dec 10 12:31:53 crc kubenswrapper[4689]: I1210 12:31:53.857097 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wz7c6\" (UniqueName: \"kubernetes.io/projected/8209377b-970c-4faf-ac5b-1e429d2bdccd-kube-api-access-wz7c6\") pod \"mariadb-operator-controller-manager-79c8c4686c-rwg47\" (UID: \"8209377b-970c-4faf-ac5b-1e429d2bdccd\") " pod="openstack-operators/mariadb-operator-controller-manager-79c8c4686c-rwg47" Dec 10 12:31:53 crc kubenswrapper[4689]: I1210 12:31:53.863698 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"octavia-operator-controller-manager-dockercfg-cwrxw" Dec 10 12:31:53 crc kubenswrapper[4689]: I1210 12:31:53.863732 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"nova-operator-controller-manager-dockercfg-cbrf9" Dec 10 12:31:53 crc kubenswrapper[4689]: I1210 12:31:53.863958 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rmhhs\" (UniqueName: \"kubernetes.io/projected/c346bbde-239c-4f76-91da-c4116ad0a487-kube-api-access-rmhhs\") pod \"keystone-operator-controller-manager-59fd99cc6f-gdgcj\" (UID: \"c346bbde-239c-4f76-91da-c4116ad0a487\") " pod="openstack-operators/keystone-operator-controller-manager-59fd99cc6f-gdgcj" Dec 10 12:31:53 crc kubenswrapper[4689]: I1210 12:31:53.864418 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-td8pt\" (UniqueName: \"kubernetes.io/projected/4dc52122-5456-453a-9d5f-d2fce910bb61-kube-api-access-td8pt\") pod \"manila-operator-controller-manager-5b5fd79c9c-n69ks\" (UID: \"4dc52122-5456-453a-9d5f-d2fce910bb61\") " pod="openstack-operators/manila-operator-controller-manager-5b5fd79c9c-n69ks" Dec 10 12:31:53 crc kubenswrapper[4689]: I1210 12:31:53.867467 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hzp4z\" (UniqueName: \"kubernetes.io/projected/670989a1-8b21-473a-8624-862930a7d70b-kube-api-access-hzp4z\") pod \"glance-operator-controller-manager-5697bb5779-tzbn5\" (UID: \"670989a1-8b21-473a-8624-862930a7d70b\") " pod="openstack-operators/glance-operator-controller-manager-5697bb5779-tzbn5" Dec 10 12:31:53 crc kubenswrapper[4689]: I1210 12:31:53.876555 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lg2pn\" (UniqueName: \"kubernetes.io/projected/e1c471e3-8ecc-4db9-95c0-a4a13e287aba-kube-api-access-lg2pn\") pod \"ironic-operator-controller-manager-69f4484999-lb8n4\" (UID: \"e1c471e3-8ecc-4db9-95c0-a4a13e287aba\") " pod="openstack-operators/ironic-operator-controller-manager-69f4484999-lb8n4" Dec 10 12:31:53 crc kubenswrapper[4689]: I1210 12:31:53.880771 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"kube-api-access-8bvsx\" (UniqueName: \"kubernetes.io/projected/e9f4ae72-b49e-4144-a49b-72c2bbd1b77c-kube-api-access-8bvsx\") pod \"heat-operator-controller-manager-5f64f6f8bb-6fmnw\" (UID: \"e9f4ae72-b49e-4144-a49b-72c2bbd1b77c\") " pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-6fmnw" Dec 10 12:31:53 crc kubenswrapper[4689]: I1210 12:31:53.888609 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-697bc559fc-qlxtq"] Dec 10 12:31:53 crc kubenswrapper[4689]: I1210 12:31:53.902040 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-998648c74-pwvc2"] Dec 10 12:31:53 crc kubenswrapper[4689]: I1210 12:31:53.904203 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pj9pb\" (UniqueName: \"kubernetes.io/projected/c70e0866-b017-4945-9e4b-c69eec327948-kube-api-access-pj9pb\") pod \"horizon-operator-controller-manager-68c6d99b8f-gk2zb\" (UID: \"c70e0866-b017-4945-9e4b-c69eec327948\") " pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-gk2zb" Dec 10 12:31:53 crc kubenswrapper[4689]: I1210 12:31:53.915459 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ovn-operator-controller-manager-b6456fdb6-v7wvl"] Dec 10 12:31:53 crc kubenswrapper[4689]: I1210 12:31:53.916876 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-v7wvl" Dec 10 12:31:53 crc kubenswrapper[4689]: I1210 12:31:53.917625 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-gk2zb" Dec 10 12:31:53 crc kubenswrapper[4689]: I1210 12:31:53.918430 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ovn-operator-controller-manager-dockercfg-82rtc" Dec 10 12:31:53 crc kubenswrapper[4689]: I1210 12:31:53.921136 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-84b575879fwx7br"] Dec 10 12:31:53 crc kubenswrapper[4689]: I1210 12:31:53.922280 4689 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-84b575879fwx7br" Dec 10 12:31:53 crc kubenswrapper[4689]: I1210 12:31:53.926736 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-webhook-server-cert" Dec 10 12:31:53 crc kubenswrapper[4689]: I1210 12:31:53.928935 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-b6456fdb6-v7wvl"] Dec 10 12:31:53 crc kubenswrapper[4689]: I1210 12:31:53.930543 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qfckw\" (UniqueName: \"kubernetes.io/projected/dd93e8ba-afd7-4d03-917f-873352cfefc8-kube-api-access-qfckw\") pod \"nova-operator-controller-manager-697bc559fc-qlxtq\" (UID: \"dd93e8ba-afd7-4d03-917f-873352cfefc8\") " pod="openstack-operators/nova-operator-controller-manager-697bc559fc-qlxtq" Dec 10 12:31:53 crc kubenswrapper[4689]: I1210 12:31:53.930614 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5pcjk\" (UniqueName: \"kubernetes.io/projected/8c68b5ee-e36f-428f-8b70-480581f7e120-kube-api-access-5pcjk\") pod \"ovn-operator-controller-manager-b6456fdb6-v7wvl\" (UID: \"8c68b5ee-e36f-428f-8b70-480581f7e120\") " pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-v7wvl" Dec 10 12:31:53 crc kubenswrapper[4689]: I1210 12:31:53.930646 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/3ab4495a-ff87-4ae7-b343-17ff4dcfeb5a-cert\") pod \"openstack-baremetal-operator-controller-manager-84b575879fwx7br\" (UID: \"3ab4495a-ff87-4ae7-b343-17ff4dcfeb5a\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-84b575879fwx7br" Dec 10 12:31:53 crc kubenswrapper[4689]: I1210 12:31:53.930667 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-hsw5d" Dec 10 12:31:53 crc kubenswrapper[4689]: I1210 12:31:53.932034 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dbkgt\" (UniqueName: \"kubernetes.io/projected/04286780-356c-4f76-9168-5a80c36d2aa3-kube-api-access-dbkgt\") pod \"octavia-operator-controller-manager-998648c74-pwvc2\" (UID: \"04286780-356c-4f76-9168-5a80c36d2aa3\") " pod="openstack-operators/octavia-operator-controller-manager-998648c74-pwvc2" Dec 10 12:31:53 crc kubenswrapper[4689]: I1210 12:31:53.932099 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bpt6x\" (UniqueName: \"kubernetes.io/projected/7ca7fc40-0bb8-402f-9e73-d1d267340b28-kube-api-access-bpt6x\") pod \"neutron-operator-controller-manager-5fdfd5b6b5-66xh9\" (UID: \"7ca7fc40-0bb8-402f-9e73-d1d267340b28\") " pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-66xh9" Dec 10 12:31:53 crc kubenswrapper[4689]: I1210 12:31:53.932355 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-66h9k\" (UniqueName: \"kubernetes.io/projected/3ab4495a-ff87-4ae7-b343-17ff4dcfeb5a-kube-api-access-66h9k\") pod \"openstack-baremetal-operator-controller-manager-84b575879fwx7br\" (UID: \"3ab4495a-ff87-4ae7-b343-17ff4dcfeb5a\") " 
pod="openstack-operators/openstack-baremetal-operator-controller-manager-84b575879fwx7br" Dec 10 12:31:53 crc kubenswrapper[4689]: I1210 12:31:53.936182 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-84b575879fwx7br"] Dec 10 12:31:53 crc kubenswrapper[4689]: I1210 12:31:53.943132 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/placement-operator-controller-manager-78f8948974-44v2s"] Dec 10 12:31:53 crc kubenswrapper[4689]: I1210 12:31:53.950154 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-78f8948974-44v2s"] Dec 10 12:31:53 crc kubenswrapper[4689]: I1210 12:31:53.950254 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-78f8948974-44v2s" Dec 10 12:31:53 crc kubenswrapper[4689]: I1210 12:31:53.953706 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"placement-operator-controller-manager-dockercfg-67552" Dec 10 12:31:53 crc kubenswrapper[4689]: I1210 12:31:53.959374 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bpt6x\" (UniqueName: \"kubernetes.io/projected/7ca7fc40-0bb8-402f-9e73-d1d267340b28-kube-api-access-bpt6x\") pod \"neutron-operator-controller-manager-5fdfd5b6b5-66xh9\" (UID: \"7ca7fc40-0bb8-402f-9e73-d1d267340b28\") " pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-66xh9" Dec 10 12:31:53 crc kubenswrapper[4689]: I1210 12:31:53.976686 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/swift-operator-controller-manager-9d58d64bc-d995k"] Dec 10 12:31:53 crc kubenswrapper[4689]: I1210 12:31:53.978551 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-9d58d64bc-d995k" Dec 10 12:31:53 crc kubenswrapper[4689]: I1210 12:31:53.980372 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"swift-operator-controller-manager-dockercfg-rcfgd" Dec 10 12:31:53 crc kubenswrapper[4689]: I1210 12:31:53.982202 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-69f4484999-lb8n4" Dec 10 12:31:53 crc kubenswrapper[4689]: I1210 12:31:53.994395 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-59fd99cc6f-gdgcj" Dec 10 12:31:54 crc kubenswrapper[4689]: I1210 12:31:53.997898 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-9d58d64bc-d995k"] Dec 10 12:31:54 crc kubenswrapper[4689]: I1210 12:31:54.025587 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-58d5ff84df-8xrb5"] Dec 10 12:31:54 crc kubenswrapper[4689]: I1210 12:31:54.026599 4689 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-58d5ff84df-8xrb5" Dec 10 12:31:54 crc kubenswrapper[4689]: I1210 12:31:54.028025 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"telemetry-operator-controller-manager-dockercfg-sqz4c" Dec 10 12:31:54 crc kubenswrapper[4689]: I1210 12:31:54.037017 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-58d5ff84df-8xrb5"] Dec 10 12:31:54 crc kubenswrapper[4689]: I1210 12:31:54.037584 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qfckw\" (UniqueName: \"kubernetes.io/projected/dd93e8ba-afd7-4d03-917f-873352cfefc8-kube-api-access-qfckw\") pod \"nova-operator-controller-manager-697bc559fc-qlxtq\" (UID: \"dd93e8ba-afd7-4d03-917f-873352cfefc8\") " pod="openstack-operators/nova-operator-controller-manager-697bc559fc-qlxtq" Dec 10 12:31:54 crc kubenswrapper[4689]: I1210 12:31:54.037632 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c84vg\" (UniqueName: \"kubernetes.io/projected/e616a259-dbf9-469a-987e-b3a6f36044a4-kube-api-access-c84vg\") pod \"placement-operator-controller-manager-78f8948974-44v2s\" (UID: \"e616a259-dbf9-469a-987e-b3a6f36044a4\") " pod="openstack-operators/placement-operator-controller-manager-78f8948974-44v2s" Dec 10 12:31:54 crc kubenswrapper[4689]: I1210 12:31:54.037657 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5pcjk\" (UniqueName: \"kubernetes.io/projected/8c68b5ee-e36f-428f-8b70-480581f7e120-kube-api-access-5pcjk\") pod \"ovn-operator-controller-manager-b6456fdb6-v7wvl\" (UID: \"8c68b5ee-e36f-428f-8b70-480581f7e120\") " pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-v7wvl" Dec 10 12:31:54 crc kubenswrapper[4689]: I1210 12:31:54.037681 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/3ab4495a-ff87-4ae7-b343-17ff4dcfeb5a-cert\") pod \"openstack-baremetal-operator-controller-manager-84b575879fwx7br\" (UID: \"3ab4495a-ff87-4ae7-b343-17ff4dcfeb5a\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-84b575879fwx7br" Dec 10 12:31:54 crc kubenswrapper[4689]: I1210 12:31:54.037709 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-46gsx\" (UniqueName: \"kubernetes.io/projected/08c0b74f-cdff-4fc6-b2cf-4d61fdd4177d-kube-api-access-46gsx\") pod \"swift-operator-controller-manager-9d58d64bc-d995k\" (UID: \"08c0b74f-cdff-4fc6-b2cf-4d61fdd4177d\") " pod="openstack-operators/swift-operator-controller-manager-9d58d64bc-d995k" Dec 10 12:31:54 crc kubenswrapper[4689]: I1210 12:31:54.037742 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dbkgt\" (UniqueName: \"kubernetes.io/projected/04286780-356c-4f76-9168-5a80c36d2aa3-kube-api-access-dbkgt\") pod \"octavia-operator-controller-manager-998648c74-pwvc2\" (UID: \"04286780-356c-4f76-9168-5a80c36d2aa3\") " pod="openstack-operators/octavia-operator-controller-manager-998648c74-pwvc2" Dec 10 12:31:54 crc kubenswrapper[4689]: I1210 12:31:54.037779 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jsxgh\" (UniqueName: 
\"kubernetes.io/projected/9e2487e5-677b-4344-9ab2-d419e03876f2-kube-api-access-jsxgh\") pod \"telemetry-operator-controller-manager-58d5ff84df-8xrb5\" (UID: \"9e2487e5-677b-4344-9ab2-d419e03876f2\") " pod="openstack-operators/telemetry-operator-controller-manager-58d5ff84df-8xrb5" Dec 10 12:31:54 crc kubenswrapper[4689]: I1210 12:31:54.037813 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-66h9k\" (UniqueName: \"kubernetes.io/projected/3ab4495a-ff87-4ae7-b343-17ff4dcfeb5a-kube-api-access-66h9k\") pod \"openstack-baremetal-operator-controller-manager-84b575879fwx7br\" (UID: \"3ab4495a-ff87-4ae7-b343-17ff4dcfeb5a\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-84b575879fwx7br" Dec 10 12:31:54 crc kubenswrapper[4689]: I1210 12:31:54.037821 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-79c8c4686c-rwg47" Dec 10 12:31:54 crc kubenswrapper[4689]: E1210 12:31:54.038484 4689 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Dec 10 12:31:54 crc kubenswrapper[4689]: E1210 12:31:54.038900 4689 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3ab4495a-ff87-4ae7-b343-17ff4dcfeb5a-cert podName:3ab4495a-ff87-4ae7-b343-17ff4dcfeb5a nodeName:}" failed. No retries permitted until 2025-12-10 12:31:54.538516018 +0000 UTC m=+982.326597156 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/3ab4495a-ff87-4ae7-b343-17ff4dcfeb5a-cert") pod "openstack-baremetal-operator-controller-manager-84b575879fwx7br" (UID: "3ab4495a-ff87-4ae7-b343-17ff4dcfeb5a") : secret "openstack-baremetal-operator-webhook-server-cert" not found Dec 10 12:31:54 crc kubenswrapper[4689]: I1210 12:31:54.042189 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-5b5fd79c9c-n69ks" Dec 10 12:31:54 crc kubenswrapper[4689]: I1210 12:31:54.068376 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/test-operator-controller-manager-5854674fcc-fghxw"] Dec 10 12:31:54 crc kubenswrapper[4689]: I1210 12:31:54.069380 4689 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/test-operator-controller-manager-5854674fcc-fghxw" Dec 10 12:31:54 crc kubenswrapper[4689]: I1210 12:31:54.072605 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"test-operator-controller-manager-dockercfg-m6w9c" Dec 10 12:31:54 crc kubenswrapper[4689]: I1210 12:31:54.073729 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-5854674fcc-fghxw"] Dec 10 12:31:54 crc kubenswrapper[4689]: I1210 12:31:54.086623 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dbkgt\" (UniqueName: \"kubernetes.io/projected/04286780-356c-4f76-9168-5a80c36d2aa3-kube-api-access-dbkgt\") pod \"octavia-operator-controller-manager-998648c74-pwvc2\" (UID: \"04286780-356c-4f76-9168-5a80c36d2aa3\") " pod="openstack-operators/octavia-operator-controller-manager-998648c74-pwvc2" Dec 10 12:31:54 crc kubenswrapper[4689]: I1210 12:31:54.086726 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qfckw\" (UniqueName: \"kubernetes.io/projected/dd93e8ba-afd7-4d03-917f-873352cfefc8-kube-api-access-qfckw\") pod \"nova-operator-controller-manager-697bc559fc-qlxtq\" (UID: \"dd93e8ba-afd7-4d03-917f-873352cfefc8\") " pod="openstack-operators/nova-operator-controller-manager-697bc559fc-qlxtq" Dec 10 12:31:54 crc kubenswrapper[4689]: I1210 12:31:54.086931 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-66h9k\" (UniqueName: \"kubernetes.io/projected/3ab4495a-ff87-4ae7-b343-17ff4dcfeb5a-kube-api-access-66h9k\") pod \"openstack-baremetal-operator-controller-manager-84b575879fwx7br\" (UID: \"3ab4495a-ff87-4ae7-b343-17ff4dcfeb5a\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-84b575879fwx7br" Dec 10 12:31:54 crc kubenswrapper[4689]: I1210 12:31:54.086958 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5pcjk\" (UniqueName: \"kubernetes.io/projected/8c68b5ee-e36f-428f-8b70-480581f7e120-kube-api-access-5pcjk\") pod \"ovn-operator-controller-manager-b6456fdb6-v7wvl\" (UID: \"8c68b5ee-e36f-428f-8b70-480581f7e120\") " pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-v7wvl" Dec 10 12:31:54 crc kubenswrapper[4689]: I1210 12:31:54.134106 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/watcher-operator-controller-manager-75944c9b7-rq8x6"] Dec 10 12:31:54 crc kubenswrapper[4689]: I1210 12:31:54.136191 4689 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-75944c9b7-rq8x6" Dec 10 12:31:54 crc kubenswrapper[4689]: I1210 12:31:54.138958 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9bng9\" (UniqueName: \"kubernetes.io/projected/20ec28ba-f929-4e94-833b-24a213da89a6-kube-api-access-9bng9\") pod \"test-operator-controller-manager-5854674fcc-fghxw\" (UID: \"20ec28ba-f929-4e94-833b-24a213da89a6\") " pod="openstack-operators/test-operator-controller-manager-5854674fcc-fghxw" Dec 10 12:31:54 crc kubenswrapper[4689]: I1210 12:31:54.139069 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c84vg\" (UniqueName: \"kubernetes.io/projected/e616a259-dbf9-469a-987e-b3a6f36044a4-kube-api-access-c84vg\") pod \"placement-operator-controller-manager-78f8948974-44v2s\" (UID: \"e616a259-dbf9-469a-987e-b3a6f36044a4\") " pod="openstack-operators/placement-operator-controller-manager-78f8948974-44v2s" Dec 10 12:31:54 crc kubenswrapper[4689]: I1210 12:31:54.139113 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-46gsx\" (UniqueName: \"kubernetes.io/projected/08c0b74f-cdff-4fc6-b2cf-4d61fdd4177d-kube-api-access-46gsx\") pod \"swift-operator-controller-manager-9d58d64bc-d995k\" (UID: \"08c0b74f-cdff-4fc6-b2cf-4d61fdd4177d\") " pod="openstack-operators/swift-operator-controller-manager-9d58d64bc-d995k" Dec 10 12:31:54 crc kubenswrapper[4689]: I1210 12:31:54.139155 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jsxgh\" (UniqueName: \"kubernetes.io/projected/9e2487e5-677b-4344-9ab2-d419e03876f2-kube-api-access-jsxgh\") pod \"telemetry-operator-controller-manager-58d5ff84df-8xrb5\" (UID: \"9e2487e5-677b-4344-9ab2-d419e03876f2\") " pod="openstack-operators/telemetry-operator-controller-manager-58d5ff84df-8xrb5" Dec 10 12:31:54 crc kubenswrapper[4689]: I1210 12:31:54.144473 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-75944c9b7-rq8x6"] Dec 10 12:31:54 crc kubenswrapper[4689]: I1210 12:31:54.145080 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"watcher-operator-controller-manager-dockercfg-gf5tc" Dec 10 12:31:54 crc kubenswrapper[4689]: I1210 12:31:54.174980 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-5697bb5779-tzbn5" Dec 10 12:31:54 crc kubenswrapper[4689]: I1210 12:31:54.182656 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-6fmnw" Dec 10 12:31:54 crc kubenswrapper[4689]: I1210 12:31:54.218673 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-manager-5997c5ddf6-h459p"] Dec 10 12:31:54 crc kubenswrapper[4689]: I1210 12:31:54.220278 4689 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-5997c5ddf6-h459p" Dec 10 12:31:54 crc kubenswrapper[4689]: I1210 12:31:54.222982 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"webhook-server-cert" Dec 10 12:31:54 crc kubenswrapper[4689]: I1210 12:31:54.223207 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"metrics-server-cert" Dec 10 12:31:54 crc kubenswrapper[4689]: I1210 12:31:54.223327 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-fzb2k" Dec 10 12:31:54 crc kubenswrapper[4689]: I1210 12:31:54.224098 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-46gsx\" (UniqueName: \"kubernetes.io/projected/08c0b74f-cdff-4fc6-b2cf-4d61fdd4177d-kube-api-access-46gsx\") pod \"swift-operator-controller-manager-9d58d64bc-d995k\" (UID: \"08c0b74f-cdff-4fc6-b2cf-4d61fdd4177d\") " pod="openstack-operators/swift-operator-controller-manager-9d58d64bc-d995k" Dec 10 12:31:54 crc kubenswrapper[4689]: I1210 12:31:54.230102 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c84vg\" (UniqueName: \"kubernetes.io/projected/e616a259-dbf9-469a-987e-b3a6f36044a4-kube-api-access-c84vg\") pod \"placement-operator-controller-manager-78f8948974-44v2s\" (UID: \"e616a259-dbf9-469a-987e-b3a6f36044a4\") " pod="openstack-operators/placement-operator-controller-manager-78f8948974-44v2s" Dec 10 12:31:54 crc kubenswrapper[4689]: I1210 12:31:54.230252 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jsxgh\" (UniqueName: \"kubernetes.io/projected/9e2487e5-677b-4344-9ab2-d419e03876f2-kube-api-access-jsxgh\") pod \"telemetry-operator-controller-manager-58d5ff84df-8xrb5\" (UID: \"9e2487e5-677b-4344-9ab2-d419e03876f2\") " pod="openstack-operators/telemetry-operator-controller-manager-58d5ff84df-8xrb5" Dec 10 12:31:54 crc kubenswrapper[4689]: I1210 12:31:54.243076 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-5997c5ddf6-h459p"] Dec 10 12:31:54 crc kubenswrapper[4689]: I1210 12:31:54.252135 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9bng9\" (UniqueName: \"kubernetes.io/projected/20ec28ba-f929-4e94-833b-24a213da89a6-kube-api-access-9bng9\") pod \"test-operator-controller-manager-5854674fcc-fghxw\" (UID: \"20ec28ba-f929-4e94-833b-24a213da89a6\") " pod="openstack-operators/test-operator-controller-manager-5854674fcc-fghxw" Dec 10 12:31:54 crc kubenswrapper[4689]: I1210 12:31:54.252200 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dsgm6\" (UniqueName: \"kubernetes.io/projected/af7cc69a-a411-43ec-b32e-41e6a343388b-kube-api-access-dsgm6\") pod \"openstack-operator-controller-manager-5997c5ddf6-h459p\" (UID: \"af7cc69a-a411-43ec-b32e-41e6a343388b\") " pod="openstack-operators/openstack-operator-controller-manager-5997c5ddf6-h459p" Dec 10 12:31:54 crc kubenswrapper[4689]: I1210 12:31:54.252246 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bzs8g\" (UniqueName: \"kubernetes.io/projected/88c6fb38-fa6d-497e-87c7-32833f1b5a04-kube-api-access-bzs8g\") pod \"watcher-operator-controller-manager-75944c9b7-rq8x6\" (UID: 
\"88c6fb38-fa6d-497e-87c7-32833f1b5a04\") " pod="openstack-operators/watcher-operator-controller-manager-75944c9b7-rq8x6" Dec 10 12:31:54 crc kubenswrapper[4689]: I1210 12:31:54.252294 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/af7cc69a-a411-43ec-b32e-41e6a343388b-metrics-certs\") pod \"openstack-operator-controller-manager-5997c5ddf6-h459p\" (UID: \"af7cc69a-a411-43ec-b32e-41e6a343388b\") " pod="openstack-operators/openstack-operator-controller-manager-5997c5ddf6-h459p" Dec 10 12:31:54 crc kubenswrapper[4689]: I1210 12:31:54.252341 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/af7cc69a-a411-43ec-b32e-41e6a343388b-webhook-certs\") pod \"openstack-operator-controller-manager-5997c5ddf6-h459p\" (UID: \"af7cc69a-a411-43ec-b32e-41e6a343388b\") " pod="openstack-operators/openstack-operator-controller-manager-5997c5ddf6-h459p" Dec 10 12:31:54 crc kubenswrapper[4689]: I1210 12:31:54.257334 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-czdwk"] Dec 10 12:31:54 crc kubenswrapper[4689]: I1210 12:31:54.257726 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-66xh9" Dec 10 12:31:54 crc kubenswrapper[4689]: I1210 12:31:54.258135 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-czdwk" Dec 10 12:31:54 crc kubenswrapper[4689]: I1210 12:31:54.268365 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"rabbitmq-cluster-operator-controller-manager-dockercfg-blnvd" Dec 10 12:31:54 crc kubenswrapper[4689]: I1210 12:31:54.271924 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-czdwk"] Dec 10 12:31:54 crc kubenswrapper[4689]: I1210 12:31:54.277287 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-697bc559fc-qlxtq" Dec 10 12:31:54 crc kubenswrapper[4689]: I1210 12:31:54.284667 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9bng9\" (UniqueName: \"kubernetes.io/projected/20ec28ba-f929-4e94-833b-24a213da89a6-kube-api-access-9bng9\") pod \"test-operator-controller-manager-5854674fcc-fghxw\" (UID: \"20ec28ba-f929-4e94-833b-24a213da89a6\") " pod="openstack-operators/test-operator-controller-manager-5854674fcc-fghxw" Dec 10 12:31:54 crc kubenswrapper[4689]: I1210 12:31:54.287088 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-998648c74-pwvc2" Dec 10 12:31:54 crc kubenswrapper[4689]: I1210 12:31:54.301213 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-58d5ff84df-8xrb5" Dec 10 12:31:54 crc kubenswrapper[4689]: I1210 12:31:54.318039 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-5854674fcc-fghxw" Dec 10 12:31:54 crc kubenswrapper[4689]: I1210 12:31:54.323313 4689 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-v7wvl" Dec 10 12:31:54 crc kubenswrapper[4689]: I1210 12:31:54.354055 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dsgm6\" (UniqueName: \"kubernetes.io/projected/af7cc69a-a411-43ec-b32e-41e6a343388b-kube-api-access-dsgm6\") pod \"openstack-operator-controller-manager-5997c5ddf6-h459p\" (UID: \"af7cc69a-a411-43ec-b32e-41e6a343388b\") " pod="openstack-operators/openstack-operator-controller-manager-5997c5ddf6-h459p" Dec 10 12:31:54 crc kubenswrapper[4689]: I1210 12:31:54.354114 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bzs8g\" (UniqueName: \"kubernetes.io/projected/88c6fb38-fa6d-497e-87c7-32833f1b5a04-kube-api-access-bzs8g\") pod \"watcher-operator-controller-manager-75944c9b7-rq8x6\" (UID: \"88c6fb38-fa6d-497e-87c7-32833f1b5a04\") " pod="openstack-operators/watcher-operator-controller-manager-75944c9b7-rq8x6" Dec 10 12:31:54 crc kubenswrapper[4689]: I1210 12:31:54.354170 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/af7cc69a-a411-43ec-b32e-41e6a343388b-metrics-certs\") pod \"openstack-operator-controller-manager-5997c5ddf6-h459p\" (UID: \"af7cc69a-a411-43ec-b32e-41e6a343388b\") " pod="openstack-operators/openstack-operator-controller-manager-5997c5ddf6-h459p" Dec 10 12:31:54 crc kubenswrapper[4689]: I1210 12:31:54.354195 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/13ec50ac-3e46-4615-88f9-070c7a647158-cert\") pod \"infra-operator-controller-manager-78d48bff9d-lkvwb\" (UID: \"13ec50ac-3e46-4615-88f9-070c7a647158\") " pod="openstack-operators/infra-operator-controller-manager-78d48bff9d-lkvwb" Dec 10 12:31:54 crc kubenswrapper[4689]: I1210 12:31:54.354232 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/af7cc69a-a411-43ec-b32e-41e6a343388b-webhook-certs\") pod \"openstack-operator-controller-manager-5997c5ddf6-h459p\" (UID: \"af7cc69a-a411-43ec-b32e-41e6a343388b\") " pod="openstack-operators/openstack-operator-controller-manager-5997c5ddf6-h459p" Dec 10 12:31:54 crc kubenswrapper[4689]: I1210 12:31:54.354269 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6q78k\" (UniqueName: \"kubernetes.io/projected/38a45de6-7988-4cb1-86b8-0164c52f2dc5-kube-api-access-6q78k\") pod \"rabbitmq-cluster-operator-manager-668c99d594-czdwk\" (UID: \"38a45de6-7988-4cb1-86b8-0164c52f2dc5\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-czdwk" Dec 10 12:31:54 crc kubenswrapper[4689]: E1210 12:31:54.355165 4689 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Dec 10 12:31:54 crc kubenswrapper[4689]: E1210 12:31:54.355201 4689 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/af7cc69a-a411-43ec-b32e-41e6a343388b-webhook-certs podName:af7cc69a-a411-43ec-b32e-41e6a343388b nodeName:}" failed. No retries permitted until 2025-12-10 12:31:54.855189431 +0000 UTC m=+982.643270559 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/af7cc69a-a411-43ec-b32e-41e6a343388b-webhook-certs") pod "openstack-operator-controller-manager-5997c5ddf6-h459p" (UID: "af7cc69a-a411-43ec-b32e-41e6a343388b") : secret "webhook-server-cert" not found Dec 10 12:31:54 crc kubenswrapper[4689]: E1210 12:31:54.355227 4689 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Dec 10 12:31:54 crc kubenswrapper[4689]: E1210 12:31:54.355307 4689 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/13ec50ac-3e46-4615-88f9-070c7a647158-cert podName:13ec50ac-3e46-4615-88f9-070c7a647158 nodeName:}" failed. No retries permitted until 2025-12-10 12:31:55.355271643 +0000 UTC m=+983.143352781 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/13ec50ac-3e46-4615-88f9-070c7a647158-cert") pod "infra-operator-controller-manager-78d48bff9d-lkvwb" (UID: "13ec50ac-3e46-4615-88f9-070c7a647158") : secret "infra-operator-webhook-server-cert" not found Dec 10 12:31:54 crc kubenswrapper[4689]: E1210 12:31:54.355552 4689 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Dec 10 12:31:54 crc kubenswrapper[4689]: E1210 12:31:54.355630 4689 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/af7cc69a-a411-43ec-b32e-41e6a343388b-metrics-certs podName:af7cc69a-a411-43ec-b32e-41e6a343388b nodeName:}" failed. No retries permitted until 2025-12-10 12:31:54.855608752 +0000 UTC m=+982.643689890 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/af7cc69a-a411-43ec-b32e-41e6a343388b-metrics-certs") pod "openstack-operator-controller-manager-5997c5ddf6-h459p" (UID: "af7cc69a-a411-43ec-b32e-41e6a343388b") : secret "metrics-server-cert" not found Dec 10 12:31:54 crc kubenswrapper[4689]: I1210 12:31:54.374506 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dsgm6\" (UniqueName: \"kubernetes.io/projected/af7cc69a-a411-43ec-b32e-41e6a343388b-kube-api-access-dsgm6\") pod \"openstack-operator-controller-manager-5997c5ddf6-h459p\" (UID: \"af7cc69a-a411-43ec-b32e-41e6a343388b\") " pod="openstack-operators/openstack-operator-controller-manager-5997c5ddf6-h459p" Dec 10 12:31:54 crc kubenswrapper[4689]: I1210 12:31:54.388660 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bzs8g\" (UniqueName: \"kubernetes.io/projected/88c6fb38-fa6d-497e-87c7-32833f1b5a04-kube-api-access-bzs8g\") pod \"watcher-operator-controller-manager-75944c9b7-rq8x6\" (UID: \"88c6fb38-fa6d-497e-87c7-32833f1b5a04\") " pod="openstack-operators/watcher-operator-controller-manager-75944c9b7-rq8x6" Dec 10 12:31:54 crc kubenswrapper[4689]: I1210 12:31:54.415446 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-78f8948974-44v2s" Dec 10 12:31:54 crc kubenswrapper[4689]: I1210 12:31:54.430717 4689 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-9d58d64bc-d995k" Dec 10 12:31:54 crc kubenswrapper[4689]: I1210 12:31:54.457457 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6q78k\" (UniqueName: \"kubernetes.io/projected/38a45de6-7988-4cb1-86b8-0164c52f2dc5-kube-api-access-6q78k\") pod \"rabbitmq-cluster-operator-manager-668c99d594-czdwk\" (UID: \"38a45de6-7988-4cb1-86b8-0164c52f2dc5\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-czdwk" Dec 10 12:31:54 crc kubenswrapper[4689]: I1210 12:31:54.491712 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6q78k\" (UniqueName: \"kubernetes.io/projected/38a45de6-7988-4cb1-86b8-0164c52f2dc5-kube-api-access-6q78k\") pod \"rabbitmq-cluster-operator-manager-668c99d594-czdwk\" (UID: \"38a45de6-7988-4cb1-86b8-0164c52f2dc5\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-czdwk" Dec 10 12:31:54 crc kubenswrapper[4689]: I1210 12:31:54.492849 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-6c677c69b-4hdrw"] Dec 10 12:31:54 crc kubenswrapper[4689]: I1210 12:31:54.558767 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/3ab4495a-ff87-4ae7-b343-17ff4dcfeb5a-cert\") pod \"openstack-baremetal-operator-controller-manager-84b575879fwx7br\" (UID: \"3ab4495a-ff87-4ae7-b343-17ff4dcfeb5a\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-84b575879fwx7br" Dec 10 12:31:54 crc kubenswrapper[4689]: E1210 12:31:54.559203 4689 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Dec 10 12:31:54 crc kubenswrapper[4689]: E1210 12:31:54.559348 4689 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3ab4495a-ff87-4ae7-b343-17ff4dcfeb5a-cert podName:3ab4495a-ff87-4ae7-b343-17ff4dcfeb5a nodeName:}" failed. No retries permitted until 2025-12-10 12:31:55.559325394 +0000 UTC m=+983.347406612 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/3ab4495a-ff87-4ae7-b343-17ff4dcfeb5a-cert") pod "openstack-baremetal-operator-controller-manager-84b575879fwx7br" (UID: "3ab4495a-ff87-4ae7-b343-17ff4dcfeb5a") : secret "openstack-baremetal-operator-webhook-server-cert" not found Dec 10 12:31:54 crc kubenswrapper[4689]: I1210 12:31:54.631705 4689 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-75944c9b7-rq8x6" Dec 10 12:31:54 crc kubenswrapper[4689]: W1210 12:31:54.639389 4689 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode6db2b03_cb28_4161_bae6_6eecce28c871.slice/crio-24dcbb0508265008f9728ca07d643a68ccf35683bf8d12f96ba39a353e0c5dfb WatchSource:0}: Error finding container 24dcbb0508265008f9728ca07d643a68ccf35683bf8d12f96ba39a353e0c5dfb: Status 404 returned error can't find the container with id 24dcbb0508265008f9728ca07d643a68ccf35683bf8d12f96ba39a353e0c5dfb Dec 10 12:31:54 crc kubenswrapper[4689]: I1210 12:31:54.641268 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-697fb699cf-dtclt"] Dec 10 12:31:54 crc kubenswrapper[4689]: W1210 12:31:54.646277 4689 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0b413153_162d_46ad_9b9a_b44869127ee7.slice/crio-8140b8083339fb78dd91e96235340c0df541f654caf5c1f1ba53c0e5ce0a86a3 WatchSource:0}: Error finding container 8140b8083339fb78dd91e96235340c0df541f654caf5c1f1ba53c0e5ce0a86a3: Status 404 returned error can't find the container with id 8140b8083339fb78dd91e96235340c0df541f654caf5c1f1ba53c0e5ce0a86a3 Dec 10 12:31:54 crc kubenswrapper[4689]: I1210 12:31:54.695265 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-czdwk" Dec 10 12:31:54 crc kubenswrapper[4689]: I1210 12:31:54.788351 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-68c6d99b8f-gk2zb"] Dec 10 12:31:54 crc kubenswrapper[4689]: I1210 12:31:54.793900 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7d9dfd778-46w9f"] Dec 10 12:31:54 crc kubenswrapper[4689]: W1210 12:31:54.837041 4689 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc70e0866_b017_4945_9e4b_c69eec327948.slice/crio-5b14a15f78f239c5aa8f0905adc6088ec3366884a3c1f13df74481dea7f7609d WatchSource:0}: Error finding container 5b14a15f78f239c5aa8f0905adc6088ec3366884a3c1f13df74481dea7f7609d: Status 404 returned error can't find the container with id 5b14a15f78f239c5aa8f0905adc6088ec3366884a3c1f13df74481dea7f7609d Dec 10 12:31:54 crc kubenswrapper[4689]: W1210 12:31:54.837452 4689 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podec9c74bb_c8dc_409b_817c_74963a395df8.slice/crio-01c0c313751910038559007f351c217a6add167719b9c68b942740b84b98de2d WatchSource:0}: Error finding container 01c0c313751910038559007f351c217a6add167719b9c68b942740b84b98de2d: Status 404 returned error can't find the container with id 01c0c313751910038559007f351c217a6add167719b9c68b942740b84b98de2d Dec 10 12:31:54 crc kubenswrapper[4689]: I1210 12:31:54.872998 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/af7cc69a-a411-43ec-b32e-41e6a343388b-metrics-certs\") pod \"openstack-operator-controller-manager-5997c5ddf6-h459p\" (UID: \"af7cc69a-a411-43ec-b32e-41e6a343388b\") " pod="openstack-operators/openstack-operator-controller-manager-5997c5ddf6-h459p" Dec 10 12:31:54 crc 
kubenswrapper[4689]: I1210 12:31:54.873059 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/af7cc69a-a411-43ec-b32e-41e6a343388b-webhook-certs\") pod \"openstack-operator-controller-manager-5997c5ddf6-h459p\" (UID: \"af7cc69a-a411-43ec-b32e-41e6a343388b\") " pod="openstack-operators/openstack-operator-controller-manager-5997c5ddf6-h459p" Dec 10 12:31:54 crc kubenswrapper[4689]: E1210 12:31:54.873210 4689 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Dec 10 12:31:54 crc kubenswrapper[4689]: E1210 12:31:54.873260 4689 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/af7cc69a-a411-43ec-b32e-41e6a343388b-webhook-certs podName:af7cc69a-a411-43ec-b32e-41e6a343388b nodeName:}" failed. No retries permitted until 2025-12-10 12:31:55.873245039 +0000 UTC m=+983.661326177 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/af7cc69a-a411-43ec-b32e-41e6a343388b-webhook-certs") pod "openstack-operator-controller-manager-5997c5ddf6-h459p" (UID: "af7cc69a-a411-43ec-b32e-41e6a343388b") : secret "webhook-server-cert" not found Dec 10 12:31:54 crc kubenswrapper[4689]: E1210 12:31:54.873360 4689 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Dec 10 12:31:54 crc kubenswrapper[4689]: E1210 12:31:54.873452 4689 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/af7cc69a-a411-43ec-b32e-41e6a343388b-metrics-certs podName:af7cc69a-a411-43ec-b32e-41e6a343388b nodeName:}" failed. No retries permitted until 2025-12-10 12:31:55.873430943 +0000 UTC m=+983.661512081 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/af7cc69a-a411-43ec-b32e-41e6a343388b-metrics-certs") pod "openstack-operator-controller-manager-5997c5ddf6-h459p" (UID: "af7cc69a-a411-43ec-b32e-41e6a343388b") : secret "metrics-server-cert" not found Dec 10 12:31:55 crc kubenswrapper[4689]: I1210 12:31:54.994855 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-5b5fd79c9c-n69ks"] Dec 10 12:31:55 crc kubenswrapper[4689]: W1210 12:31:54.998639 4689 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4dc52122_5456_453a_9d5f_d2fce910bb61.slice/crio-e9cd048e1a0d4ba32db5a1a06f99673c5a68ff8306eeaca5e98fab8e6f750589 WatchSource:0}: Error finding container e9cd048e1a0d4ba32db5a1a06f99673c5a68ff8306eeaca5e98fab8e6f750589: Status 404 returned error can't find the container with id e9cd048e1a0d4ba32db5a1a06f99673c5a68ff8306eeaca5e98fab8e6f750589 Dec 10 12:31:55 crc kubenswrapper[4689]: I1210 12:31:55.031426 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-66xh9"] Dec 10 12:31:55 crc kubenswrapper[4689]: I1210 12:31:55.041127 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-69f4484999-lb8n4"] Dec 10 12:31:55 crc kubenswrapper[4689]: W1210 12:31:55.041352 4689 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7ca7fc40_0bb8_402f_9e73_d1d267340b28.slice/crio-947e339de229ff81eff49c9d26ec04329e99257cae5b766bffd3efd69fbb5721 WatchSource:0}: Error finding container 947e339de229ff81eff49c9d26ec04329e99257cae5b766bffd3efd69fbb5721: Status 404 returned error can't find the container with id 947e339de229ff81eff49c9d26ec04329e99257cae5b766bffd3efd69fbb5721 Dec 10 12:31:55 crc kubenswrapper[4689]: W1210 12:31:55.047222 4689 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode1c471e3_8ecc_4db9_95c0_a4a13e287aba.slice/crio-ce2500a83bcc1d836f1adb69c68b6585d5a43c8c7a4fb9ea2c20e86c9fd04632 WatchSource:0}: Error finding container ce2500a83bcc1d836f1adb69c68b6585d5a43c8c7a4fb9ea2c20e86c9fd04632: Status 404 returned error can't find the container with id ce2500a83bcc1d836f1adb69c68b6585d5a43c8c7a4fb9ea2c20e86c9fd04632 Dec 10 12:31:55 crc kubenswrapper[4689]: I1210 12:31:55.052212 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-5697bb5779-tzbn5"] Dec 10 12:31:55 crc kubenswrapper[4689]: I1210 12:31:55.058583 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-79c8c4686c-rwg47"] Dec 10 12:31:55 crc kubenswrapper[4689]: I1210 12:31:55.063226 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-59fd99cc6f-gdgcj"] Dec 10 12:31:55 crc kubenswrapper[4689]: W1210 12:31:55.068226 4689 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8209377b_970c_4faf_ac5b_1e429d2bdccd.slice/crio-ec0a55c687e6c88998d50c91051c7b4bbf1899b1df4243f813fabbc8894513e4 WatchSource:0}: Error finding container ec0a55c687e6c88998d50c91051c7b4bbf1899b1df4243f813fabbc8894513e4: Status 404 returned error can't find the 
container with id ec0a55c687e6c88998d50c91051c7b4bbf1899b1df4243f813fabbc8894513e4 Dec 10 12:31:55 crc kubenswrapper[4689]: I1210 12:31:55.254435 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-697bc559fc-qlxtq"] Dec 10 12:31:55 crc kubenswrapper[4689]: I1210 12:31:55.264921 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-58d5ff84df-8xrb5"] Dec 10 12:31:55 crc kubenswrapper[4689]: I1210 12:31:55.272159 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-5854674fcc-fghxw"] Dec 10 12:31:55 crc kubenswrapper[4689]: I1210 12:31:55.277224 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-78f8948974-44v2s"] Dec 10 12:31:55 crc kubenswrapper[4689]: W1210 12:31:55.279935 4689 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod20ec28ba_f929_4e94_833b_24a213da89a6.slice/crio-06959afcf5bb0ff2eb91dc12b78857b9bc284a1dcf711b82a2891a67e4f10af2 WatchSource:0}: Error finding container 06959afcf5bb0ff2eb91dc12b78857b9bc284a1dcf711b82a2891a67e4f10af2: Status 404 returned error can't find the container with id 06959afcf5bb0ff2eb91dc12b78857b9bc284a1dcf711b82a2891a67e4f10af2 Dec 10 12:31:55 crc kubenswrapper[4689]: I1210 12:31:55.280932 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-998648c74-pwvc2"] Dec 10 12:31:55 crc kubenswrapper[4689]: I1210 12:31:55.285772 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-5f64f6f8bb-6fmnw"] Dec 10 12:31:55 crc kubenswrapper[4689]: I1210 12:31:55.289964 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-75944c9b7-rq8x6"] Dec 10 12:31:55 crc kubenswrapper[4689]: E1210 12:31:55.291938 4689 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/octavia-operator@sha256:d9a3694865a7d54ee96397add18c3898886e98d079aa20876a0f4de1fa7a7168,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-dbkgt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod octavia-operator-controller-manager-998648c74-pwvc2_openstack-operators(04286780-356c-4f76-9168-5a80c36d2aa3): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Dec 10 12:31:55 crc kubenswrapper[4689]: E1210 12:31:55.292157 4689 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/nova-operator@sha256:779f0cee6024d0fb8f259b036fe790e62aa5a3b0431ea9bf15a6e7d02e2e5670,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-qfckw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod nova-operator-controller-manager-697bc559fc-qlxtq_openstack-operators(dd93e8ba-afd7-4d03-917f-873352cfefc8): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Dec 10 12:31:55 crc kubenswrapper[4689]: E1210 12:31:55.292306 4689 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/test-operator@sha256:101b3e007d8c9f2e183262d7712f986ad51256448099069bc14f1ea5f997ab94,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-9bng9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod test-operator-controller-manager-5854674fcc-fghxw_openstack-operators(20ec28ba-f929-4e94-833b-24a213da89a6): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Dec 10 12:31:55 crc 
kubenswrapper[4689]: E1210 12:31:55.292437 4689 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/watcher-operator@sha256:961417d59f527d925ac48ff6a11de747d0493315e496e34dc83d76a1a1fff58a,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-bzs8g,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod watcher-operator-controller-manager-75944c9b7-rq8x6_openstack-operators(88c6fb38-fa6d-497e-87c7-32833f1b5a04): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Dec 10 12:31:55 crc kubenswrapper[4689]: E1210 12:31:55.295412 4689 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-dbkgt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod octavia-operator-controller-manager-998648c74-pwvc2_openstack-operators(04286780-356c-4f76-9168-5a80c36d2aa3): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Dec 10 12:31:55 crc kubenswrapper[4689]: E1210 12:31:55.295530 4689 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-9bng9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod test-operator-controller-manager-5854674fcc-fghxw_openstack-operators(20ec28ba-f929-4e94-833b-24a213da89a6): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Dec 10 12:31:55 crc kubenswrapper[4689]: E1210 12:31:55.295770 4689 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-qfckw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod nova-operator-controller-manager-697bc559fc-qlxtq_openstack-operators(dd93e8ba-afd7-4d03-917f-873352cfefc8): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Dec 10 12:31:55 crc kubenswrapper[4689]: E1210 12:31:55.296141 4689 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-bzs8g,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod watcher-operator-controller-manager-75944c9b7-rq8x6_openstack-operators(88c6fb38-fa6d-497e-87c7-32833f1b5a04): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Dec 10 12:31:55 crc kubenswrapper[4689]: E1210 12:31:55.296388 4689 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/placement-operator@sha256:d29650b006da97eb9178fcc58f2eb9fead8c2b414fac18f86a3c3a1507488c4f,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m 
DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-c84vg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod placement-operator-controller-manager-78f8948974-44v2s_openstack-operators(e616a259-dbf9-469a-987e-b3a6f36044a4): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Dec 10 12:31:55 crc kubenswrapper[4689]: E1210 12:31:55.297147 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"pull QPS exceeded\"]" pod="openstack-operators/test-operator-controller-manager-5854674fcc-fghxw" podUID="20ec28ba-f929-4e94-833b-24a213da89a6" Dec 10 12:31:55 crc kubenswrapper[4689]: E1210 12:31:55.297175 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"pull QPS exceeded\"]" pod="openstack-operators/octavia-operator-controller-manager-998648c74-pwvc2" podUID="04286780-356c-4f76-9168-5a80c36d2aa3" Dec 10 12:31:55 crc kubenswrapper[4689]: E1210 12:31:55.297205 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"pull QPS exceeded\"]" pod="openstack-operators/nova-operator-controller-manager-697bc559fc-qlxtq" podUID="dd93e8ba-afd7-4d03-917f-873352cfefc8" Dec 10 12:31:55 crc kubenswrapper[4689]: E1210 12:31:55.297224 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"pull QPS exceeded\"]" pod="openstack-operators/watcher-operator-controller-manager-75944c9b7-rq8x6" podUID="88c6fb38-fa6d-497e-87c7-32833f1b5a04" Dec 10 12:31:55 crc kubenswrapper[4689]: E1210 12:31:55.299485 
4689 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-c84vg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod placement-operator-controller-manager-78f8948974-44v2s_openstack-operators(e616a259-dbf9-469a-987e-b3a6f36044a4): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Dec 10 12:31:55 crc kubenswrapper[4689]: E1210 12:31:55.300727 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"pull QPS exceeded\"]" pod="openstack-operators/placement-operator-controller-manager-78f8948974-44v2s" podUID="e616a259-dbf9-469a-987e-b3a6f36044a4" Dec 10 12:31:55 crc kubenswrapper[4689]: I1210 12:31:55.378857 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/13ec50ac-3e46-4615-88f9-070c7a647158-cert\") pod \"infra-operator-controller-manager-78d48bff9d-lkvwb\" (UID: \"13ec50ac-3e46-4615-88f9-070c7a647158\") " pod="openstack-operators/infra-operator-controller-manager-78d48bff9d-lkvwb" Dec 10 12:31:55 crc kubenswrapper[4689]: E1210 12:31:55.379073 4689 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Dec 10 12:31:55 crc kubenswrapper[4689]: E1210 12:31:55.379309 4689 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/13ec50ac-3e46-4615-88f9-070c7a647158-cert podName:13ec50ac-3e46-4615-88f9-070c7a647158 nodeName:}" failed. No retries permitted until 2025-12-10 12:31:57.379291808 +0000 UTC m=+985.167372946 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/13ec50ac-3e46-4615-88f9-070c7a647158-cert") pod "infra-operator-controller-manager-78d48bff9d-lkvwb" (UID: "13ec50ac-3e46-4615-88f9-070c7a647158") : secret "infra-operator-webhook-server-cert" not found Dec 10 12:31:55 crc kubenswrapper[4689]: I1210 12:31:55.463030 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-czdwk"] Dec 10 12:31:55 crc kubenswrapper[4689]: W1210 12:31:55.470930 4689 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod38a45de6_7988_4cb1_86b8_0164c52f2dc5.slice/crio-6be664ece66ff6626b049d5483642f5eb77ec1e0e2dcb16bfc68ca4b24e2f7e3 WatchSource:0}: Error finding container 6be664ece66ff6626b049d5483642f5eb77ec1e0e2dcb16bfc68ca4b24e2f7e3: Status 404 returned error can't find the container with id 6be664ece66ff6626b049d5483642f5eb77ec1e0e2dcb16bfc68ca4b24e2f7e3 Dec 10 12:31:55 crc kubenswrapper[4689]: I1210 12:31:55.473368 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-9d58d64bc-d995k"] Dec 10 12:31:55 crc kubenswrapper[4689]: W1210 12:31:55.474915 4689 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8c68b5ee_e36f_428f_8b70_480581f7e120.slice/crio-349f068c1993f959303b713ac0e709600cc2163f86fa68f44e75c3c2bcbcc079 WatchSource:0}: Error finding container 349f068c1993f959303b713ac0e709600cc2163f86fa68f44e75c3c2bcbcc079: Status 404 returned error can't find the container with id 349f068c1993f959303b713ac0e709600cc2163f86fa68f44e75c3c2bcbcc079 Dec 10 12:31:55 crc kubenswrapper[4689]: W1210 12:31:55.478506 4689 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod08c0b74f_cdff_4fc6_b2cf_4d61fdd4177d.slice/crio-05807f7002e8a6c7256374ab2318543b1d352bcc6310a82a4c3cf572fa704725 WatchSource:0}: Error finding container 05807f7002e8a6c7256374ab2318543b1d352bcc6310a82a4c3cf572fa704725: Status 404 returned error can't find the container with id 05807f7002e8a6c7256374ab2318543b1d352bcc6310a82a4c3cf572fa704725 Dec 10 12:31:55 crc kubenswrapper[4689]: I1210 12:31:55.478763 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-b6456fdb6-v7wvl"] Dec 10 12:31:55 crc kubenswrapper[4689]: E1210 12:31:55.479251 4689 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/ovn-operator@sha256:635a4aef9d6f0b799e8ec91333dbb312160c001d05b3c63f614c124e0b67cb59,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-5pcjk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovn-operator-controller-manager-b6456fdb6-v7wvl_openstack-operators(8c68b5ee-e36f-428f-8b70-480581f7e120): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Dec 10 12:31:55 crc kubenswrapper[4689]: E1210 12:31:55.481414 4689 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-5pcjk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovn-operator-controller-manager-b6456fdb6-v7wvl_openstack-operators(8c68b5ee-e36f-428f-8b70-480581f7e120): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Dec 10 12:31:55 crc kubenswrapper[4689]: E1210 12:31:55.482568 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS 
exceeded\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"pull QPS exceeded\"]" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-v7wvl" podUID="8c68b5ee-e36f-428f-8b70-480581f7e120" Dec 10 12:31:55 crc kubenswrapper[4689]: E1210 12:31:55.482636 4689 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/swift-operator@sha256:3aa109bb973253ae9dcf339b9b65abbd1176cdb4be672c93e538a5f113816991,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-46gsx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod swift-operator-controller-manager-9d58d64bc-d995k_openstack-operators(08c0b74f-cdff-4fc6-b2cf-4d61fdd4177d): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Dec 10 12:31:55 crc kubenswrapper[4689]: E1210 12:31:55.484942 4689 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-46gsx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod swift-operator-controller-manager-9d58d64bc-d995k_openstack-operators(08c0b74f-cdff-4fc6-b2cf-4d61fdd4177d): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Dec 10 12:31:55 crc kubenswrapper[4689]: E1210 12:31:55.486614 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"pull QPS exceeded\"]" pod="openstack-operators/swift-operator-controller-manager-9d58d64bc-d995k" podUID="08c0b74f-cdff-4fc6-b2cf-4d61fdd4177d" Dec 10 12:31:55 crc kubenswrapper[4689]: I1210 12:31:55.537018 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-gk2zb" event={"ID":"c70e0866-b017-4945-9e4b-c69eec327948","Type":"ContainerStarted","Data":"5b14a15f78f239c5aa8f0905adc6088ec3366884a3c1f13df74481dea7f7609d"} Dec 10 12:31:55 crc kubenswrapper[4689]: I1210 12:31:55.539036 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-46w9f" event={"ID":"ec9c74bb-c8dc-409b-817c-74963a395df8","Type":"ContainerStarted","Data":"01c0c313751910038559007f351c217a6add167719b9c68b942740b84b98de2d"} Dec 10 12:31:55 crc kubenswrapper[4689]: I1210 12:31:55.540661 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-5854674fcc-fghxw" event={"ID":"20ec28ba-f929-4e94-833b-24a213da89a6","Type":"ContainerStarted","Data":"06959afcf5bb0ff2eb91dc12b78857b9bc284a1dcf711b82a2891a67e4f10af2"} Dec 10 12:31:55 crc kubenswrapper[4689]: E1210 12:31:55.543156 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:101b3e007d8c9f2e183262d7712f986ad51256448099069bc14f1ea5f997ab94\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/test-operator-controller-manager-5854674fcc-fghxw" podUID="20ec28ba-f929-4e94-833b-24a213da89a6" Dec 10 12:31:55 crc kubenswrapper[4689]: I1210 12:31:55.545258 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-6fmnw" event={"ID":"e9f4ae72-b49e-4144-a49b-72c2bbd1b77c","Type":"ContainerStarted","Data":"6952047ee06165ee294ca6f32de16ca49520929e65019f64e95e9d125ebaabd4"} Dec 10 12:31:55 crc kubenswrapper[4689]: 
I1210 12:31:55.550811 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-58d5ff84df-8xrb5" event={"ID":"9e2487e5-677b-4344-9ab2-d419e03876f2","Type":"ContainerStarted","Data":"a72c8549ac972143ebbb624e1abedb39c51d3fa49ac05582592c606f0b257592"} Dec 10 12:31:55 crc kubenswrapper[4689]: I1210 12:31:55.552607 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-998648c74-pwvc2" event={"ID":"04286780-356c-4f76-9168-5a80c36d2aa3","Type":"ContainerStarted","Data":"1d2596818096bd2c8863302ccb0a31d17b2573fd26cc0bf1021f21f4c95181ad"} Dec 10 12:31:55 crc kubenswrapper[4689]: I1210 12:31:55.554595 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-59fd99cc6f-gdgcj" event={"ID":"c346bbde-239c-4f76-91da-c4116ad0a487","Type":"ContainerStarted","Data":"229d0706300d860bbd3d40f5e2efb5a6663019f9363043ebb8828f2a386aa845"} Dec 10 12:31:55 crc kubenswrapper[4689]: E1210 12:31:55.555049 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/octavia-operator@sha256:d9a3694865a7d54ee96397add18c3898886e98d079aa20876a0f4de1fa7a7168\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/octavia-operator-controller-manager-998648c74-pwvc2" podUID="04286780-356c-4f76-9168-5a80c36d2aa3" Dec 10 12:31:55 crc kubenswrapper[4689]: I1210 12:31:55.558238 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-75944c9b7-rq8x6" event={"ID":"88c6fb38-fa6d-497e-87c7-32833f1b5a04","Type":"ContainerStarted","Data":"4acf7bd71bb8a1ee1d200c7a0aeb138868b682dbc99ad66e6881a197bd96f1e2"} Dec 10 12:31:55 crc kubenswrapper[4689]: E1210 12:31:55.560328 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/watcher-operator@sha256:961417d59f527d925ac48ff6a11de747d0493315e496e34dc83d76a1a1fff58a\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/watcher-operator-controller-manager-75944c9b7-rq8x6" podUID="88c6fb38-fa6d-497e-87c7-32833f1b5a04" Dec 10 12:31:55 crc kubenswrapper[4689]: I1210 12:31:55.567279 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-66xh9" event={"ID":"7ca7fc40-0bb8-402f-9e73-d1d267340b28","Type":"ContainerStarted","Data":"947e339de229ff81eff49c9d26ec04329e99257cae5b766bffd3efd69fbb5721"} Dec 10 12:31:55 crc kubenswrapper[4689]: I1210 12:31:55.573053 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-69f4484999-lb8n4" event={"ID":"e1c471e3-8ecc-4db9-95c0-a4a13e287aba","Type":"ContainerStarted","Data":"ce2500a83bcc1d836f1adb69c68b6585d5a43c8c7a4fb9ea2c20e86c9fd04632"} Dec 10 12:31:55 crc kubenswrapper[4689]: I1210 12:31:55.578751 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-5b5fd79c9c-n69ks" 
event={"ID":"4dc52122-5456-453a-9d5f-d2fce910bb61","Type":"ContainerStarted","Data":"e9cd048e1a0d4ba32db5a1a06f99673c5a68ff8306eeaca5e98fab8e6f750589"} Dec 10 12:31:55 crc kubenswrapper[4689]: I1210 12:31:55.585578 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/3ab4495a-ff87-4ae7-b343-17ff4dcfeb5a-cert\") pod \"openstack-baremetal-operator-controller-manager-84b575879fwx7br\" (UID: \"3ab4495a-ff87-4ae7-b343-17ff4dcfeb5a\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-84b575879fwx7br" Dec 10 12:31:55 crc kubenswrapper[4689]: E1210 12:31:55.585711 4689 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Dec 10 12:31:55 crc kubenswrapper[4689]: E1210 12:31:55.585754 4689 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3ab4495a-ff87-4ae7-b343-17ff4dcfeb5a-cert podName:3ab4495a-ff87-4ae7-b343-17ff4dcfeb5a nodeName:}" failed. No retries permitted until 2025-12-10 12:31:57.585739587 +0000 UTC m=+985.373820715 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/3ab4495a-ff87-4ae7-b343-17ff4dcfeb5a-cert") pod "openstack-baremetal-operator-controller-manager-84b575879fwx7br" (UID: "3ab4495a-ff87-4ae7-b343-17ff4dcfeb5a") : secret "openstack-baremetal-operator-webhook-server-cert" not found Dec 10 12:31:55 crc kubenswrapper[4689]: I1210 12:31:55.591075 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-5697bb5779-tzbn5" event={"ID":"670989a1-8b21-473a-8624-862930a7d70b","Type":"ContainerStarted","Data":"a8e9c78ca57cbccac7876f743f4d11c832f79cec94bd8e330da4b50d1f91d2a3"} Dec 10 12:31:55 crc kubenswrapper[4689]: I1210 12:31:55.604424 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-697fb699cf-dtclt" event={"ID":"0b413153-162d-46ad-9b9a-b44869127ee7","Type":"ContainerStarted","Data":"8140b8083339fb78dd91e96235340c0df541f654caf5c1f1ba53c0e5ce0a86a3"} Dec 10 12:31:55 crc kubenswrapper[4689]: I1210 12:31:55.606138 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-czdwk" event={"ID":"38a45de6-7988-4cb1-86b8-0164c52f2dc5","Type":"ContainerStarted","Data":"6be664ece66ff6626b049d5483642f5eb77ec1e0e2dcb16bfc68ca4b24e2f7e3"} Dec 10 12:31:55 crc kubenswrapper[4689]: I1210 12:31:55.608267 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-79c8c4686c-rwg47" event={"ID":"8209377b-970c-4faf-ac5b-1e429d2bdccd","Type":"ContainerStarted","Data":"ec0a55c687e6c88998d50c91051c7b4bbf1899b1df4243f813fabbc8894513e4"} Dec 10 12:31:55 crc kubenswrapper[4689]: I1210 12:31:55.610111 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-697bc559fc-qlxtq" event={"ID":"dd93e8ba-afd7-4d03-917f-873352cfefc8","Type":"ContainerStarted","Data":"c8895b7969159e99358fdf61151afd256c10d6306a07c670b4b212636fea2c1a"} Dec 10 12:31:55 crc kubenswrapper[4689]: I1210 12:31:55.611539 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-v7wvl" 
event={"ID":"8c68b5ee-e36f-428f-8b70-480581f7e120","Type":"ContainerStarted","Data":"349f068c1993f959303b713ac0e709600cc2163f86fa68f44e75c3c2bcbcc079"} Dec 10 12:31:55 crc kubenswrapper[4689]: E1210 12:31:55.615996 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/nova-operator@sha256:779f0cee6024d0fb8f259b036fe790e62aa5a3b0431ea9bf15a6e7d02e2e5670\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/nova-operator-controller-manager-697bc559fc-qlxtq" podUID="dd93e8ba-afd7-4d03-917f-873352cfefc8" Dec 10 12:31:55 crc kubenswrapper[4689]: I1210 12:31:55.616166 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-6c677c69b-4hdrw" event={"ID":"e6db2b03-cb28-4161-bae6-6eecce28c871","Type":"ContainerStarted","Data":"24dcbb0508265008f9728ca07d643a68ccf35683bf8d12f96ba39a353e0c5dfb"} Dec 10 12:31:55 crc kubenswrapper[4689]: I1210 12:31:55.617214 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-9d58d64bc-d995k" event={"ID":"08c0b74f-cdff-4fc6-b2cf-4d61fdd4177d","Type":"ContainerStarted","Data":"05807f7002e8a6c7256374ab2318543b1d352bcc6310a82a4c3cf572fa704725"} Dec 10 12:31:55 crc kubenswrapper[4689]: E1210 12:31:55.618670 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/swift-operator@sha256:3aa109bb973253ae9dcf339b9b65abbd1176cdb4be672c93e538a5f113816991\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/swift-operator-controller-manager-9d58d64bc-d995k" podUID="08c0b74f-cdff-4fc6-b2cf-4d61fdd4177d" Dec 10 12:31:55 crc kubenswrapper[4689]: I1210 12:31:55.618757 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-78f8948974-44v2s" event={"ID":"e616a259-dbf9-469a-987e-b3a6f36044a4","Type":"ContainerStarted","Data":"6833fc82728e2ce00f7e2794222195666014153c30da407777b78878ea74af6a"} Dec 10 12:31:55 crc kubenswrapper[4689]: E1210 12:31:55.619710 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ovn-operator@sha256:635a4aef9d6f0b799e8ec91333dbb312160c001d05b3c63f614c124e0b67cb59\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-v7wvl" podUID="8c68b5ee-e36f-428f-8b70-480581f7e120" Dec 10 12:31:55 crc kubenswrapper[4689]: E1210 12:31:55.620209 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/placement-operator@sha256:d29650b006da97eb9178fcc58f2eb9fead8c2b414fac18f86a3c3a1507488c4f\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling 
image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/placement-operator-controller-manager-78f8948974-44v2s" podUID="e616a259-dbf9-469a-987e-b3a6f36044a4" Dec 10 12:31:55 crc kubenswrapper[4689]: I1210 12:31:55.889089 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/af7cc69a-a411-43ec-b32e-41e6a343388b-webhook-certs\") pod \"openstack-operator-controller-manager-5997c5ddf6-h459p\" (UID: \"af7cc69a-a411-43ec-b32e-41e6a343388b\") " pod="openstack-operators/openstack-operator-controller-manager-5997c5ddf6-h459p" Dec 10 12:31:55 crc kubenswrapper[4689]: I1210 12:31:55.889257 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/af7cc69a-a411-43ec-b32e-41e6a343388b-metrics-certs\") pod \"openstack-operator-controller-manager-5997c5ddf6-h459p\" (UID: \"af7cc69a-a411-43ec-b32e-41e6a343388b\") " pod="openstack-operators/openstack-operator-controller-manager-5997c5ddf6-h459p" Dec 10 12:31:55 crc kubenswrapper[4689]: E1210 12:31:55.889271 4689 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Dec 10 12:31:55 crc kubenswrapper[4689]: E1210 12:31:55.889339 4689 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/af7cc69a-a411-43ec-b32e-41e6a343388b-webhook-certs podName:af7cc69a-a411-43ec-b32e-41e6a343388b nodeName:}" failed. No retries permitted until 2025-12-10 12:31:57.889321775 +0000 UTC m=+985.677402913 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/af7cc69a-a411-43ec-b32e-41e6a343388b-webhook-certs") pod "openstack-operator-controller-manager-5997c5ddf6-h459p" (UID: "af7cc69a-a411-43ec-b32e-41e6a343388b") : secret "webhook-server-cert" not found Dec 10 12:31:55 crc kubenswrapper[4689]: E1210 12:31:55.889446 4689 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Dec 10 12:31:55 crc kubenswrapper[4689]: E1210 12:31:55.889518 4689 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/af7cc69a-a411-43ec-b32e-41e6a343388b-metrics-certs podName:af7cc69a-a411-43ec-b32e-41e6a343388b nodeName:}" failed. No retries permitted until 2025-12-10 12:31:57.88950012 +0000 UTC m=+985.677581258 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/af7cc69a-a411-43ec-b32e-41e6a343388b-metrics-certs") pod "openstack-operator-controller-manager-5997c5ddf6-h459p" (UID: "af7cc69a-a411-43ec-b32e-41e6a343388b") : secret "metrics-server-cert" not found Dec 10 12:31:56 crc kubenswrapper[4689]: E1210 12:31:56.631420 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:101b3e007d8c9f2e183262d7712f986ad51256448099069bc14f1ea5f997ab94\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/test-operator-controller-manager-5854674fcc-fghxw" podUID="20ec28ba-f929-4e94-833b-24a213da89a6" Dec 10 12:31:56 crc kubenswrapper[4689]: E1210 12:31:56.631886 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/nova-operator@sha256:779f0cee6024d0fb8f259b036fe790e62aa5a3b0431ea9bf15a6e7d02e2e5670\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/nova-operator-controller-manager-697bc559fc-qlxtq" podUID="dd93e8ba-afd7-4d03-917f-873352cfefc8" Dec 10 12:31:56 crc kubenswrapper[4689]: E1210 12:31:56.632283 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/octavia-operator@sha256:d9a3694865a7d54ee96397add18c3898886e98d079aa20876a0f4de1fa7a7168\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/octavia-operator-controller-manager-998648c74-pwvc2" podUID="04286780-356c-4f76-9168-5a80c36d2aa3" Dec 10 12:31:56 crc kubenswrapper[4689]: E1210 12:31:56.632368 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/swift-operator@sha256:3aa109bb973253ae9dcf339b9b65abbd1176cdb4be672c93e538a5f113816991\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/swift-operator-controller-manager-9d58d64bc-d995k" podUID="08c0b74f-cdff-4fc6-b2cf-4d61fdd4177d" Dec 10 12:31:56 crc kubenswrapper[4689]: E1210 12:31:56.632610 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/placement-operator@sha256:d29650b006da97eb9178fcc58f2eb9fead8c2b414fac18f86a3c3a1507488c4f\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/placement-operator-controller-manager-78f8948974-44v2s" podUID="e616a259-dbf9-469a-987e-b3a6f36044a4" Dec 10 12:31:56 crc kubenswrapper[4689]: E1210 
12:31:56.633926 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ovn-operator@sha256:635a4aef9d6f0b799e8ec91333dbb312160c001d05b3c63f614c124e0b67cb59\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-v7wvl" podUID="8c68b5ee-e36f-428f-8b70-480581f7e120" Dec 10 12:31:56 crc kubenswrapper[4689]: E1210 12:31:56.634899 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/watcher-operator@sha256:961417d59f527d925ac48ff6a11de747d0493315e496e34dc83d76a1a1fff58a\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/watcher-operator-controller-manager-75944c9b7-rq8x6" podUID="88c6fb38-fa6d-497e-87c7-32833f1b5a04" Dec 10 12:31:57 crc kubenswrapper[4689]: I1210 12:31:57.414102 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/13ec50ac-3e46-4615-88f9-070c7a647158-cert\") pod \"infra-operator-controller-manager-78d48bff9d-lkvwb\" (UID: \"13ec50ac-3e46-4615-88f9-070c7a647158\") " pod="openstack-operators/infra-operator-controller-manager-78d48bff9d-lkvwb" Dec 10 12:31:57 crc kubenswrapper[4689]: E1210 12:31:57.414265 4689 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Dec 10 12:31:57 crc kubenswrapper[4689]: E1210 12:31:57.414323 4689 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/13ec50ac-3e46-4615-88f9-070c7a647158-cert podName:13ec50ac-3e46-4615-88f9-070c7a647158 nodeName:}" failed. No retries permitted until 2025-12-10 12:32:01.414308961 +0000 UTC m=+989.202390099 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/13ec50ac-3e46-4615-88f9-070c7a647158-cert") pod "infra-operator-controller-manager-78d48bff9d-lkvwb" (UID: "13ec50ac-3e46-4615-88f9-070c7a647158") : secret "infra-operator-webhook-server-cert" not found Dec 10 12:31:57 crc kubenswrapper[4689]: I1210 12:31:57.616402 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/3ab4495a-ff87-4ae7-b343-17ff4dcfeb5a-cert\") pod \"openstack-baremetal-operator-controller-manager-84b575879fwx7br\" (UID: \"3ab4495a-ff87-4ae7-b343-17ff4dcfeb5a\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-84b575879fwx7br" Dec 10 12:31:57 crc kubenswrapper[4689]: E1210 12:31:57.616551 4689 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Dec 10 12:31:57 crc kubenswrapper[4689]: E1210 12:31:57.616610 4689 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3ab4495a-ff87-4ae7-b343-17ff4dcfeb5a-cert podName:3ab4495a-ff87-4ae7-b343-17ff4dcfeb5a nodeName:}" failed. 
No retries permitted until 2025-12-10 12:32:01.616594917 +0000 UTC m=+989.404676055 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/3ab4495a-ff87-4ae7-b343-17ff4dcfeb5a-cert") pod "openstack-baremetal-operator-controller-manager-84b575879fwx7br" (UID: "3ab4495a-ff87-4ae7-b343-17ff4dcfeb5a") : secret "openstack-baremetal-operator-webhook-server-cert" not found Dec 10 12:31:57 crc kubenswrapper[4689]: I1210 12:31:57.920707 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/af7cc69a-a411-43ec-b32e-41e6a343388b-metrics-certs\") pod \"openstack-operator-controller-manager-5997c5ddf6-h459p\" (UID: \"af7cc69a-a411-43ec-b32e-41e6a343388b\") " pod="openstack-operators/openstack-operator-controller-manager-5997c5ddf6-h459p" Dec 10 12:31:57 crc kubenswrapper[4689]: I1210 12:31:57.921086 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/af7cc69a-a411-43ec-b32e-41e6a343388b-webhook-certs\") pod \"openstack-operator-controller-manager-5997c5ddf6-h459p\" (UID: \"af7cc69a-a411-43ec-b32e-41e6a343388b\") " pod="openstack-operators/openstack-operator-controller-manager-5997c5ddf6-h459p" Dec 10 12:31:57 crc kubenswrapper[4689]: E1210 12:31:57.921325 4689 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Dec 10 12:31:57 crc kubenswrapper[4689]: E1210 12:31:57.921376 4689 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/af7cc69a-a411-43ec-b32e-41e6a343388b-webhook-certs podName:af7cc69a-a411-43ec-b32e-41e6a343388b nodeName:}" failed. No retries permitted until 2025-12-10 12:32:01.921361795 +0000 UTC m=+989.709442933 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/af7cc69a-a411-43ec-b32e-41e6a343388b-webhook-certs") pod "openstack-operator-controller-manager-5997c5ddf6-h459p" (UID: "af7cc69a-a411-43ec-b32e-41e6a343388b") : secret "webhook-server-cert" not found Dec 10 12:31:57 crc kubenswrapper[4689]: E1210 12:31:57.921714 4689 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Dec 10 12:31:57 crc kubenswrapper[4689]: E1210 12:31:57.921740 4689 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/af7cc69a-a411-43ec-b32e-41e6a343388b-metrics-certs podName:af7cc69a-a411-43ec-b32e-41e6a343388b nodeName:}" failed. No retries permitted until 2025-12-10 12:32:01.921732494 +0000 UTC m=+989.709813632 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/af7cc69a-a411-43ec-b32e-41e6a343388b-metrics-certs") pod "openstack-operator-controller-manager-5997c5ddf6-h459p" (UID: "af7cc69a-a411-43ec-b32e-41e6a343388b") : secret "metrics-server-cert" not found Dec 10 12:32:01 crc kubenswrapper[4689]: I1210 12:32:01.487101 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/13ec50ac-3e46-4615-88f9-070c7a647158-cert\") pod \"infra-operator-controller-manager-78d48bff9d-lkvwb\" (UID: \"13ec50ac-3e46-4615-88f9-070c7a647158\") " pod="openstack-operators/infra-operator-controller-manager-78d48bff9d-lkvwb" Dec 10 12:32:01 crc kubenswrapper[4689]: E1210 12:32:01.487323 4689 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Dec 10 12:32:01 crc kubenswrapper[4689]: E1210 12:32:01.487448 4689 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/13ec50ac-3e46-4615-88f9-070c7a647158-cert podName:13ec50ac-3e46-4615-88f9-070c7a647158 nodeName:}" failed. No retries permitted until 2025-12-10 12:32:09.487415715 +0000 UTC m=+997.275496933 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/13ec50ac-3e46-4615-88f9-070c7a647158-cert") pod "infra-operator-controller-manager-78d48bff9d-lkvwb" (UID: "13ec50ac-3e46-4615-88f9-070c7a647158") : secret "infra-operator-webhook-server-cert" not found Dec 10 12:32:01 crc kubenswrapper[4689]: I1210 12:32:01.690045 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/3ab4495a-ff87-4ae7-b343-17ff4dcfeb5a-cert\") pod \"openstack-baremetal-operator-controller-manager-84b575879fwx7br\" (UID: \"3ab4495a-ff87-4ae7-b343-17ff4dcfeb5a\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-84b575879fwx7br" Dec 10 12:32:01 crc kubenswrapper[4689]: E1210 12:32:01.690415 4689 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Dec 10 12:32:01 crc kubenswrapper[4689]: E1210 12:32:01.690496 4689 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3ab4495a-ff87-4ae7-b343-17ff4dcfeb5a-cert podName:3ab4495a-ff87-4ae7-b343-17ff4dcfeb5a nodeName:}" failed. No retries permitted until 2025-12-10 12:32:09.6904722 +0000 UTC m=+997.478553378 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/3ab4495a-ff87-4ae7-b343-17ff4dcfeb5a-cert") pod "openstack-baremetal-operator-controller-manager-84b575879fwx7br" (UID: "3ab4495a-ff87-4ae7-b343-17ff4dcfeb5a") : secret "openstack-baremetal-operator-webhook-server-cert" not found Dec 10 12:32:01 crc kubenswrapper[4689]: I1210 12:32:01.998301 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/af7cc69a-a411-43ec-b32e-41e6a343388b-metrics-certs\") pod \"openstack-operator-controller-manager-5997c5ddf6-h459p\" (UID: \"af7cc69a-a411-43ec-b32e-41e6a343388b\") " pod="openstack-operators/openstack-operator-controller-manager-5997c5ddf6-h459p" Dec 10 12:32:01 crc kubenswrapper[4689]: I1210 12:32:01.998438 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/af7cc69a-a411-43ec-b32e-41e6a343388b-webhook-certs\") pod \"openstack-operator-controller-manager-5997c5ddf6-h459p\" (UID: \"af7cc69a-a411-43ec-b32e-41e6a343388b\") " pod="openstack-operators/openstack-operator-controller-manager-5997c5ddf6-h459p" Dec 10 12:32:01 crc kubenswrapper[4689]: E1210 12:32:01.998487 4689 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Dec 10 12:32:01 crc kubenswrapper[4689]: E1210 12:32:01.998555 4689 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/af7cc69a-a411-43ec-b32e-41e6a343388b-metrics-certs podName:af7cc69a-a411-43ec-b32e-41e6a343388b nodeName:}" failed. No retries permitted until 2025-12-10 12:32:09.99853702 +0000 UTC m=+997.786618158 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/af7cc69a-a411-43ec-b32e-41e6a343388b-metrics-certs") pod "openstack-operator-controller-manager-5997c5ddf6-h459p" (UID: "af7cc69a-a411-43ec-b32e-41e6a343388b") : secret "metrics-server-cert" not found Dec 10 12:32:01 crc kubenswrapper[4689]: E1210 12:32:01.998742 4689 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Dec 10 12:32:01 crc kubenswrapper[4689]: E1210 12:32:01.998848 4689 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/af7cc69a-a411-43ec-b32e-41e6a343388b-webhook-certs podName:af7cc69a-a411-43ec-b32e-41e6a343388b nodeName:}" failed. No retries permitted until 2025-12-10 12:32:09.998818057 +0000 UTC m=+997.786899245 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/af7cc69a-a411-43ec-b32e-41e6a343388b-webhook-certs") pod "openstack-operator-controller-manager-5997c5ddf6-h459p" (UID: "af7cc69a-a411-43ec-b32e-41e6a343388b") : secret "webhook-server-cert" not found Dec 10 12:32:05 crc kubenswrapper[4689]: E1210 12:32:05.954819 4689 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-rmhhs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod keystone-operator-controller-manager-59fd99cc6f-gdgcj_openstack-operators(c346bbde-239c-4f76-91da-c4116ad0a487): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Dec 10 12:32:05 crc kubenswrapper[4689]: E1210 12:32:05.954839 4689 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-8bvsx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start 
failed in pod heat-operator-controller-manager-5f64f6f8bb-6fmnw_openstack-operators(e9f4ae72-b49e-4144-a49b-72c2bbd1b77c): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Dec 10 12:32:05 crc kubenswrapper[4689]: E1210 12:32:05.956387 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-6fmnw" podUID="e9f4ae72-b49e-4144-a49b-72c2bbd1b77c" Dec 10 12:32:05 crc kubenswrapper[4689]: E1210 12:32:05.956390 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/keystone-operator-controller-manager-59fd99cc6f-gdgcj" podUID="c346bbde-239c-4f76-91da-c4116ad0a487" Dec 10 12:32:06 crc kubenswrapper[4689]: I1210 12:32:06.707670 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-58d5ff84df-8xrb5" event={"ID":"9e2487e5-677b-4344-9ab2-d419e03876f2","Type":"ContainerStarted","Data":"751a1d3a05ca56a6aa5b9eb17bc70e26b00ba390698dcb3918b84f5123d4e5cc"} Dec 10 12:32:06 crc kubenswrapper[4689]: I1210 12:32:06.717470 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-6fmnw" event={"ID":"e9f4ae72-b49e-4144-a49b-72c2bbd1b77c","Type":"ContainerStarted","Data":"323b76de4cb7fccf2c5f201a7195ebb307160a823180b3b2f99ad429201e448e"} Dec 10 12:32:06 crc kubenswrapper[4689]: I1210 12:32:06.718226 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-6fmnw" Dec 10 12:32:06 crc kubenswrapper[4689]: E1210 12:32:06.719098 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"" pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-6fmnw" podUID="e9f4ae72-b49e-4144-a49b-72c2bbd1b77c" Dec 10 12:32:06 crc kubenswrapper[4689]: I1210 12:32:06.720696 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-5697bb5779-tzbn5" event={"ID":"670989a1-8b21-473a-8624-862930a7d70b","Type":"ContainerStarted","Data":"d005cc43ada0e428345ff3052bce9852e8f7d1192dff2290f9892a7cea38c483"} Dec 10 12:32:06 crc kubenswrapper[4689]: I1210 12:32:06.751039 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-697fb699cf-dtclt" event={"ID":"0b413153-162d-46ad-9b9a-b44869127ee7","Type":"ContainerStarted","Data":"77c6198daf6118cb9f804830a2449fa88dbf1e76daf7c2cacf1759231720d701"} Dec 10 12:32:06 crc kubenswrapper[4689]: I1210 12:32:06.752731 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-59fd99cc6f-gdgcj" event={"ID":"c346bbde-239c-4f76-91da-c4116ad0a487","Type":"ContainerStarted","Data":"c290640064aaf65cab6c2e8db3b4e4a7c785e0ccb6926b33bf5ed924d9d3b529"} Dec 10 12:32:06 crc kubenswrapper[4689]: I1210 12:32:06.752853 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-59fd99cc6f-gdgcj" Dec 10 12:32:06 crc kubenswrapper[4689]: E1210 12:32:06.756878 4689 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"" pod="openstack-operators/keystone-operator-controller-manager-59fd99cc6f-gdgcj" podUID="c346bbde-239c-4f76-91da-c4116ad0a487" Dec 10 12:32:06 crc kubenswrapper[4689]: I1210 12:32:06.757596 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-79c8c4686c-rwg47" event={"ID":"8209377b-970c-4faf-ac5b-1e429d2bdccd","Type":"ContainerStarted","Data":"365dd6ca9248058fef19d8dc9b601fad1efc844cf438fda3d20be46a45b2ce7c"} Dec 10 12:32:06 crc kubenswrapper[4689]: I1210 12:32:06.758598 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-46w9f" event={"ID":"ec9c74bb-c8dc-409b-817c-74963a395df8","Type":"ContainerStarted","Data":"f130a902d5c37a792d1947a2b136ca8b6fcb17dd25ded8b158f9dbe10df9f41e"} Dec 10 12:32:06 crc kubenswrapper[4689]: I1210 12:32:06.763991 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-66xh9" event={"ID":"7ca7fc40-0bb8-402f-9e73-d1d267340b28","Type":"ContainerStarted","Data":"ba49d291bb8849d080a2f97600a9fc9a3e4e90b4a727c4ad53d055cf0b3d6cbd"} Dec 10 12:32:06 crc kubenswrapper[4689]: I1210 12:32:06.765730 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-69f4484999-lb8n4" event={"ID":"e1c471e3-8ecc-4db9-95c0-a4a13e287aba","Type":"ContainerStarted","Data":"8feeab035ff3177fb75115699f4ef039cb7938c83b9ca6fb548656e78bc72c0e"} Dec 10 12:32:06 crc kubenswrapper[4689]: I1210 12:32:06.767317 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-5b5fd79c9c-n69ks" event={"ID":"4dc52122-5456-453a-9d5f-d2fce910bb61","Type":"ContainerStarted","Data":"701dc2fc7396a689e2477dc83cd571ef5f326b7237fd6d770ce0f6d34b60af72"} Dec 10 12:32:06 crc kubenswrapper[4689]: I1210 12:32:06.768592 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-czdwk" event={"ID":"38a45de6-7988-4cb1-86b8-0164c52f2dc5","Type":"ContainerStarted","Data":"85d23a0c4aff47e4a42db2af1748c00c365fa184d73df5e788c74bfd2ca5894c"} Dec 10 12:32:06 crc kubenswrapper[4689]: I1210 12:32:06.770403 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-6c677c69b-4hdrw" event={"ID":"e6db2b03-cb28-4161-bae6-6eecce28c871","Type":"ContainerStarted","Data":"e9a6abf6a511de15eee73aa3754a3bbc795a07f2c64e1a151c55f35da7acfd0e"} Dec 10 12:32:06 crc kubenswrapper[4689]: I1210 12:32:06.785196 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-gk2zb" event={"ID":"c70e0866-b017-4945-9e4b-c69eec327948","Type":"ContainerStarted","Data":"7bc1f5bbf737e1a878c44398deffec5ce61f6e71867ceb84eac63906c529a38c"} Dec 10 12:32:06 crc kubenswrapper[4689]: I1210 12:32:06.850288 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-czdwk" podStartSLOduration=2.7866082629999998 podStartE2EDuration="12.850271698s" podCreationTimestamp="2025-12-10 12:31:54 +0000 UTC" firstStartedPulling="2025-12-10 12:31:55.47539244 +0000 UTC m=+983.263473568" lastFinishedPulling="2025-12-10 
12:32:05.539055845 +0000 UTC m=+993.327137003" observedRunningTime="2025-12-10 12:32:06.823079622 +0000 UTC m=+994.611160760" watchObservedRunningTime="2025-12-10 12:32:06.850271698 +0000 UTC m=+994.638352836" Dec 10 12:32:07 crc kubenswrapper[4689]: E1210 12:32:07.810958 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"" pod="openstack-operators/keystone-operator-controller-manager-59fd99cc6f-gdgcj" podUID="c346bbde-239c-4f76-91da-c4116ad0a487" Dec 10 12:32:07 crc kubenswrapper[4689]: E1210 12:32:07.813697 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"" pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-6fmnw" podUID="e9f4ae72-b49e-4144-a49b-72c2bbd1b77c" Dec 10 12:32:09 crc kubenswrapper[4689]: I1210 12:32:09.514778 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/13ec50ac-3e46-4615-88f9-070c7a647158-cert\") pod \"infra-operator-controller-manager-78d48bff9d-lkvwb\" (UID: \"13ec50ac-3e46-4615-88f9-070c7a647158\") " pod="openstack-operators/infra-operator-controller-manager-78d48bff9d-lkvwb" Dec 10 12:32:09 crc kubenswrapper[4689]: I1210 12:32:09.534078 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/13ec50ac-3e46-4615-88f9-070c7a647158-cert\") pod \"infra-operator-controller-manager-78d48bff9d-lkvwb\" (UID: \"13ec50ac-3e46-4615-88f9-070c7a647158\") " pod="openstack-operators/infra-operator-controller-manager-78d48bff9d-lkvwb" Dec 10 12:32:09 crc kubenswrapper[4689]: I1210 12:32:09.547491 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-78d48bff9d-lkvwb" Dec 10 12:32:09 crc kubenswrapper[4689]: I1210 12:32:09.717310 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/3ab4495a-ff87-4ae7-b343-17ff4dcfeb5a-cert\") pod \"openstack-baremetal-operator-controller-manager-84b575879fwx7br\" (UID: \"3ab4495a-ff87-4ae7-b343-17ff4dcfeb5a\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-84b575879fwx7br" Dec 10 12:32:09 crc kubenswrapper[4689]: I1210 12:32:09.721497 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/3ab4495a-ff87-4ae7-b343-17ff4dcfeb5a-cert\") pod \"openstack-baremetal-operator-controller-manager-84b575879fwx7br\" (UID: \"3ab4495a-ff87-4ae7-b343-17ff4dcfeb5a\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-84b575879fwx7br" Dec 10 12:32:09 crc kubenswrapper[4689]: I1210 12:32:09.956250 4689 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-84b575879fwx7br" Dec 10 12:32:10 crc kubenswrapper[4689]: I1210 12:32:10.021815 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/af7cc69a-a411-43ec-b32e-41e6a343388b-metrics-certs\") pod \"openstack-operator-controller-manager-5997c5ddf6-h459p\" (UID: \"af7cc69a-a411-43ec-b32e-41e6a343388b\") " pod="openstack-operators/openstack-operator-controller-manager-5997c5ddf6-h459p" Dec 10 12:32:10 crc kubenswrapper[4689]: I1210 12:32:10.021874 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/af7cc69a-a411-43ec-b32e-41e6a343388b-webhook-certs\") pod \"openstack-operator-controller-manager-5997c5ddf6-h459p\" (UID: \"af7cc69a-a411-43ec-b32e-41e6a343388b\") " pod="openstack-operators/openstack-operator-controller-manager-5997c5ddf6-h459p" Dec 10 12:32:10 crc kubenswrapper[4689]: E1210 12:32:10.022071 4689 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Dec 10 12:32:10 crc kubenswrapper[4689]: E1210 12:32:10.022075 4689 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Dec 10 12:32:10 crc kubenswrapper[4689]: E1210 12:32:10.022177 4689 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/af7cc69a-a411-43ec-b32e-41e6a343388b-metrics-certs podName:af7cc69a-a411-43ec-b32e-41e6a343388b nodeName:}" failed. No retries permitted until 2025-12-10 12:32:26.022158396 +0000 UTC m=+1013.810239534 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/af7cc69a-a411-43ec-b32e-41e6a343388b-metrics-certs") pod "openstack-operator-controller-manager-5997c5ddf6-h459p" (UID: "af7cc69a-a411-43ec-b32e-41e6a343388b") : secret "metrics-server-cert" not found Dec 10 12:32:10 crc kubenswrapper[4689]: E1210 12:32:10.022268 4689 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/af7cc69a-a411-43ec-b32e-41e6a343388b-webhook-certs podName:af7cc69a-a411-43ec-b32e-41e6a343388b nodeName:}" failed. No retries permitted until 2025-12-10 12:32:26.022244598 +0000 UTC m=+1013.810325796 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/af7cc69a-a411-43ec-b32e-41e6a343388b-webhook-certs") pod "openstack-operator-controller-manager-5997c5ddf6-h459p" (UID: "af7cc69a-a411-43ec-b32e-41e6a343388b") : secret "webhook-server-cert" not found Dec 10 12:32:10 crc kubenswrapper[4689]: I1210 12:32:10.176221 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-78d48bff9d-lkvwb"] Dec 10 12:32:10 crc kubenswrapper[4689]: I1210 12:32:10.493879 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-84b575879fwx7br"] Dec 10 12:32:10 crc kubenswrapper[4689]: I1210 12:32:10.836519 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-gk2zb" event={"ID":"c70e0866-b017-4945-9e4b-c69eec327948","Type":"ContainerStarted","Data":"54aadf76724ac88672102fcca81c58d4d1b8c4331ccd3f53c57cfbb68ea01016"} Dec 10 12:32:10 crc kubenswrapper[4689]: I1210 12:32:10.837033 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-gk2zb" Dec 10 12:32:10 crc kubenswrapper[4689]: I1210 12:32:10.838507 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-46w9f" event={"ID":"ec9c74bb-c8dc-409b-817c-74963a395df8","Type":"ContainerStarted","Data":"880dca57816a35c80e9202da68ca88fb04f9e7c06ba358d93dc969d33c8b4cab"} Dec 10 12:32:10 crc kubenswrapper[4689]: I1210 12:32:10.838664 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-46w9f" Dec 10 12:32:10 crc kubenswrapper[4689]: I1210 12:32:10.839608 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-gk2zb" Dec 10 12:32:10 crc kubenswrapper[4689]: I1210 12:32:10.840326 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-58d5ff84df-8xrb5" event={"ID":"9e2487e5-677b-4344-9ab2-d419e03876f2","Type":"ContainerStarted","Data":"6e5ba9a9bf293708bb05494707e4a51db5e6f6d85c1234cd6bf6d6a380039084"} Dec 10 12:32:10 crc kubenswrapper[4689]: I1210 12:32:10.840377 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-46w9f" Dec 10 12:32:10 crc kubenswrapper[4689]: I1210 12:32:10.842764 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-5697bb5779-tzbn5" event={"ID":"670989a1-8b21-473a-8624-862930a7d70b","Type":"ContainerStarted","Data":"9fff8fdcdba8fd078b11859ab3dbb40dd6c08db081314be3fa73ad7a363d00df"} Dec 10 12:32:10 crc kubenswrapper[4689]: I1210 12:32:10.842819 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-5697bb5779-tzbn5" Dec 10 12:32:10 crc kubenswrapper[4689]: I1210 12:32:10.844574 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/glance-operator-controller-manager-5697bb5779-tzbn5" Dec 10 12:32:10 crc kubenswrapper[4689]: I1210 12:32:10.846246 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/designate-operator-controller-manager-697fb699cf-dtclt" event={"ID":"0b413153-162d-46ad-9b9a-b44869127ee7","Type":"ContainerStarted","Data":"b7c03c2896fd34f79830620c7f94f90e0385334debc4aad1d8c18fe4e3ee9c6b"} Dec 10 12:32:10 crc kubenswrapper[4689]: I1210 12:32:10.846518 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/designate-operator-controller-manager-697fb699cf-dtclt" Dec 10 12:32:10 crc kubenswrapper[4689]: I1210 12:32:10.847897 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/designate-operator-controller-manager-697fb699cf-dtclt" Dec 10 12:32:10 crc kubenswrapper[4689]: I1210 12:32:10.847920 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-6c677c69b-4hdrw" event={"ID":"e6db2b03-cb28-4161-bae6-6eecce28c871","Type":"ContainerStarted","Data":"023853264ba3dd8e1415690ca686a612e2b405f9a285fb4bfe28b75b9fccf341"} Dec 10 12:32:10 crc kubenswrapper[4689]: I1210 12:32:10.848126 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-6c677c69b-4hdrw" Dec 10 12:32:10 crc kubenswrapper[4689]: I1210 12:32:10.850459 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/cinder-operator-controller-manager-6c677c69b-4hdrw" Dec 10 12:32:10 crc kubenswrapper[4689]: I1210 12:32:10.855564 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-gk2zb" podStartSLOduration=2.832651593 podStartE2EDuration="17.855550073s" podCreationTimestamp="2025-12-10 12:31:53 +0000 UTC" firstStartedPulling="2025-12-10 12:31:54.841257642 +0000 UTC m=+982.629338770" lastFinishedPulling="2025-12-10 12:32:09.864156112 +0000 UTC m=+997.652237250" observedRunningTime="2025-12-10 12:32:10.850631401 +0000 UTC m=+998.638712539" watchObservedRunningTime="2025-12-10 12:32:10.855550073 +0000 UTC m=+998.643631211" Dec 10 12:32:10 crc kubenswrapper[4689]: I1210 12:32:10.875995 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/glance-operator-controller-manager-5697bb5779-tzbn5" podStartSLOduration=2.943848002 podStartE2EDuration="17.875964451s" podCreationTimestamp="2025-12-10 12:31:53 +0000 UTC" firstStartedPulling="2025-12-10 12:31:55.052287807 +0000 UTC m=+982.840368935" lastFinishedPulling="2025-12-10 12:32:09.984404256 +0000 UTC m=+997.772485384" observedRunningTime="2025-12-10 12:32:10.873950212 +0000 UTC m=+998.662031360" watchObservedRunningTime="2025-12-10 12:32:10.875964451 +0000 UTC m=+998.664045579" Dec 10 12:32:10 crc kubenswrapper[4689]: I1210 12:32:10.914358 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/designate-operator-controller-manager-697fb699cf-dtclt" podStartSLOduration=2.736961622 podStartE2EDuration="17.914342737s" podCreationTimestamp="2025-12-10 12:31:53 +0000 UTC" firstStartedPulling="2025-12-10 12:31:54.735694155 +0000 UTC m=+982.523775293" lastFinishedPulling="2025-12-10 12:32:09.91307527 +0000 UTC m=+997.701156408" observedRunningTime="2025-12-10 12:32:10.909310522 +0000 UTC m=+998.697391660" watchObservedRunningTime="2025-12-10 12:32:10.914342737 +0000 UTC m=+998.702423875" Dec 10 12:32:10 crc kubenswrapper[4689]: I1210 12:32:10.934730 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-46w9f" podStartSLOduration=2.876016223 podStartE2EDuration="17.934709044s" podCreationTimestamp="2025-12-10 12:31:53 +0000 UTC" firstStartedPulling="2025-12-10 12:31:54.841548589 +0000 UTC m=+982.629629717" lastFinishedPulling="2025-12-10 12:32:09.9002414 +0000 UTC m=+997.688322538" observedRunningTime="2025-12-10 12:32:10.9317349 +0000 UTC m=+998.719816038" watchObservedRunningTime="2025-12-10 12:32:10.934709044 +0000 UTC m=+998.722790182" Dec 10 12:32:10 crc kubenswrapper[4689]: I1210 12:32:10.949506 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/cinder-operator-controller-manager-6c677c69b-4hdrw" podStartSLOduration=2.832571412 podStartE2EDuration="17.949488702s" podCreationTimestamp="2025-12-10 12:31:53 +0000 UTC" firstStartedPulling="2025-12-10 12:31:54.735155791 +0000 UTC m=+982.523236929" lastFinishedPulling="2025-12-10 12:32:09.852073081 +0000 UTC m=+997.640154219" observedRunningTime="2025-12-10 12:32:10.948435786 +0000 UTC m=+998.736516914" watchObservedRunningTime="2025-12-10 12:32:10.949488702 +0000 UTC m=+998.737569840" Dec 10 12:32:10 crc kubenswrapper[4689]: I1210 12:32:10.964805 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/telemetry-operator-controller-manager-58d5ff84df-8xrb5" podStartSLOduration=3.349847041 podStartE2EDuration="17.964771933s" podCreationTimestamp="2025-12-10 12:31:53 +0000 UTC" firstStartedPulling="2025-12-10 12:31:55.278118378 +0000 UTC m=+983.066199516" lastFinishedPulling="2025-12-10 12:32:09.89304327 +0000 UTC m=+997.681124408" observedRunningTime="2025-12-10 12:32:10.961985403 +0000 UTC m=+998.750066541" watchObservedRunningTime="2025-12-10 12:32:10.964771933 +0000 UTC m=+998.752853071" Dec 10 12:32:11 crc kubenswrapper[4689]: W1210 12:32:11.188021 4689 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3ab4495a_ff87_4ae7_b343_17ff4dcfeb5a.slice/crio-5eabd311eb61c87f08d38f5287f215c142874b1500681d03dcf701aac19d8791 WatchSource:0}: Error finding container 5eabd311eb61c87f08d38f5287f215c142874b1500681d03dcf701aac19d8791: Status 404 returned error can't find the container with id 5eabd311eb61c87f08d38f5287f215c142874b1500681d03dcf701aac19d8791 Dec 10 12:32:11 crc kubenswrapper[4689]: I1210 12:32:11.863987 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-79c8c4686c-rwg47" event={"ID":"8209377b-970c-4faf-ac5b-1e429d2bdccd","Type":"ContainerStarted","Data":"dbe6074a280d44c82ae1446ffb9cce04f9e6dc0ad741ceff6e029acde8e0f168"} Dec 10 12:32:11 crc kubenswrapper[4689]: I1210 12:32:11.864362 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-79c8c4686c-rwg47" Dec 10 12:32:11 crc kubenswrapper[4689]: I1210 12:32:11.866243 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/mariadb-operator-controller-manager-79c8c4686c-rwg47" Dec 10 12:32:11 crc kubenswrapper[4689]: I1210 12:32:11.868534 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-78f8948974-44v2s" event={"ID":"e616a259-dbf9-469a-987e-b3a6f36044a4","Type":"ContainerStarted","Data":"cfe19b4967d29103e68b7ca42c4e10c8e3bef51f8b5c2415fa306209643f57e4"} Dec 10 12:32:11 crc kubenswrapper[4689]: I1210 
12:32:11.877134 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-69f4484999-lb8n4" event={"ID":"e1c471e3-8ecc-4db9-95c0-a4a13e287aba","Type":"ContainerStarted","Data":"e7013d326442fbceb25161ca22b28c186c44b1df989d6e6d101ca2de13f6ac63"} Dec 10 12:32:11 crc kubenswrapper[4689]: I1210 12:32:11.877324 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-69f4484999-lb8n4" Dec 10 12:32:11 crc kubenswrapper[4689]: I1210 12:32:11.881504 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-84b575879fwx7br" event={"ID":"3ab4495a-ff87-4ae7-b343-17ff4dcfeb5a","Type":"ContainerStarted","Data":"5eabd311eb61c87f08d38f5287f215c142874b1500681d03dcf701aac19d8791"} Dec 10 12:32:11 crc kubenswrapper[4689]: I1210 12:32:11.881795 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ironic-operator-controller-manager-69f4484999-lb8n4" Dec 10 12:32:11 crc kubenswrapper[4689]: I1210 12:32:11.895447 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/mariadb-operator-controller-manager-79c8c4686c-rwg47" podStartSLOduration=3.9063413049999998 podStartE2EDuration="18.895430083s" podCreationTimestamp="2025-12-10 12:31:53 +0000 UTC" firstStartedPulling="2025-12-10 12:31:55.069443663 +0000 UTC m=+982.857524801" lastFinishedPulling="2025-12-10 12:32:10.058532441 +0000 UTC m=+997.846613579" observedRunningTime="2025-12-10 12:32:11.889917455 +0000 UTC m=+999.677998603" watchObservedRunningTime="2025-12-10 12:32:11.895430083 +0000 UTC m=+999.683511221" Dec 10 12:32:11 crc kubenswrapper[4689]: I1210 12:32:11.897647 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-78d48bff9d-lkvwb" event={"ID":"13ec50ac-3e46-4615-88f9-070c7a647158","Type":"ContainerStarted","Data":"18f4ed4d125179661fde96728d4c80f5fd3632743166a3816b76d429d8bb6d2d"} Dec 10 12:32:11 crc kubenswrapper[4689]: I1210 12:32:11.898079 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/telemetry-operator-controller-manager-58d5ff84df-8xrb5" Dec 10 12:32:11 crc kubenswrapper[4689]: I1210 12:32:11.905457 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ironic-operator-controller-manager-69f4484999-lb8n4" podStartSLOduration=3.895402683 podStartE2EDuration="18.905447852s" podCreationTimestamp="2025-12-10 12:31:53 +0000 UTC" firstStartedPulling="2025-12-10 12:31:55.052264506 +0000 UTC m=+982.840345644" lastFinishedPulling="2025-12-10 12:32:10.062309675 +0000 UTC m=+997.850390813" observedRunningTime="2025-12-10 12:32:11.904375615 +0000 UTC m=+999.692456753" watchObservedRunningTime="2025-12-10 12:32:11.905447852 +0000 UTC m=+999.693528990" Dec 10 12:32:11 crc kubenswrapper[4689]: I1210 12:32:11.911386 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/telemetry-operator-controller-manager-58d5ff84df-8xrb5" Dec 10 12:32:13 crc kubenswrapper[4689]: I1210 12:32:13.997860 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-controller-manager-59fd99cc6f-gdgcj" Dec 10 12:32:14 crc kubenswrapper[4689]: I1210 12:32:14.186178 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-6fmnw" Dec 10 12:32:21 crc kubenswrapper[4689]: E1210 12:32:21.725436 4689 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/ovn-operator@sha256:635a4aef9d6f0b799e8ec91333dbb312160c001d05b3c63f614c124e0b67cb59" Dec 10 12:32:21 crc kubenswrapper[4689]: E1210 12:32:21.726117 4689 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/ovn-operator@sha256:635a4aef9d6f0b799e8ec91333dbb312160c001d05b3c63f614c124e0b67cb59,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-5pcjk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovn-operator-controller-manager-b6456fdb6-v7wvl_openstack-operators(8c68b5ee-e36f-428f-8b70-480581f7e120): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 10 12:32:21 crc kubenswrapper[4689]: I1210 12:32:21.990551 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-66xh9" event={"ID":"7ca7fc40-0bb8-402f-9e73-d1d267340b28","Type":"ContainerStarted","Data":"819031c7aca8362d0d5983683258d9d2ad650981eecd669904e395eb72eb4b8f"} Dec 10 12:32:21 crc kubenswrapper[4689]: I1210 12:32:21.991898 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-66xh9" Dec 10 12:32:21 
crc kubenswrapper[4689]: I1210 12:32:21.994457 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-5b5fd79c9c-n69ks" event={"ID":"4dc52122-5456-453a-9d5f-d2fce910bb61","Type":"ContainerStarted","Data":"a38a00fa0a39845e895c7fd569224704dc8830d5140b6aed7bd897b1d3782af9"} Dec 10 12:32:21 crc kubenswrapper[4689]: I1210 12:32:21.994847 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/manila-operator-controller-manager-5b5fd79c9c-n69ks" Dec 10 12:32:22 crc kubenswrapper[4689]: I1210 12:32:22.004194 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-66xh9" Dec 10 12:32:22 crc kubenswrapper[4689]: I1210 12:32:22.006689 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/manila-operator-controller-manager-5b5fd79c9c-n69ks" Dec 10 12:32:22 crc kubenswrapper[4689]: I1210 12:32:22.016248 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-66xh9" podStartSLOduration=12.793115798 podStartE2EDuration="29.016222518s" podCreationTimestamp="2025-12-10 12:31:53 +0000 UTC" firstStartedPulling="2025-12-10 12:31:55.045084397 +0000 UTC m=+982.833165535" lastFinishedPulling="2025-12-10 12:32:11.268191107 +0000 UTC m=+999.056272255" observedRunningTime="2025-12-10 12:32:22.005019839 +0000 UTC m=+1009.793100987" watchObservedRunningTime="2025-12-10 12:32:22.016222518 +0000 UTC m=+1009.804303676" Dec 10 12:32:22 crc kubenswrapper[4689]: I1210 12:32:22.036053 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/manila-operator-controller-manager-5b5fd79c9c-n69ks" podStartSLOduration=12.777556142 podStartE2EDuration="29.036010971s" podCreationTimestamp="2025-12-10 12:31:53 +0000 UTC" firstStartedPulling="2025-12-10 12:31:55.009759168 +0000 UTC m=+982.797840306" lastFinishedPulling="2025-12-10 12:32:11.268213997 +0000 UTC m=+999.056295135" observedRunningTime="2025-12-10 12:32:22.032092823 +0000 UTC m=+1009.820173961" watchObservedRunningTime="2025-12-10 12:32:22.036010971 +0000 UTC m=+1009.824092109" Dec 10 12:32:22 crc kubenswrapper[4689]: E1210 12:32:22.468129 4689 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/watcher-operator@sha256:961417d59f527d925ac48ff6a11de747d0493315e496e34dc83d76a1a1fff58a" Dec 10 12:32:22 crc kubenswrapper[4689]: E1210 12:32:22.468395 4689 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/watcher-operator@sha256:961417d59f527d925ac48ff6a11de747d0493315e496e34dc83d76a1a1fff58a,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-bzs8g,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod watcher-operator-controller-manager-75944c9b7-rq8x6_openstack-operators(88c6fb38-fa6d-497e-87c7-32833f1b5a04): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 10 12:32:26 crc kubenswrapper[4689]: I1210 12:32:26.077535 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/af7cc69a-a411-43ec-b32e-41e6a343388b-metrics-certs\") pod \"openstack-operator-controller-manager-5997c5ddf6-h459p\" (UID: \"af7cc69a-a411-43ec-b32e-41e6a343388b\") " pod="openstack-operators/openstack-operator-controller-manager-5997c5ddf6-h459p" Dec 10 12:32:26 crc kubenswrapper[4689]: I1210 12:32:26.077954 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/af7cc69a-a411-43ec-b32e-41e6a343388b-webhook-certs\") pod \"openstack-operator-controller-manager-5997c5ddf6-h459p\" (UID: \"af7cc69a-a411-43ec-b32e-41e6a343388b\") " pod="openstack-operators/openstack-operator-controller-manager-5997c5ddf6-h459p" Dec 10 12:32:26 crc kubenswrapper[4689]: I1210 12:32:26.084283 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/af7cc69a-a411-43ec-b32e-41e6a343388b-metrics-certs\") pod \"openstack-operator-controller-manager-5997c5ddf6-h459p\" (UID: \"af7cc69a-a411-43ec-b32e-41e6a343388b\") " pod="openstack-operators/openstack-operator-controller-manager-5997c5ddf6-h459p" Dec 10 12:32:26 crc kubenswrapper[4689]: I1210 12:32:26.095842 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/af7cc69a-a411-43ec-b32e-41e6a343388b-webhook-certs\") pod \"openstack-operator-controller-manager-5997c5ddf6-h459p\" (UID: \"af7cc69a-a411-43ec-b32e-41e6a343388b\") " pod="openstack-operators/openstack-operator-controller-manager-5997c5ddf6-h459p" Dec 10 12:32:26 crc kubenswrapper[4689]: I1210 12:32:26.163304 4689 util.go:30] "No sandbox for pod 
can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-5997c5ddf6-h459p" Dec 10 12:32:27 crc kubenswrapper[4689]: I1210 12:32:27.280332 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-5997c5ddf6-h459p"] Dec 10 12:32:27 crc kubenswrapper[4689]: W1210 12:32:27.291774 4689 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podaf7cc69a_a411_43ec_b32e_41e6a343388b.slice/crio-62f472132c1d26f5f2292505151ec9733f0154bb461c7a5be82467d003634206 WatchSource:0}: Error finding container 62f472132c1d26f5f2292505151ec9733f0154bb461c7a5be82467d003634206: Status 404 returned error can't find the container with id 62f472132c1d26f5f2292505151ec9733f0154bb461c7a5be82467d003634206 Dec 10 12:32:27 crc kubenswrapper[4689]: E1210 12:32:27.373899 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/watcher-operator-controller-manager-75944c9b7-rq8x6" podUID="88c6fb38-fa6d-497e-87c7-32833f1b5a04" Dec 10 12:32:27 crc kubenswrapper[4689]: E1210 12:32:27.511298 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-v7wvl" podUID="8c68b5ee-e36f-428f-8b70-480581f7e120" Dec 10 12:32:28 crc kubenswrapper[4689]: I1210 12:32:28.051867 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-78d48bff9d-lkvwb" event={"ID":"13ec50ac-3e46-4615-88f9-070c7a647158","Type":"ContainerStarted","Data":"853f02c33d5d63b82fd451d89d924a85bed7d7ab52292e62888e0e81398d3d67"} Dec 10 12:32:28 crc kubenswrapper[4689]: I1210 12:32:28.051914 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-78d48bff9d-lkvwb" event={"ID":"13ec50ac-3e46-4615-88f9-070c7a647158","Type":"ContainerStarted","Data":"64d21e3feb0a69567271a00aa150f2e7f5d62f1226a9b3490a83de3e96c84399"} Dec 10 12:32:28 crc kubenswrapper[4689]: I1210 12:32:28.052790 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-78d48bff9d-lkvwb" Dec 10 12:32:28 crc kubenswrapper[4689]: I1210 12:32:28.054501 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-697bc559fc-qlxtq" event={"ID":"dd93e8ba-afd7-4d03-917f-873352cfefc8","Type":"ContainerStarted","Data":"5868a8574935db9e979c98862ad9697a0a5b2354269e2ecbb2d4cc62a9c36fb2"} Dec 10 12:32:28 crc kubenswrapper[4689]: I1210 12:32:28.054528 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-697bc559fc-qlxtq" event={"ID":"dd93e8ba-afd7-4d03-917f-873352cfefc8","Type":"ContainerStarted","Data":"cfc8d714c0228cfea05efee7aeba41e938f03a7e57e17cef1b16aefc492c5c4e"} Dec 10 12:32:28 crc kubenswrapper[4689]: I1210 12:32:28.054942 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-697bc559fc-qlxtq" Dec 10 12:32:28 crc kubenswrapper[4689]: I1210 12:32:28.057356 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-6fmnw" event={"ID":"e9f4ae72-b49e-4144-a49b-72c2bbd1b77c","Type":"ContainerStarted","Data":"93cd2985954b15c394b69de3c588c974249aee7516ad6603d696b1edb1866462"} Dec 10 12:32:28 crc kubenswrapper[4689]: I1210 12:32:28.062051 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-84b575879fwx7br" event={"ID":"3ab4495a-ff87-4ae7-b343-17ff4dcfeb5a","Type":"ContainerStarted","Data":"873c3478173b1db15af12f09b1f39d6ba0665835d0a40f5eda3eb6974b537deb"} Dec 10 12:32:28 crc kubenswrapper[4689]: I1210 12:32:28.062083 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-84b575879fwx7br" event={"ID":"3ab4495a-ff87-4ae7-b343-17ff4dcfeb5a","Type":"ContainerStarted","Data":"f623306354dcecdbb2fee5e7ef50625106b2373ee43353b049b7aa1ed8fad3f6"} Dec 10 12:32:28 crc kubenswrapper[4689]: I1210 12:32:28.062505 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-baremetal-operator-controller-manager-84b575879fwx7br" Dec 10 12:32:28 crc kubenswrapper[4689]: I1210 12:32:28.069303 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-9d58d64bc-d995k" event={"ID":"08c0b74f-cdff-4fc6-b2cf-4d61fdd4177d","Type":"ContainerStarted","Data":"39cbd5317d7726f5c29263a7bb7d5b32bcb233e024adcfb1aff50f68477744e9"} Dec 10 12:32:28 crc kubenswrapper[4689]: I1210 12:32:28.069329 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-9d58d64bc-d995k" event={"ID":"08c0b74f-cdff-4fc6-b2cf-4d61fdd4177d","Type":"ContainerStarted","Data":"71748e163caa3470510ed4e8e3331b0f1532976696e48a6a9f8cf870720ff2e3"} Dec 10 12:32:28 crc kubenswrapper[4689]: I1210 12:32:28.069756 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-9d58d64bc-d995k" Dec 10 12:32:28 crc kubenswrapper[4689]: I1210 12:32:28.075273 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-78f8948974-44v2s" event={"ID":"e616a259-dbf9-469a-987e-b3a6f36044a4","Type":"ContainerStarted","Data":"9e5ae1fdeb69fd9e31873419ae4aff365fb24e18a68b8aa4a7167534e9f7f7cd"} Dec 10 12:32:28 crc kubenswrapper[4689]: I1210 12:32:28.075764 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/infra-operator-controller-manager-78d48bff9d-lkvwb" podStartSLOduration=19.422681149 podStartE2EDuration="35.075740296s" podCreationTimestamp="2025-12-10 12:31:53 +0000 UTC" firstStartedPulling="2025-12-10 12:32:11.172455063 +0000 UTC m=+998.960536191" lastFinishedPulling="2025-12-10 12:32:26.82551416 +0000 UTC m=+1014.613595338" observedRunningTime="2025-12-10 12:32:28.070366922 +0000 UTC m=+1015.858448060" watchObservedRunningTime="2025-12-10 12:32:28.075740296 +0000 UTC m=+1015.863821434" Dec 10 12:32:28 crc kubenswrapper[4689]: I1210 12:32:28.076296 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-78f8948974-44v2s" Dec 10 12:32:28 crc kubenswrapper[4689]: I1210 12:32:28.078904 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-v7wvl" 
event={"ID":"8c68b5ee-e36f-428f-8b70-480581f7e120","Type":"ContainerStarted","Data":"01995328f759829a400cb7b4fd7ece3c788402ec8890359e9efb2a1bf3709557"} Dec 10 12:32:28 crc kubenswrapper[4689]: I1210 12:32:28.082998 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/placement-operator-controller-manager-78f8948974-44v2s" Dec 10 12:32:28 crc kubenswrapper[4689]: E1210 12:32:28.083728 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ovn-operator@sha256:635a4aef9d6f0b799e8ec91333dbb312160c001d05b3c63f614c124e0b67cb59\\\"\"" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-v7wvl" podUID="8c68b5ee-e36f-428f-8b70-480581f7e120" Dec 10 12:32:28 crc kubenswrapper[4689]: I1210 12:32:28.088216 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-998648c74-pwvc2" event={"ID":"04286780-356c-4f76-9168-5a80c36d2aa3","Type":"ContainerStarted","Data":"2373c5de02154c04a365e2f51c6be9b888b5b7047d11446ae33e8c685d0d5e6f"} Dec 10 12:32:28 crc kubenswrapper[4689]: I1210 12:32:28.088257 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-998648c74-pwvc2" event={"ID":"04286780-356c-4f76-9168-5a80c36d2aa3","Type":"ContainerStarted","Data":"601c8c79b7f9ea69403e9c7dd8e093677cb063ab42f90f2b3c9f214e21b80af3"} Dec 10 12:32:28 crc kubenswrapper[4689]: I1210 12:32:28.088723 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-998648c74-pwvc2" Dec 10 12:32:28 crc kubenswrapper[4689]: I1210 12:32:28.090343 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-5997c5ddf6-h459p" event={"ID":"af7cc69a-a411-43ec-b32e-41e6a343388b","Type":"ContainerStarted","Data":"d1c19ac0983cfdd8dfe252efe445c0fb5b284548125a426f0eaf7a5324ca7ee1"} Dec 10 12:32:28 crc kubenswrapper[4689]: I1210 12:32:28.090394 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-5997c5ddf6-h459p" event={"ID":"af7cc69a-a411-43ec-b32e-41e6a343388b","Type":"ContainerStarted","Data":"62f472132c1d26f5f2292505151ec9733f0154bb461c7a5be82467d003634206"} Dec 10 12:32:28 crc kubenswrapper[4689]: I1210 12:32:28.091396 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-manager-5997c5ddf6-h459p" Dec 10 12:32:28 crc kubenswrapper[4689]: I1210 12:32:28.093656 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-59fd99cc6f-gdgcj" event={"ID":"c346bbde-239c-4f76-91da-c4116ad0a487","Type":"ContainerStarted","Data":"f6c3a6cd592eab80b6e20777d90e7880bcbe2b269a9891ab470834c608d2f3c3"} Dec 10 12:32:28 crc kubenswrapper[4689]: I1210 12:32:28.095055 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-75944c9b7-rq8x6" event={"ID":"88c6fb38-fa6d-497e-87c7-32833f1b5a04","Type":"ContainerStarted","Data":"8a0cc271d0fe1964bb18f4dc1c4d7fa208c289ab0b87f68ab3885f27c07c3855"} Dec 10 12:32:28 crc kubenswrapper[4689]: E1210 12:32:28.097816 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: 
\"Back-off pulling image \\\"quay.io/openstack-k8s-operators/watcher-operator@sha256:961417d59f527d925ac48ff6a11de747d0493315e496e34dc83d76a1a1fff58a\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-75944c9b7-rq8x6" podUID="88c6fb38-fa6d-497e-87c7-32833f1b5a04" Dec 10 12:32:28 crc kubenswrapper[4689]: I1210 12:32:28.098406 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-5854674fcc-fghxw" event={"ID":"20ec28ba-f929-4e94-833b-24a213da89a6","Type":"ContainerStarted","Data":"28115d48c8bce4d96d7a6d39cdf19f3d23e3ea9719b684c1dea1c23c52368ebf"} Dec 10 12:32:28 crc kubenswrapper[4689]: I1210 12:32:28.098445 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-5854674fcc-fghxw" event={"ID":"20ec28ba-f929-4e94-833b-24a213da89a6","Type":"ContainerStarted","Data":"bb72b689020cbcfc6b932eb4306feaa67907050f9889fc289a60e37155bc462e"} Dec 10 12:32:28 crc kubenswrapper[4689]: I1210 12:32:28.098600 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-5854674fcc-fghxw" Dec 10 12:32:28 crc kubenswrapper[4689]: I1210 12:32:28.103807 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/nova-operator-controller-manager-697bc559fc-qlxtq" podStartSLOduration=3.569894818 podStartE2EDuration="35.103791564s" podCreationTimestamp="2025-12-10 12:31:53 +0000 UTC" firstStartedPulling="2025-12-10 12:31:55.292053395 +0000 UTC m=+983.080134533" lastFinishedPulling="2025-12-10 12:32:26.825950131 +0000 UTC m=+1014.614031279" observedRunningTime="2025-12-10 12:32:28.099041495 +0000 UTC m=+1015.887122623" watchObservedRunningTime="2025-12-10 12:32:28.103791564 +0000 UTC m=+1015.891872702" Dec 10 12:32:28 crc kubenswrapper[4689]: I1210 12:32:28.172591 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/swift-operator-controller-manager-9d58d64bc-d995k" podStartSLOduration=3.826886698 podStartE2EDuration="35.172575737s" podCreationTimestamp="2025-12-10 12:31:53 +0000 UTC" firstStartedPulling="2025-12-10 12:31:55.481923243 +0000 UTC m=+983.270004381" lastFinishedPulling="2025-12-10 12:32:26.827612272 +0000 UTC m=+1014.615693420" observedRunningTime="2025-12-10 12:32:28.167587782 +0000 UTC m=+1015.955668920" watchObservedRunningTime="2025-12-10 12:32:28.172575737 +0000 UTC m=+1015.960656875" Dec 10 12:32:28 crc kubenswrapper[4689]: I1210 12:32:28.173448 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-baremetal-operator-controller-manager-84b575879fwx7br" podStartSLOduration=19.558269985 podStartE2EDuration="35.173439518s" podCreationTimestamp="2025-12-10 12:31:53 +0000 UTC" firstStartedPulling="2025-12-10 12:32:11.241597945 +0000 UTC m=+999.029679093" lastFinishedPulling="2025-12-10 12:32:26.856767458 +0000 UTC m=+1014.644848626" observedRunningTime="2025-12-10 12:32:28.147121153 +0000 UTC m=+1015.935202311" watchObservedRunningTime="2025-12-10 12:32:28.173439518 +0000 UTC m=+1015.961520656" Dec 10 12:32:28 crc kubenswrapper[4689]: I1210 12:32:28.208068 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-6fmnw" podStartSLOduration=25.054793203 podStartE2EDuration="35.208051909s" podCreationTimestamp="2025-12-10 12:31:53 +0000 UTC" firstStartedPulling="2025-12-10 12:31:55.279441681 
+0000 UTC m=+983.067522809" lastFinishedPulling="2025-12-10 12:32:05.432700377 +0000 UTC m=+993.220781515" observedRunningTime="2025-12-10 12:32:28.205902817 +0000 UTC m=+1015.993983965" watchObservedRunningTime="2025-12-10 12:32:28.208051909 +0000 UTC m=+1015.996133047" Dec 10 12:32:28 crc kubenswrapper[4689]: I1210 12:32:28.274815 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-manager-5997c5ddf6-h459p" podStartSLOduration=34.274799082 podStartE2EDuration="34.274799082s" podCreationTimestamp="2025-12-10 12:31:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 12:32:28.272130475 +0000 UTC m=+1016.060211613" watchObservedRunningTime="2025-12-10 12:32:28.274799082 +0000 UTC m=+1016.062880220" Dec 10 12:32:28 crc kubenswrapper[4689]: I1210 12:32:28.336810 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/test-operator-controller-manager-5854674fcc-fghxw" podStartSLOduration=3.804785746 podStartE2EDuration="35.336792615s" podCreationTimestamp="2025-12-10 12:31:53 +0000 UTC" firstStartedPulling="2025-12-10 12:31:55.292132747 +0000 UTC m=+983.080213885" lastFinishedPulling="2025-12-10 12:32:26.824139576 +0000 UTC m=+1014.612220754" observedRunningTime="2025-12-10 12:32:28.33378405 +0000 UTC m=+1016.121865188" watchObservedRunningTime="2025-12-10 12:32:28.336792615 +0000 UTC m=+1016.124873743" Dec 10 12:32:28 crc kubenswrapper[4689]: I1210 12:32:28.359400 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/keystone-operator-controller-manager-59fd99cc6f-gdgcj" podStartSLOduration=24.865169143 podStartE2EDuration="35.359379507s" podCreationTimestamp="2025-12-10 12:31:53 +0000 UTC" firstStartedPulling="2025-12-10 12:31:55.07008469 +0000 UTC m=+982.858165828" lastFinishedPulling="2025-12-10 12:32:05.564295054 +0000 UTC m=+993.352376192" observedRunningTime="2025-12-10 12:32:28.351783778 +0000 UTC m=+1016.139864916" watchObservedRunningTime="2025-12-10 12:32:28.359379507 +0000 UTC m=+1016.147460645" Dec 10 12:32:28 crc kubenswrapper[4689]: I1210 12:32:28.375231 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/octavia-operator-controller-manager-998648c74-pwvc2" podStartSLOduration=4.7413575439999995 podStartE2EDuration="35.375213402s" podCreationTimestamp="2025-12-10 12:31:53 +0000 UTC" firstStartedPulling="2025-12-10 12:31:55.2918158 +0000 UTC m=+983.079896938" lastFinishedPulling="2025-12-10 12:32:25.925671658 +0000 UTC m=+1013.713752796" observedRunningTime="2025-12-10 12:32:28.366825483 +0000 UTC m=+1016.154906621" watchObservedRunningTime="2025-12-10 12:32:28.375213402 +0000 UTC m=+1016.163294540" Dec 10 12:32:28 crc kubenswrapper[4689]: I1210 12:32:28.394830 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/placement-operator-controller-manager-78f8948974-44v2s" podStartSLOduration=19.423993371 podStartE2EDuration="35.3948094s" podCreationTimestamp="2025-12-10 12:31:53 +0000 UTC" firstStartedPulling="2025-12-10 12:31:55.295874971 +0000 UTC m=+983.083956109" lastFinishedPulling="2025-12-10 12:32:11.266691 +0000 UTC m=+999.054772138" observedRunningTime="2025-12-10 12:32:28.392449801 +0000 UTC m=+1016.180530939" watchObservedRunningTime="2025-12-10 12:32:28.3948094 +0000 UTC m=+1016.182890548" Dec 10 12:32:34 crc 
kubenswrapper[4689]: I1210 12:32:34.279654 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/nova-operator-controller-manager-697bc559fc-qlxtq" Dec 10 12:32:34 crc kubenswrapper[4689]: I1210 12:32:34.290390 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/octavia-operator-controller-manager-998648c74-pwvc2" Dec 10 12:32:34 crc kubenswrapper[4689]: I1210 12:32:34.321481 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/test-operator-controller-manager-5854674fcc-fghxw" Dec 10 12:32:34 crc kubenswrapper[4689]: I1210 12:32:34.434132 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/swift-operator-controller-manager-9d58d64bc-d995k" Dec 10 12:32:36 crc kubenswrapper[4689]: I1210 12:32:36.170918 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-manager-5997c5ddf6-h459p" Dec 10 12:32:38 crc kubenswrapper[4689]: E1210 12:32:38.514718 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ovn-operator@sha256:635a4aef9d6f0b799e8ec91333dbb312160c001d05b3c63f614c124e0b67cb59\\\"\"" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-v7wvl" podUID="8c68b5ee-e36f-428f-8b70-480581f7e120" Dec 10 12:32:39 crc kubenswrapper[4689]: I1210 12:32:39.553874 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-78d48bff9d-lkvwb" Dec 10 12:32:39 crc kubenswrapper[4689]: I1210 12:32:39.961941 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-baremetal-operator-controller-manager-84b575879fwx7br" Dec 10 12:32:41 crc kubenswrapper[4689]: E1210 12:32:41.500651 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/watcher-operator@sha256:961417d59f527d925ac48ff6a11de747d0493315e496e34dc83d76a1a1fff58a\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-75944c9b7-rq8x6" podUID="88c6fb38-fa6d-497e-87c7-32833f1b5a04" Dec 10 12:32:49 crc kubenswrapper[4689]: I1210 12:32:49.506334 4689 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 10 12:32:50 crc kubenswrapper[4689]: I1210 12:32:50.354865 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-v7wvl" event={"ID":"8c68b5ee-e36f-428f-8b70-480581f7e120","Type":"ContainerStarted","Data":"5e15cec4c19226cf9f7f61d457584cafc9cb956f721d1840ef7cd106de3c35d7"} Dec 10 12:32:50 crc kubenswrapper[4689]: I1210 12:32:50.355427 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-v7wvl" Dec 10 12:32:50 crc kubenswrapper[4689]: I1210 12:32:50.379382 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-v7wvl" podStartSLOduration=2.827926607 podStartE2EDuration="57.379359534s" podCreationTimestamp="2025-12-10 12:31:53 +0000 UTC" firstStartedPulling="2025-12-10 12:31:55.479124833 +0000 UTC m=+983.267205971" 
lastFinishedPulling="2025-12-10 12:32:50.03055774 +0000 UTC m=+1037.818638898" observedRunningTime="2025-12-10 12:32:50.374540194 +0000 UTC m=+1038.162621332" watchObservedRunningTime="2025-12-10 12:32:50.379359534 +0000 UTC m=+1038.167440712" Dec 10 12:32:57 crc kubenswrapper[4689]: I1210 12:32:57.410367 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-75944c9b7-rq8x6" event={"ID":"88c6fb38-fa6d-497e-87c7-32833f1b5a04","Type":"ContainerStarted","Data":"28e59fc8c38e7cbe86f582ba3dd1e3ee3f1f59fc3f483266a9545ada916f2ef1"} Dec 10 12:32:57 crc kubenswrapper[4689]: I1210 12:32:57.411082 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-75944c9b7-rq8x6" Dec 10 12:32:57 crc kubenswrapper[4689]: I1210 12:32:57.432459 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/watcher-operator-controller-manager-75944c9b7-rq8x6" podStartSLOduration=2.164300471 podStartE2EDuration="1m3.432442348s" podCreationTimestamp="2025-12-10 12:31:54 +0000 UTC" firstStartedPulling="2025-12-10 12:31:55.292355503 +0000 UTC m=+983.080436641" lastFinishedPulling="2025-12-10 12:32:56.56049737 +0000 UTC m=+1044.348578518" observedRunningTime="2025-12-10 12:32:57.429758811 +0000 UTC m=+1045.217839949" watchObservedRunningTime="2025-12-10 12:32:57.432442348 +0000 UTC m=+1045.220523496" Dec 10 12:33:04 crc kubenswrapper[4689]: I1210 12:33:04.325832 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-v7wvl" Dec 10 12:33:04 crc kubenswrapper[4689]: I1210 12:33:04.637269 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/watcher-operator-controller-manager-75944c9b7-rq8x6" Dec 10 12:33:21 crc kubenswrapper[4689]: I1210 12:33:21.490071 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-9pg5q"] Dec 10 12:33:21 crc kubenswrapper[4689]: I1210 12:33:21.491687 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-9pg5q" Dec 10 12:33:21 crc kubenswrapper[4689]: I1210 12:33:21.493838 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns" Dec 10 12:33:21 crc kubenswrapper[4689]: I1210 12:33:21.494060 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dnsmasq-dns-dockercfg-9842m" Dec 10 12:33:21 crc kubenswrapper[4689]: I1210 12:33:21.494094 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openshift-service-ca.crt" Dec 10 12:33:21 crc kubenswrapper[4689]: I1210 12:33:21.494411 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"kube-root-ca.crt" Dec 10 12:33:21 crc kubenswrapper[4689]: I1210 12:33:21.510938 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-9pg5q"] Dec 10 12:33:21 crc kubenswrapper[4689]: I1210 12:33:21.544609 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-l9v87"] Dec 10 12:33:21 crc kubenswrapper[4689]: I1210 12:33:21.545676 4689 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-l9v87" Dec 10 12:33:21 crc kubenswrapper[4689]: I1210 12:33:21.548459 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc" Dec 10 12:33:21 crc kubenswrapper[4689]: I1210 12:33:21.568405 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-l9v87"] Dec 10 12:33:21 crc kubenswrapper[4689]: I1210 12:33:21.650914 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bqnfb\" (UniqueName: \"kubernetes.io/projected/92d945bf-3c25-4fee-8c13-67bba7fb1a74-kube-api-access-bqnfb\") pod \"dnsmasq-dns-78dd6ddcc-l9v87\" (UID: \"92d945bf-3c25-4fee-8c13-67bba7fb1a74\") " pod="openstack/dnsmasq-dns-78dd6ddcc-l9v87" Dec 10 12:33:21 crc kubenswrapper[4689]: I1210 12:33:21.650986 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p7c9w\" (UniqueName: \"kubernetes.io/projected/6f9bb41c-0174-4d0b-9d9b-6a815288ac19-kube-api-access-p7c9w\") pod \"dnsmasq-dns-675f4bcbfc-9pg5q\" (UID: \"6f9bb41c-0174-4d0b-9d9b-6a815288ac19\") " pod="openstack/dnsmasq-dns-675f4bcbfc-9pg5q" Dec 10 12:33:21 crc kubenswrapper[4689]: I1210 12:33:21.651565 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/92d945bf-3c25-4fee-8c13-67bba7fb1a74-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-l9v87\" (UID: \"92d945bf-3c25-4fee-8c13-67bba7fb1a74\") " pod="openstack/dnsmasq-dns-78dd6ddcc-l9v87" Dec 10 12:33:21 crc kubenswrapper[4689]: I1210 12:33:21.651620 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6f9bb41c-0174-4d0b-9d9b-6a815288ac19-config\") pod \"dnsmasq-dns-675f4bcbfc-9pg5q\" (UID: \"6f9bb41c-0174-4d0b-9d9b-6a815288ac19\") " pod="openstack/dnsmasq-dns-675f4bcbfc-9pg5q" Dec 10 12:33:21 crc kubenswrapper[4689]: I1210 12:33:21.651712 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/92d945bf-3c25-4fee-8c13-67bba7fb1a74-config\") pod \"dnsmasq-dns-78dd6ddcc-l9v87\" (UID: \"92d945bf-3c25-4fee-8c13-67bba7fb1a74\") " pod="openstack/dnsmasq-dns-78dd6ddcc-l9v87" Dec 10 12:33:21 crc kubenswrapper[4689]: I1210 12:33:21.752683 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bqnfb\" (UniqueName: \"kubernetes.io/projected/92d945bf-3c25-4fee-8c13-67bba7fb1a74-kube-api-access-bqnfb\") pod \"dnsmasq-dns-78dd6ddcc-l9v87\" (UID: \"92d945bf-3c25-4fee-8c13-67bba7fb1a74\") " pod="openstack/dnsmasq-dns-78dd6ddcc-l9v87" Dec 10 12:33:21 crc kubenswrapper[4689]: I1210 12:33:21.752764 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p7c9w\" (UniqueName: \"kubernetes.io/projected/6f9bb41c-0174-4d0b-9d9b-6a815288ac19-kube-api-access-p7c9w\") pod \"dnsmasq-dns-675f4bcbfc-9pg5q\" (UID: \"6f9bb41c-0174-4d0b-9d9b-6a815288ac19\") " pod="openstack/dnsmasq-dns-675f4bcbfc-9pg5q" Dec 10 12:33:21 crc kubenswrapper[4689]: I1210 12:33:21.752863 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/92d945bf-3c25-4fee-8c13-67bba7fb1a74-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-l9v87\" (UID: \"92d945bf-3c25-4fee-8c13-67bba7fb1a74\") " 
pod="openstack/dnsmasq-dns-78dd6ddcc-l9v87" Dec 10 12:33:21 crc kubenswrapper[4689]: I1210 12:33:21.752906 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6f9bb41c-0174-4d0b-9d9b-6a815288ac19-config\") pod \"dnsmasq-dns-675f4bcbfc-9pg5q\" (UID: \"6f9bb41c-0174-4d0b-9d9b-6a815288ac19\") " pod="openstack/dnsmasq-dns-675f4bcbfc-9pg5q" Dec 10 12:33:21 crc kubenswrapper[4689]: I1210 12:33:21.752953 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/92d945bf-3c25-4fee-8c13-67bba7fb1a74-config\") pod \"dnsmasq-dns-78dd6ddcc-l9v87\" (UID: \"92d945bf-3c25-4fee-8c13-67bba7fb1a74\") " pod="openstack/dnsmasq-dns-78dd6ddcc-l9v87" Dec 10 12:33:21 crc kubenswrapper[4689]: I1210 12:33:21.753774 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/92d945bf-3c25-4fee-8c13-67bba7fb1a74-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-l9v87\" (UID: \"92d945bf-3c25-4fee-8c13-67bba7fb1a74\") " pod="openstack/dnsmasq-dns-78dd6ddcc-l9v87" Dec 10 12:33:21 crc kubenswrapper[4689]: I1210 12:33:21.754168 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/92d945bf-3c25-4fee-8c13-67bba7fb1a74-config\") pod \"dnsmasq-dns-78dd6ddcc-l9v87\" (UID: \"92d945bf-3c25-4fee-8c13-67bba7fb1a74\") " pod="openstack/dnsmasq-dns-78dd6ddcc-l9v87" Dec 10 12:33:21 crc kubenswrapper[4689]: I1210 12:33:21.754494 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6f9bb41c-0174-4d0b-9d9b-6a815288ac19-config\") pod \"dnsmasq-dns-675f4bcbfc-9pg5q\" (UID: \"6f9bb41c-0174-4d0b-9d9b-6a815288ac19\") " pod="openstack/dnsmasq-dns-675f4bcbfc-9pg5q" Dec 10 12:33:21 crc kubenswrapper[4689]: I1210 12:33:21.771835 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p7c9w\" (UniqueName: \"kubernetes.io/projected/6f9bb41c-0174-4d0b-9d9b-6a815288ac19-kube-api-access-p7c9w\") pod \"dnsmasq-dns-675f4bcbfc-9pg5q\" (UID: \"6f9bb41c-0174-4d0b-9d9b-6a815288ac19\") " pod="openstack/dnsmasq-dns-675f4bcbfc-9pg5q" Dec 10 12:33:21 crc kubenswrapper[4689]: I1210 12:33:21.772464 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bqnfb\" (UniqueName: \"kubernetes.io/projected/92d945bf-3c25-4fee-8c13-67bba7fb1a74-kube-api-access-bqnfb\") pod \"dnsmasq-dns-78dd6ddcc-l9v87\" (UID: \"92d945bf-3c25-4fee-8c13-67bba7fb1a74\") " pod="openstack/dnsmasq-dns-78dd6ddcc-l9v87" Dec 10 12:33:21 crc kubenswrapper[4689]: I1210 12:33:21.812528 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-9pg5q" Dec 10 12:33:21 crc kubenswrapper[4689]: I1210 12:33:21.860236 4689 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-l9v87" Dec 10 12:33:22 crc kubenswrapper[4689]: I1210 12:33:22.166491 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-l9v87"] Dec 10 12:33:22 crc kubenswrapper[4689]: I1210 12:33:22.300502 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-9pg5q"] Dec 10 12:33:22 crc kubenswrapper[4689]: W1210 12:33:22.303840 4689 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6f9bb41c_0174_4d0b_9d9b_6a815288ac19.slice/crio-4314911cb1e3c2f0bc32ff3dfd01c01b04c3cd1abf678d852b9e1d21b37cd00e WatchSource:0}: Error finding container 4314911cb1e3c2f0bc32ff3dfd01c01b04c3cd1abf678d852b9e1d21b37cd00e: Status 404 returned error can't find the container with id 4314911cb1e3c2f0bc32ff3dfd01c01b04c3cd1abf678d852b9e1d21b37cd00e Dec 10 12:33:22 crc kubenswrapper[4689]: I1210 12:33:22.607404 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-9pg5q" event={"ID":"6f9bb41c-0174-4d0b-9d9b-6a815288ac19","Type":"ContainerStarted","Data":"4314911cb1e3c2f0bc32ff3dfd01c01b04c3cd1abf678d852b9e1d21b37cd00e"} Dec 10 12:33:22 crc kubenswrapper[4689]: I1210 12:33:22.608295 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-l9v87" event={"ID":"92d945bf-3c25-4fee-8c13-67bba7fb1a74","Type":"ContainerStarted","Data":"7262ed3f3be90a6aba9c4154003c560e53d3e34f63b0d1283023c6a06a5b6092"} Dec 10 12:33:24 crc kubenswrapper[4689]: I1210 12:33:24.738897 4689 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-9pg5q"] Dec 10 12:33:24 crc kubenswrapper[4689]: I1210 12:33:24.760455 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-hrjxw"] Dec 10 12:33:24 crc kubenswrapper[4689]: I1210 12:33:24.761541 4689 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-hrjxw" Dec 10 12:33:24 crc kubenswrapper[4689]: I1210 12:33:24.770179 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-hrjxw"] Dec 10 12:33:24 crc kubenswrapper[4689]: I1210 12:33:24.899553 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b3049c9c-fbe7-4bc9-9fbf-7365b98671be-dns-svc\") pod \"dnsmasq-dns-666b6646f7-hrjxw\" (UID: \"b3049c9c-fbe7-4bc9-9fbf-7365b98671be\") " pod="openstack/dnsmasq-dns-666b6646f7-hrjxw" Dec 10 12:33:24 crc kubenswrapper[4689]: I1210 12:33:24.899664 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b3049c9c-fbe7-4bc9-9fbf-7365b98671be-config\") pod \"dnsmasq-dns-666b6646f7-hrjxw\" (UID: \"b3049c9c-fbe7-4bc9-9fbf-7365b98671be\") " pod="openstack/dnsmasq-dns-666b6646f7-hrjxw" Dec 10 12:33:24 crc kubenswrapper[4689]: I1210 12:33:24.899688 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4vl6l\" (UniqueName: \"kubernetes.io/projected/b3049c9c-fbe7-4bc9-9fbf-7365b98671be-kube-api-access-4vl6l\") pod \"dnsmasq-dns-666b6646f7-hrjxw\" (UID: \"b3049c9c-fbe7-4bc9-9fbf-7365b98671be\") " pod="openstack/dnsmasq-dns-666b6646f7-hrjxw" Dec 10 12:33:25 crc kubenswrapper[4689]: I1210 12:33:25.001206 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b3049c9c-fbe7-4bc9-9fbf-7365b98671be-dns-svc\") pod \"dnsmasq-dns-666b6646f7-hrjxw\" (UID: \"b3049c9c-fbe7-4bc9-9fbf-7365b98671be\") " pod="openstack/dnsmasq-dns-666b6646f7-hrjxw" Dec 10 12:33:25 crc kubenswrapper[4689]: I1210 12:33:25.001331 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b3049c9c-fbe7-4bc9-9fbf-7365b98671be-config\") pod \"dnsmasq-dns-666b6646f7-hrjxw\" (UID: \"b3049c9c-fbe7-4bc9-9fbf-7365b98671be\") " pod="openstack/dnsmasq-dns-666b6646f7-hrjxw" Dec 10 12:33:25 crc kubenswrapper[4689]: I1210 12:33:25.001353 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4vl6l\" (UniqueName: \"kubernetes.io/projected/b3049c9c-fbe7-4bc9-9fbf-7365b98671be-kube-api-access-4vl6l\") pod \"dnsmasq-dns-666b6646f7-hrjxw\" (UID: \"b3049c9c-fbe7-4bc9-9fbf-7365b98671be\") " pod="openstack/dnsmasq-dns-666b6646f7-hrjxw" Dec 10 12:33:25 crc kubenswrapper[4689]: I1210 12:33:25.002605 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b3049c9c-fbe7-4bc9-9fbf-7365b98671be-dns-svc\") pod \"dnsmasq-dns-666b6646f7-hrjxw\" (UID: \"b3049c9c-fbe7-4bc9-9fbf-7365b98671be\") " pod="openstack/dnsmasq-dns-666b6646f7-hrjxw" Dec 10 12:33:25 crc kubenswrapper[4689]: I1210 12:33:25.002654 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b3049c9c-fbe7-4bc9-9fbf-7365b98671be-config\") pod \"dnsmasq-dns-666b6646f7-hrjxw\" (UID: \"b3049c9c-fbe7-4bc9-9fbf-7365b98671be\") " pod="openstack/dnsmasq-dns-666b6646f7-hrjxw" Dec 10 12:33:25 crc kubenswrapper[4689]: I1210 12:33:25.042882 4689 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-l9v87"] Dec 10 12:33:25 crc kubenswrapper[4689]: I1210 12:33:25.051162 
4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4vl6l\" (UniqueName: \"kubernetes.io/projected/b3049c9c-fbe7-4bc9-9fbf-7365b98671be-kube-api-access-4vl6l\") pod \"dnsmasq-dns-666b6646f7-hrjxw\" (UID: \"b3049c9c-fbe7-4bc9-9fbf-7365b98671be\") " pod="openstack/dnsmasq-dns-666b6646f7-hrjxw" Dec 10 12:33:25 crc kubenswrapper[4689]: I1210 12:33:25.073040 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-69fpd"] Dec 10 12:33:25 crc kubenswrapper[4689]: I1210 12:33:25.074605 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-69fpd" Dec 10 12:33:25 crc kubenswrapper[4689]: I1210 12:33:25.086332 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-69fpd"] Dec 10 12:33:25 crc kubenswrapper[4689]: I1210 12:33:25.096205 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-hrjxw" Dec 10 12:33:25 crc kubenswrapper[4689]: I1210 12:33:25.204501 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7934f7f2-2d11-43a1-8a79-002d383f8c34-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-69fpd\" (UID: \"7934f7f2-2d11-43a1-8a79-002d383f8c34\") " pod="openstack/dnsmasq-dns-57d769cc4f-69fpd" Dec 10 12:33:25 crc kubenswrapper[4689]: I1210 12:33:25.204557 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gw972\" (UniqueName: \"kubernetes.io/projected/7934f7f2-2d11-43a1-8a79-002d383f8c34-kube-api-access-gw972\") pod \"dnsmasq-dns-57d769cc4f-69fpd\" (UID: \"7934f7f2-2d11-43a1-8a79-002d383f8c34\") " pod="openstack/dnsmasq-dns-57d769cc4f-69fpd" Dec 10 12:33:25 crc kubenswrapper[4689]: I1210 12:33:25.204606 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7934f7f2-2d11-43a1-8a79-002d383f8c34-config\") pod \"dnsmasq-dns-57d769cc4f-69fpd\" (UID: \"7934f7f2-2d11-43a1-8a79-002d383f8c34\") " pod="openstack/dnsmasq-dns-57d769cc4f-69fpd" Dec 10 12:33:25 crc kubenswrapper[4689]: I1210 12:33:25.305509 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7934f7f2-2d11-43a1-8a79-002d383f8c34-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-69fpd\" (UID: \"7934f7f2-2d11-43a1-8a79-002d383f8c34\") " pod="openstack/dnsmasq-dns-57d769cc4f-69fpd" Dec 10 12:33:25 crc kubenswrapper[4689]: I1210 12:33:25.305566 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gw972\" (UniqueName: \"kubernetes.io/projected/7934f7f2-2d11-43a1-8a79-002d383f8c34-kube-api-access-gw972\") pod \"dnsmasq-dns-57d769cc4f-69fpd\" (UID: \"7934f7f2-2d11-43a1-8a79-002d383f8c34\") " pod="openstack/dnsmasq-dns-57d769cc4f-69fpd" Dec 10 12:33:25 crc kubenswrapper[4689]: I1210 12:33:25.305655 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7934f7f2-2d11-43a1-8a79-002d383f8c34-config\") pod \"dnsmasq-dns-57d769cc4f-69fpd\" (UID: \"7934f7f2-2d11-43a1-8a79-002d383f8c34\") " pod="openstack/dnsmasq-dns-57d769cc4f-69fpd" Dec 10 12:33:25 crc kubenswrapper[4689]: I1210 12:33:25.306501 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/7934f7f2-2d11-43a1-8a79-002d383f8c34-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-69fpd\" (UID: \"7934f7f2-2d11-43a1-8a79-002d383f8c34\") " pod="openstack/dnsmasq-dns-57d769cc4f-69fpd" Dec 10 12:33:25 crc kubenswrapper[4689]: I1210 12:33:25.306882 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7934f7f2-2d11-43a1-8a79-002d383f8c34-config\") pod \"dnsmasq-dns-57d769cc4f-69fpd\" (UID: \"7934f7f2-2d11-43a1-8a79-002d383f8c34\") " pod="openstack/dnsmasq-dns-57d769cc4f-69fpd" Dec 10 12:33:25 crc kubenswrapper[4689]: I1210 12:33:25.323665 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gw972\" (UniqueName: \"kubernetes.io/projected/7934f7f2-2d11-43a1-8a79-002d383f8c34-kube-api-access-gw972\") pod \"dnsmasq-dns-57d769cc4f-69fpd\" (UID: \"7934f7f2-2d11-43a1-8a79-002d383f8c34\") " pod="openstack/dnsmasq-dns-57d769cc4f-69fpd" Dec 10 12:33:25 crc kubenswrapper[4689]: I1210 12:33:25.396035 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-69fpd" Dec 10 12:33:25 crc kubenswrapper[4689]: I1210 12:33:25.925772 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Dec 10 12:33:25 crc kubenswrapper[4689]: I1210 12:33:25.936492 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Dec 10 12:33:25 crc kubenswrapper[4689]: I1210 12:33:25.940044 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Dec 10 12:33:25 crc kubenswrapper[4689]: I1210 12:33:25.940731 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Dec 10 12:33:25 crc kubenswrapper[4689]: I1210 12:33:25.940799 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Dec 10 12:33:25 crc kubenswrapper[4689]: I1210 12:33:25.941310 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Dec 10 12:33:25 crc kubenswrapper[4689]: I1210 12:33:25.941462 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-bwmld" Dec 10 12:33:25 crc kubenswrapper[4689]: I1210 12:33:25.941610 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Dec 10 12:33:25 crc kubenswrapper[4689]: I1210 12:33:25.941848 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Dec 10 12:33:25 crc kubenswrapper[4689]: I1210 12:33:25.945578 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Dec 10 12:33:26 crc kubenswrapper[4689]: I1210 12:33:26.118086 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/33bee83d-eb0f-4e5e-9617-f8102008436a-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"33bee83d-eb0f-4e5e-9617-f8102008436a\") " pod="openstack/rabbitmq-server-0" Dec 10 12:33:26 crc kubenswrapper[4689]: I1210 12:33:26.118229 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/33bee83d-eb0f-4e5e-9617-f8102008436a-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"33bee83d-eb0f-4e5e-9617-f8102008436a\") " 
pod="openstack/rabbitmq-server-0" Dec 10 12:33:26 crc kubenswrapper[4689]: I1210 12:33:26.118320 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/33bee83d-eb0f-4e5e-9617-f8102008436a-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"33bee83d-eb0f-4e5e-9617-f8102008436a\") " pod="openstack/rabbitmq-server-0" Dec 10 12:33:26 crc kubenswrapper[4689]: I1210 12:33:26.118360 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/33bee83d-eb0f-4e5e-9617-f8102008436a-pod-info\") pod \"rabbitmq-server-0\" (UID: \"33bee83d-eb0f-4e5e-9617-f8102008436a\") " pod="openstack/rabbitmq-server-0" Dec 10 12:33:26 crc kubenswrapper[4689]: I1210 12:33:26.118396 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/33bee83d-eb0f-4e5e-9617-f8102008436a-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"33bee83d-eb0f-4e5e-9617-f8102008436a\") " pod="openstack/rabbitmq-server-0" Dec 10 12:33:26 crc kubenswrapper[4689]: I1210 12:33:26.118425 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dqp6r\" (UniqueName: \"kubernetes.io/projected/33bee83d-eb0f-4e5e-9617-f8102008436a-kube-api-access-dqp6r\") pod \"rabbitmq-server-0\" (UID: \"33bee83d-eb0f-4e5e-9617-f8102008436a\") " pod="openstack/rabbitmq-server-0" Dec 10 12:33:26 crc kubenswrapper[4689]: I1210 12:33:26.118582 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/33bee83d-eb0f-4e5e-9617-f8102008436a-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"33bee83d-eb0f-4e5e-9617-f8102008436a\") " pod="openstack/rabbitmq-server-0" Dec 10 12:33:26 crc kubenswrapper[4689]: I1210 12:33:26.118656 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/33bee83d-eb0f-4e5e-9617-f8102008436a-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"33bee83d-eb0f-4e5e-9617-f8102008436a\") " pod="openstack/rabbitmq-server-0" Dec 10 12:33:26 crc kubenswrapper[4689]: I1210 12:33:26.118693 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"rabbitmq-server-0\" (UID: \"33bee83d-eb0f-4e5e-9617-f8102008436a\") " pod="openstack/rabbitmq-server-0" Dec 10 12:33:26 crc kubenswrapper[4689]: I1210 12:33:26.118733 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/33bee83d-eb0f-4e5e-9617-f8102008436a-server-conf\") pod \"rabbitmq-server-0\" (UID: \"33bee83d-eb0f-4e5e-9617-f8102008436a\") " pod="openstack/rabbitmq-server-0" Dec 10 12:33:26 crc kubenswrapper[4689]: I1210 12:33:26.118761 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/33bee83d-eb0f-4e5e-9617-f8102008436a-config-data\") pod \"rabbitmq-server-0\" (UID: \"33bee83d-eb0f-4e5e-9617-f8102008436a\") " pod="openstack/rabbitmq-server-0" Dec 10 12:33:26 crc kubenswrapper[4689]: I1210 
12:33:26.219770 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Dec 10 12:33:26 crc kubenswrapper[4689]: I1210 12:33:26.225485 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/33bee83d-eb0f-4e5e-9617-f8102008436a-config-data\") pod \"rabbitmq-server-0\" (UID: \"33bee83d-eb0f-4e5e-9617-f8102008436a\") " pod="openstack/rabbitmq-server-0" Dec 10 12:33:26 crc kubenswrapper[4689]: I1210 12:33:26.225647 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/33bee83d-eb0f-4e5e-9617-f8102008436a-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"33bee83d-eb0f-4e5e-9617-f8102008436a\") " pod="openstack/rabbitmq-server-0" Dec 10 12:33:26 crc kubenswrapper[4689]: I1210 12:33:26.225961 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/33bee83d-eb0f-4e5e-9617-f8102008436a-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"33bee83d-eb0f-4e5e-9617-f8102008436a\") " pod="openstack/rabbitmq-server-0" Dec 10 12:33:26 crc kubenswrapper[4689]: I1210 12:33:26.226257 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/33bee83d-eb0f-4e5e-9617-f8102008436a-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"33bee83d-eb0f-4e5e-9617-f8102008436a\") " pod="openstack/rabbitmq-server-0" Dec 10 12:33:26 crc kubenswrapper[4689]: I1210 12:33:26.226289 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/33bee83d-eb0f-4e5e-9617-f8102008436a-pod-info\") pod \"rabbitmq-server-0\" (UID: \"33bee83d-eb0f-4e5e-9617-f8102008436a\") " pod="openstack/rabbitmq-server-0" Dec 10 12:33:26 crc kubenswrapper[4689]: I1210 12:33:26.226615 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/33bee83d-eb0f-4e5e-9617-f8102008436a-config-data\") pod \"rabbitmq-server-0\" (UID: \"33bee83d-eb0f-4e5e-9617-f8102008436a\") " pod="openstack/rabbitmq-server-0" Dec 10 12:33:26 crc kubenswrapper[4689]: I1210 12:33:26.226808 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/33bee83d-eb0f-4e5e-9617-f8102008436a-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"33bee83d-eb0f-4e5e-9617-f8102008436a\") " pod="openstack/rabbitmq-server-0" Dec 10 12:33:26 crc kubenswrapper[4689]: I1210 12:33:26.227234 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/33bee83d-eb0f-4e5e-9617-f8102008436a-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"33bee83d-eb0f-4e5e-9617-f8102008436a\") " pod="openstack/rabbitmq-server-0" Dec 10 12:33:26 crc kubenswrapper[4689]: I1210 12:33:26.227269 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dqp6r\" (UniqueName: \"kubernetes.io/projected/33bee83d-eb0f-4e5e-9617-f8102008436a-kube-api-access-dqp6r\") pod \"rabbitmq-server-0\" (UID: \"33bee83d-eb0f-4e5e-9617-f8102008436a\") " pod="openstack/rabbitmq-server-0" Dec 10 12:33:26 crc kubenswrapper[4689]: I1210 12:33:26.227316 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" 
(UniqueName: \"kubernetes.io/secret/33bee83d-eb0f-4e5e-9617-f8102008436a-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"33bee83d-eb0f-4e5e-9617-f8102008436a\") " pod="openstack/rabbitmq-server-0" Dec 10 12:33:26 crc kubenswrapper[4689]: I1210 12:33:26.227748 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/33bee83d-eb0f-4e5e-9617-f8102008436a-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"33bee83d-eb0f-4e5e-9617-f8102008436a\") " pod="openstack/rabbitmq-server-0" Dec 10 12:33:26 crc kubenswrapper[4689]: I1210 12:33:26.227787 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"rabbitmq-server-0\" (UID: \"33bee83d-eb0f-4e5e-9617-f8102008436a\") " pod="openstack/rabbitmq-server-0" Dec 10 12:33:26 crc kubenswrapper[4689]: I1210 12:33:26.227818 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/33bee83d-eb0f-4e5e-9617-f8102008436a-server-conf\") pod \"rabbitmq-server-0\" (UID: \"33bee83d-eb0f-4e5e-9617-f8102008436a\") " pod="openstack/rabbitmq-server-0" Dec 10 12:33:26 crc kubenswrapper[4689]: I1210 12:33:26.228615 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/33bee83d-eb0f-4e5e-9617-f8102008436a-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"33bee83d-eb0f-4e5e-9617-f8102008436a\") " pod="openstack/rabbitmq-server-0" Dec 10 12:33:26 crc kubenswrapper[4689]: I1210 12:33:26.229140 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/33bee83d-eb0f-4e5e-9617-f8102008436a-server-conf\") pod \"rabbitmq-server-0\" (UID: \"33bee83d-eb0f-4e5e-9617-f8102008436a\") " pod="openstack/rabbitmq-server-0" Dec 10 12:33:26 crc kubenswrapper[4689]: I1210 12:33:26.229423 4689 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"rabbitmq-server-0\" (UID: \"33bee83d-eb0f-4e5e-9617-f8102008436a\") device mount path \"/mnt/openstack/pv11\"" pod="openstack/rabbitmq-server-0" Dec 10 12:33:26 crc kubenswrapper[4689]: I1210 12:33:26.230344 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/33bee83d-eb0f-4e5e-9617-f8102008436a-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"33bee83d-eb0f-4e5e-9617-f8102008436a\") " pod="openstack/rabbitmq-server-0" Dec 10 12:33:26 crc kubenswrapper[4689]: I1210 12:33:26.233230 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/33bee83d-eb0f-4e5e-9617-f8102008436a-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"33bee83d-eb0f-4e5e-9617-f8102008436a\") " pod="openstack/rabbitmq-server-0" Dec 10 12:33:26 crc kubenswrapper[4689]: I1210 12:33:26.233782 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/33bee83d-eb0f-4e5e-9617-f8102008436a-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"33bee83d-eb0f-4e5e-9617-f8102008436a\") " pod="openstack/rabbitmq-server-0" Dec 10 12:33:26 crc kubenswrapper[4689]: I1210 12:33:26.236734 4689 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/33bee83d-eb0f-4e5e-9617-f8102008436a-pod-info\") pod \"rabbitmq-server-0\" (UID: \"33bee83d-eb0f-4e5e-9617-f8102008436a\") " pod="openstack/rabbitmq-server-0" Dec 10 12:33:26 crc kubenswrapper[4689]: I1210 12:33:26.242704 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Dec 10 12:33:26 crc kubenswrapper[4689]: I1210 12:33:26.246856 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Dec 10 12:33:26 crc kubenswrapper[4689]: I1210 12:33:26.247173 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Dec 10 12:33:26 crc kubenswrapper[4689]: I1210 12:33:26.247583 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Dec 10 12:33:26 crc kubenswrapper[4689]: I1210 12:33:26.247999 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Dec 10 12:33:26 crc kubenswrapper[4689]: I1210 12:33:26.248223 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Dec 10 12:33:26 crc kubenswrapper[4689]: I1210 12:33:26.248655 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-5ps56" Dec 10 12:33:26 crc kubenswrapper[4689]: I1210 12:33:26.250431 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/33bee83d-eb0f-4e5e-9617-f8102008436a-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"33bee83d-eb0f-4e5e-9617-f8102008436a\") " pod="openstack/rabbitmq-server-0" Dec 10 12:33:26 crc kubenswrapper[4689]: I1210 12:33:26.256347 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Dec 10 12:33:26 crc kubenswrapper[4689]: I1210 12:33:26.257759 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dqp6r\" (UniqueName: \"kubernetes.io/projected/33bee83d-eb0f-4e5e-9617-f8102008436a-kube-api-access-dqp6r\") pod \"rabbitmq-server-0\" (UID: \"33bee83d-eb0f-4e5e-9617-f8102008436a\") " pod="openstack/rabbitmq-server-0" Dec 10 12:33:26 crc kubenswrapper[4689]: I1210 12:33:26.261819 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Dec 10 12:33:26 crc kubenswrapper[4689]: I1210 12:33:26.264177 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"rabbitmq-server-0\" (UID: \"33bee83d-eb0f-4e5e-9617-f8102008436a\") " pod="openstack/rabbitmq-server-0" Dec 10 12:33:26 crc kubenswrapper[4689]: I1210 12:33:26.430309 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7hq78\" (UniqueName: \"kubernetes.io/projected/0b5ff1d1-6330-4077-9b2d-dbec926d9e8c-kube-api-access-7hq78\") pod \"rabbitmq-cell1-server-0\" (UID: \"0b5ff1d1-6330-4077-9b2d-dbec926d9e8c\") " pod="openstack/rabbitmq-cell1-server-0" Dec 10 12:33:26 crc kubenswrapper[4689]: I1210 12:33:26.430356 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: 
\"kubernetes.io/projected/0b5ff1d1-6330-4077-9b2d-dbec926d9e8c-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"0b5ff1d1-6330-4077-9b2d-dbec926d9e8c\") " pod="openstack/rabbitmq-cell1-server-0" Dec 10 12:33:26 crc kubenswrapper[4689]: I1210 12:33:26.430402 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/0b5ff1d1-6330-4077-9b2d-dbec926d9e8c-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"0b5ff1d1-6330-4077-9b2d-dbec926d9e8c\") " pod="openstack/rabbitmq-cell1-server-0" Dec 10 12:33:26 crc kubenswrapper[4689]: I1210 12:33:26.430461 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"0b5ff1d1-6330-4077-9b2d-dbec926d9e8c\") " pod="openstack/rabbitmq-cell1-server-0" Dec 10 12:33:26 crc kubenswrapper[4689]: I1210 12:33:26.430483 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/0b5ff1d1-6330-4077-9b2d-dbec926d9e8c-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"0b5ff1d1-6330-4077-9b2d-dbec926d9e8c\") " pod="openstack/rabbitmq-cell1-server-0" Dec 10 12:33:26 crc kubenswrapper[4689]: I1210 12:33:26.430558 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/0b5ff1d1-6330-4077-9b2d-dbec926d9e8c-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"0b5ff1d1-6330-4077-9b2d-dbec926d9e8c\") " pod="openstack/rabbitmq-cell1-server-0" Dec 10 12:33:26 crc kubenswrapper[4689]: I1210 12:33:26.430900 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/0b5ff1d1-6330-4077-9b2d-dbec926d9e8c-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"0b5ff1d1-6330-4077-9b2d-dbec926d9e8c\") " pod="openstack/rabbitmq-cell1-server-0" Dec 10 12:33:26 crc kubenswrapper[4689]: I1210 12:33:26.430962 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/0b5ff1d1-6330-4077-9b2d-dbec926d9e8c-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"0b5ff1d1-6330-4077-9b2d-dbec926d9e8c\") " pod="openstack/rabbitmq-cell1-server-0" Dec 10 12:33:26 crc kubenswrapper[4689]: I1210 12:33:26.431026 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/0b5ff1d1-6330-4077-9b2d-dbec926d9e8c-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"0b5ff1d1-6330-4077-9b2d-dbec926d9e8c\") " pod="openstack/rabbitmq-cell1-server-0" Dec 10 12:33:26 crc kubenswrapper[4689]: I1210 12:33:26.431152 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/0b5ff1d1-6330-4077-9b2d-dbec926d9e8c-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"0b5ff1d1-6330-4077-9b2d-dbec926d9e8c\") " pod="openstack/rabbitmq-cell1-server-0" Dec 10 12:33:26 crc kubenswrapper[4689]: I1210 12:33:26.431183 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" 
(UniqueName: \"kubernetes.io/downward-api/0b5ff1d1-6330-4077-9b2d-dbec926d9e8c-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"0b5ff1d1-6330-4077-9b2d-dbec926d9e8c\") " pod="openstack/rabbitmq-cell1-server-0" Dec 10 12:33:26 crc kubenswrapper[4689]: I1210 12:33:26.532596 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/0b5ff1d1-6330-4077-9b2d-dbec926d9e8c-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"0b5ff1d1-6330-4077-9b2d-dbec926d9e8c\") " pod="openstack/rabbitmq-cell1-server-0" Dec 10 12:33:26 crc kubenswrapper[4689]: I1210 12:33:26.532648 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/0b5ff1d1-6330-4077-9b2d-dbec926d9e8c-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"0b5ff1d1-6330-4077-9b2d-dbec926d9e8c\") " pod="openstack/rabbitmq-cell1-server-0" Dec 10 12:33:26 crc kubenswrapper[4689]: I1210 12:33:26.532667 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/0b5ff1d1-6330-4077-9b2d-dbec926d9e8c-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"0b5ff1d1-6330-4077-9b2d-dbec926d9e8c\") " pod="openstack/rabbitmq-cell1-server-0" Dec 10 12:33:26 crc kubenswrapper[4689]: I1210 12:33:26.532695 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/0b5ff1d1-6330-4077-9b2d-dbec926d9e8c-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"0b5ff1d1-6330-4077-9b2d-dbec926d9e8c\") " pod="openstack/rabbitmq-cell1-server-0" Dec 10 12:33:26 crc kubenswrapper[4689]: I1210 12:33:26.532716 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/0b5ff1d1-6330-4077-9b2d-dbec926d9e8c-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"0b5ff1d1-6330-4077-9b2d-dbec926d9e8c\") " pod="openstack/rabbitmq-cell1-server-0" Dec 10 12:33:26 crc kubenswrapper[4689]: I1210 12:33:26.532742 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7hq78\" (UniqueName: \"kubernetes.io/projected/0b5ff1d1-6330-4077-9b2d-dbec926d9e8c-kube-api-access-7hq78\") pod \"rabbitmq-cell1-server-0\" (UID: \"0b5ff1d1-6330-4077-9b2d-dbec926d9e8c\") " pod="openstack/rabbitmq-cell1-server-0" Dec 10 12:33:26 crc kubenswrapper[4689]: I1210 12:33:26.532764 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/0b5ff1d1-6330-4077-9b2d-dbec926d9e8c-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"0b5ff1d1-6330-4077-9b2d-dbec926d9e8c\") " pod="openstack/rabbitmq-cell1-server-0" Dec 10 12:33:26 crc kubenswrapper[4689]: I1210 12:33:26.532803 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/0b5ff1d1-6330-4077-9b2d-dbec926d9e8c-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"0b5ff1d1-6330-4077-9b2d-dbec926d9e8c\") " pod="openstack/rabbitmq-cell1-server-0" Dec 10 12:33:26 crc kubenswrapper[4689]: I1210 12:33:26.532826 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"rabbitmq-cell1-server-0\" (UID: 
\"0b5ff1d1-6330-4077-9b2d-dbec926d9e8c\") " pod="openstack/rabbitmq-cell1-server-0" Dec 10 12:33:26 crc kubenswrapper[4689]: I1210 12:33:26.532844 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/0b5ff1d1-6330-4077-9b2d-dbec926d9e8c-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"0b5ff1d1-6330-4077-9b2d-dbec926d9e8c\") " pod="openstack/rabbitmq-cell1-server-0" Dec 10 12:33:26 crc kubenswrapper[4689]: I1210 12:33:26.532871 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/0b5ff1d1-6330-4077-9b2d-dbec926d9e8c-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"0b5ff1d1-6330-4077-9b2d-dbec926d9e8c\") " pod="openstack/rabbitmq-cell1-server-0" Dec 10 12:33:26 crc kubenswrapper[4689]: I1210 12:33:26.533709 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/0b5ff1d1-6330-4077-9b2d-dbec926d9e8c-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"0b5ff1d1-6330-4077-9b2d-dbec926d9e8c\") " pod="openstack/rabbitmq-cell1-server-0" Dec 10 12:33:26 crc kubenswrapper[4689]: I1210 12:33:26.533752 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/0b5ff1d1-6330-4077-9b2d-dbec926d9e8c-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"0b5ff1d1-6330-4077-9b2d-dbec926d9e8c\") " pod="openstack/rabbitmq-cell1-server-0" Dec 10 12:33:26 crc kubenswrapper[4689]: I1210 12:33:26.533917 4689 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"0b5ff1d1-6330-4077-9b2d-dbec926d9e8c\") device mount path \"/mnt/openstack/pv10\"" pod="openstack/rabbitmq-cell1-server-0" Dec 10 12:33:26 crc kubenswrapper[4689]: I1210 12:33:26.533959 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/0b5ff1d1-6330-4077-9b2d-dbec926d9e8c-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"0b5ff1d1-6330-4077-9b2d-dbec926d9e8c\") " pod="openstack/rabbitmq-cell1-server-0" Dec 10 12:33:26 crc kubenswrapper[4689]: I1210 12:33:26.534219 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/0b5ff1d1-6330-4077-9b2d-dbec926d9e8c-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"0b5ff1d1-6330-4077-9b2d-dbec926d9e8c\") " pod="openstack/rabbitmq-cell1-server-0" Dec 10 12:33:26 crc kubenswrapper[4689]: I1210 12:33:26.534588 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/0b5ff1d1-6330-4077-9b2d-dbec926d9e8c-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"0b5ff1d1-6330-4077-9b2d-dbec926d9e8c\") " pod="openstack/rabbitmq-cell1-server-0" Dec 10 12:33:26 crc kubenswrapper[4689]: I1210 12:33:26.537882 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/0b5ff1d1-6330-4077-9b2d-dbec926d9e8c-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"0b5ff1d1-6330-4077-9b2d-dbec926d9e8c\") " pod="openstack/rabbitmq-cell1-server-0" Dec 10 12:33:26 crc kubenswrapper[4689]: I1210 12:33:26.538507 4689 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/0b5ff1d1-6330-4077-9b2d-dbec926d9e8c-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"0b5ff1d1-6330-4077-9b2d-dbec926d9e8c\") " pod="openstack/rabbitmq-cell1-server-0" Dec 10 12:33:26 crc kubenswrapper[4689]: I1210 12:33:26.538683 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/0b5ff1d1-6330-4077-9b2d-dbec926d9e8c-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"0b5ff1d1-6330-4077-9b2d-dbec926d9e8c\") " pod="openstack/rabbitmq-cell1-server-0" Dec 10 12:33:26 crc kubenswrapper[4689]: I1210 12:33:26.544580 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/0b5ff1d1-6330-4077-9b2d-dbec926d9e8c-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"0b5ff1d1-6330-4077-9b2d-dbec926d9e8c\") " pod="openstack/rabbitmq-cell1-server-0" Dec 10 12:33:26 crc kubenswrapper[4689]: I1210 12:33:26.555652 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7hq78\" (UniqueName: \"kubernetes.io/projected/0b5ff1d1-6330-4077-9b2d-dbec926d9e8c-kube-api-access-7hq78\") pod \"rabbitmq-cell1-server-0\" (UID: \"0b5ff1d1-6330-4077-9b2d-dbec926d9e8c\") " pod="openstack/rabbitmq-cell1-server-0" Dec 10 12:33:26 crc kubenswrapper[4689]: I1210 12:33:26.565367 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Dec 10 12:33:26 crc kubenswrapper[4689]: I1210 12:33:26.576322 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"0b5ff1d1-6330-4077-9b2d-dbec926d9e8c\") " pod="openstack/rabbitmq-cell1-server-0" Dec 10 12:33:26 crc kubenswrapper[4689]: I1210 12:33:26.630512 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Dec 10 12:33:27 crc kubenswrapper[4689]: I1210 12:33:27.666104 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-galera-0"] Dec 10 12:33:27 crc kubenswrapper[4689]: I1210 12:33:27.667621 4689 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-galera-0" Dec 10 12:33:27 crc kubenswrapper[4689]: I1210 12:33:27.673233 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Dec 10 12:33:27 crc kubenswrapper[4689]: I1210 12:33:27.713278 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config-data" Dec 10 12:33:27 crc kubenswrapper[4689]: I1210 12:33:27.713399 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-svc" Dec 10 12:33:27 crc kubenswrapper[4689]: I1210 12:33:27.713888 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-dockercfg-llhnp" Dec 10 12:33:27 crc kubenswrapper[4689]: I1210 12:33:27.713913 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-scripts" Dec 10 12:33:27 crc kubenswrapper[4689]: I1210 12:33:27.720692 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"combined-ca-bundle" Dec 10 12:33:27 crc kubenswrapper[4689]: I1210 12:33:27.851109 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/5de970c2-b559-4b8a-86f2-85b07c2292b1-kolla-config\") pod \"openstack-galera-0\" (UID: \"5de970c2-b559-4b8a-86f2-85b07c2292b1\") " pod="openstack/openstack-galera-0" Dec 10 12:33:27 crc kubenswrapper[4689]: I1210 12:33:27.851162 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/5de970c2-b559-4b8a-86f2-85b07c2292b1-config-data-default\") pod \"openstack-galera-0\" (UID: \"5de970c2-b559-4b8a-86f2-85b07c2292b1\") " pod="openstack/openstack-galera-0" Dec 10 12:33:27 crc kubenswrapper[4689]: I1210 12:33:27.851194 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"openstack-galera-0\" (UID: \"5de970c2-b559-4b8a-86f2-85b07c2292b1\") " pod="openstack/openstack-galera-0" Dec 10 12:33:27 crc kubenswrapper[4689]: I1210 12:33:27.851215 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5de970c2-b559-4b8a-86f2-85b07c2292b1-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"5de970c2-b559-4b8a-86f2-85b07c2292b1\") " pod="openstack/openstack-galera-0" Dec 10 12:33:27 crc kubenswrapper[4689]: I1210 12:33:27.851244 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5de970c2-b559-4b8a-86f2-85b07c2292b1-operator-scripts\") pod \"openstack-galera-0\" (UID: \"5de970c2-b559-4b8a-86f2-85b07c2292b1\") " pod="openstack/openstack-galera-0" Dec 10 12:33:27 crc kubenswrapper[4689]: I1210 12:33:27.851269 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/5de970c2-b559-4b8a-86f2-85b07c2292b1-config-data-generated\") pod \"openstack-galera-0\" (UID: \"5de970c2-b559-4b8a-86f2-85b07c2292b1\") " pod="openstack/openstack-galera-0" Dec 10 12:33:27 crc kubenswrapper[4689]: I1210 12:33:27.851447 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/5de970c2-b559-4b8a-86f2-85b07c2292b1-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"5de970c2-b559-4b8a-86f2-85b07c2292b1\") " pod="openstack/openstack-galera-0" Dec 10 12:33:27 crc kubenswrapper[4689]: I1210 12:33:27.851576 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dd9km\" (UniqueName: \"kubernetes.io/projected/5de970c2-b559-4b8a-86f2-85b07c2292b1-kube-api-access-dd9km\") pod \"openstack-galera-0\" (UID: \"5de970c2-b559-4b8a-86f2-85b07c2292b1\") " pod="openstack/openstack-galera-0" Dec 10 12:33:27 crc kubenswrapper[4689]: I1210 12:33:27.953051 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5de970c2-b559-4b8a-86f2-85b07c2292b1-operator-scripts\") pod \"openstack-galera-0\" (UID: \"5de970c2-b559-4b8a-86f2-85b07c2292b1\") " pod="openstack/openstack-galera-0" Dec 10 12:33:27 crc kubenswrapper[4689]: I1210 12:33:27.953121 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/5de970c2-b559-4b8a-86f2-85b07c2292b1-config-data-generated\") pod \"openstack-galera-0\" (UID: \"5de970c2-b559-4b8a-86f2-85b07c2292b1\") " pod="openstack/openstack-galera-0" Dec 10 12:33:27 crc kubenswrapper[4689]: I1210 12:33:27.953167 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/5de970c2-b559-4b8a-86f2-85b07c2292b1-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"5de970c2-b559-4b8a-86f2-85b07c2292b1\") " pod="openstack/openstack-galera-0" Dec 10 12:33:27 crc kubenswrapper[4689]: I1210 12:33:27.953203 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dd9km\" (UniqueName: \"kubernetes.io/projected/5de970c2-b559-4b8a-86f2-85b07c2292b1-kube-api-access-dd9km\") pod \"openstack-galera-0\" (UID: \"5de970c2-b559-4b8a-86f2-85b07c2292b1\") " pod="openstack/openstack-galera-0" Dec 10 12:33:27 crc kubenswrapper[4689]: I1210 12:33:27.953276 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/5de970c2-b559-4b8a-86f2-85b07c2292b1-kolla-config\") pod \"openstack-galera-0\" (UID: \"5de970c2-b559-4b8a-86f2-85b07c2292b1\") " pod="openstack/openstack-galera-0" Dec 10 12:33:27 crc kubenswrapper[4689]: I1210 12:33:27.953311 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/5de970c2-b559-4b8a-86f2-85b07c2292b1-config-data-default\") pod \"openstack-galera-0\" (UID: \"5de970c2-b559-4b8a-86f2-85b07c2292b1\") " pod="openstack/openstack-galera-0" Dec 10 12:33:27 crc kubenswrapper[4689]: I1210 12:33:27.953345 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"openstack-galera-0\" (UID: \"5de970c2-b559-4b8a-86f2-85b07c2292b1\") " pod="openstack/openstack-galera-0" Dec 10 12:33:27 crc kubenswrapper[4689]: I1210 12:33:27.953373 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5de970c2-b559-4b8a-86f2-85b07c2292b1-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: 
\"5de970c2-b559-4b8a-86f2-85b07c2292b1\") " pod="openstack/openstack-galera-0" Dec 10 12:33:27 crc kubenswrapper[4689]: I1210 12:33:27.953638 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/5de970c2-b559-4b8a-86f2-85b07c2292b1-config-data-generated\") pod \"openstack-galera-0\" (UID: \"5de970c2-b559-4b8a-86f2-85b07c2292b1\") " pod="openstack/openstack-galera-0" Dec 10 12:33:27 crc kubenswrapper[4689]: I1210 12:33:27.954064 4689 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"openstack-galera-0\" (UID: \"5de970c2-b559-4b8a-86f2-85b07c2292b1\") device mount path \"/mnt/openstack/pv02\"" pod="openstack/openstack-galera-0" Dec 10 12:33:27 crc kubenswrapper[4689]: I1210 12:33:27.954205 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/5de970c2-b559-4b8a-86f2-85b07c2292b1-kolla-config\") pod \"openstack-galera-0\" (UID: \"5de970c2-b559-4b8a-86f2-85b07c2292b1\") " pod="openstack/openstack-galera-0" Dec 10 12:33:27 crc kubenswrapper[4689]: I1210 12:33:27.954605 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/5de970c2-b559-4b8a-86f2-85b07c2292b1-config-data-default\") pod \"openstack-galera-0\" (UID: \"5de970c2-b559-4b8a-86f2-85b07c2292b1\") " pod="openstack/openstack-galera-0" Dec 10 12:33:27 crc kubenswrapper[4689]: I1210 12:33:27.954698 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5de970c2-b559-4b8a-86f2-85b07c2292b1-operator-scripts\") pod \"openstack-galera-0\" (UID: \"5de970c2-b559-4b8a-86f2-85b07c2292b1\") " pod="openstack/openstack-galera-0" Dec 10 12:33:27 crc kubenswrapper[4689]: I1210 12:33:27.959920 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/5de970c2-b559-4b8a-86f2-85b07c2292b1-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"5de970c2-b559-4b8a-86f2-85b07c2292b1\") " pod="openstack/openstack-galera-0" Dec 10 12:33:27 crc kubenswrapper[4689]: I1210 12:33:27.960090 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5de970c2-b559-4b8a-86f2-85b07c2292b1-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"5de970c2-b559-4b8a-86f2-85b07c2292b1\") " pod="openstack/openstack-galera-0" Dec 10 12:33:27 crc kubenswrapper[4689]: I1210 12:33:27.985824 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"openstack-galera-0\" (UID: \"5de970c2-b559-4b8a-86f2-85b07c2292b1\") " pod="openstack/openstack-galera-0" Dec 10 12:33:27 crc kubenswrapper[4689]: I1210 12:33:27.993264 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dd9km\" (UniqueName: \"kubernetes.io/projected/5de970c2-b559-4b8a-86f2-85b07c2292b1-kube-api-access-dd9km\") pod \"openstack-galera-0\" (UID: \"5de970c2-b559-4b8a-86f2-85b07c2292b1\") " pod="openstack/openstack-galera-0" Dec 10 12:33:28 crc kubenswrapper[4689]: I1210 12:33:28.023832 4689 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-galera-0" Dec 10 12:33:29 crc kubenswrapper[4689]: I1210 12:33:29.245092 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-cell1-galera-0"] Dec 10 12:33:29 crc kubenswrapper[4689]: I1210 12:33:29.250040 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Dec 10 12:33:29 crc kubenswrapper[4689]: I1210 12:33:29.253956 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-config-data" Dec 10 12:33:29 crc kubenswrapper[4689]: I1210 12:33:29.254074 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-cell1-svc" Dec 10 12:33:29 crc kubenswrapper[4689]: I1210 12:33:29.254233 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-cell1-dockercfg-scw7c" Dec 10 12:33:29 crc kubenswrapper[4689]: I1210 12:33:29.257619 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-scripts" Dec 10 12:33:29 crc kubenswrapper[4689]: I1210 12:33:29.265156 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Dec 10 12:33:29 crc kubenswrapper[4689]: I1210 12:33:29.396592 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/49777225-4829-4cb0-bdd3-3e29ee4f0518-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"49777225-4829-4cb0-bdd3-3e29ee4f0518\") " pod="openstack/openstack-cell1-galera-0" Dec 10 12:33:29 crc kubenswrapper[4689]: I1210 12:33:29.396660 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/49777225-4829-4cb0-bdd3-3e29ee4f0518-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"49777225-4829-4cb0-bdd3-3e29ee4f0518\") " pod="openstack/openstack-cell1-galera-0" Dec 10 12:33:29 crc kubenswrapper[4689]: I1210 12:33:29.396688 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"openstack-cell1-galera-0\" (UID: \"49777225-4829-4cb0-bdd3-3e29ee4f0518\") " pod="openstack/openstack-cell1-galera-0" Dec 10 12:33:29 crc kubenswrapper[4689]: I1210 12:33:29.396782 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/49777225-4829-4cb0-bdd3-3e29ee4f0518-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"49777225-4829-4cb0-bdd3-3e29ee4f0518\") " pod="openstack/openstack-cell1-galera-0" Dec 10 12:33:29 crc kubenswrapper[4689]: I1210 12:33:29.396878 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/49777225-4829-4cb0-bdd3-3e29ee4f0518-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"49777225-4829-4cb0-bdd3-3e29ee4f0518\") " pod="openstack/openstack-cell1-galera-0" Dec 10 12:33:29 crc kubenswrapper[4689]: I1210 12:33:29.396905 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m7rgx\" (UniqueName: 
\"kubernetes.io/projected/49777225-4829-4cb0-bdd3-3e29ee4f0518-kube-api-access-m7rgx\") pod \"openstack-cell1-galera-0\" (UID: \"49777225-4829-4cb0-bdd3-3e29ee4f0518\") " pod="openstack/openstack-cell1-galera-0" Dec 10 12:33:29 crc kubenswrapper[4689]: I1210 12:33:29.397031 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/49777225-4829-4cb0-bdd3-3e29ee4f0518-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"49777225-4829-4cb0-bdd3-3e29ee4f0518\") " pod="openstack/openstack-cell1-galera-0" Dec 10 12:33:29 crc kubenswrapper[4689]: I1210 12:33:29.397077 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/49777225-4829-4cb0-bdd3-3e29ee4f0518-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"49777225-4829-4cb0-bdd3-3e29ee4f0518\") " pod="openstack/openstack-cell1-galera-0" Dec 10 12:33:29 crc kubenswrapper[4689]: I1210 12:33:29.464157 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/memcached-0"] Dec 10 12:33:29 crc kubenswrapper[4689]: I1210 12:33:29.465831 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Dec 10 12:33:29 crc kubenswrapper[4689]: I1210 12:33:29.468527 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"memcached-config-data" Dec 10 12:33:29 crc kubenswrapper[4689]: I1210 12:33:29.468683 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"memcached-memcached-dockercfg-dzmg9" Dec 10 12:33:29 crc kubenswrapper[4689]: I1210 12:33:29.468877 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-memcached-svc" Dec 10 12:33:29 crc kubenswrapper[4689]: I1210 12:33:29.500769 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Dec 10 12:33:29 crc kubenswrapper[4689]: I1210 12:33:29.510753 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e5d9caa2-f209-4a7f-a0d8-353aa111c264-config-data\") pod \"memcached-0\" (UID: \"e5d9caa2-f209-4a7f-a0d8-353aa111c264\") " pod="openstack/memcached-0" Dec 10 12:33:29 crc kubenswrapper[4689]: I1210 12:33:29.510802 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/49777225-4829-4cb0-bdd3-3e29ee4f0518-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"49777225-4829-4cb0-bdd3-3e29ee4f0518\") " pod="openstack/openstack-cell1-galera-0" Dec 10 12:33:29 crc kubenswrapper[4689]: I1210 12:33:29.510824 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e5d9caa2-f209-4a7f-a0d8-353aa111c264-combined-ca-bundle\") pod \"memcached-0\" (UID: \"e5d9caa2-f209-4a7f-a0d8-353aa111c264\") " pod="openstack/memcached-0" Dec 10 12:33:29 crc kubenswrapper[4689]: I1210 12:33:29.510841 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/49777225-4829-4cb0-bdd3-3e29ee4f0518-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"49777225-4829-4cb0-bdd3-3e29ee4f0518\") " pod="openstack/openstack-cell1-galera-0" Dec 10 12:33:29 crc kubenswrapper[4689]: 
I1210 12:33:29.510860 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"openstack-cell1-galera-0\" (UID: \"49777225-4829-4cb0-bdd3-3e29ee4f0518\") " pod="openstack/openstack-cell1-galera-0" Dec 10 12:33:29 crc kubenswrapper[4689]: I1210 12:33:29.511043 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/49777225-4829-4cb0-bdd3-3e29ee4f0518-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"49777225-4829-4cb0-bdd3-3e29ee4f0518\") " pod="openstack/openstack-cell1-galera-0" Dec 10 12:33:29 crc kubenswrapper[4689]: I1210 12:33:29.511073 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-687fp\" (UniqueName: \"kubernetes.io/projected/e5d9caa2-f209-4a7f-a0d8-353aa111c264-kube-api-access-687fp\") pod \"memcached-0\" (UID: \"e5d9caa2-f209-4a7f-a0d8-353aa111c264\") " pod="openstack/memcached-0" Dec 10 12:33:29 crc kubenswrapper[4689]: I1210 12:33:29.511098 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/49777225-4829-4cb0-bdd3-3e29ee4f0518-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"49777225-4829-4cb0-bdd3-3e29ee4f0518\") " pod="openstack/openstack-cell1-galera-0" Dec 10 12:33:29 crc kubenswrapper[4689]: I1210 12:33:29.511111 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/e5d9caa2-f209-4a7f-a0d8-353aa111c264-kolla-config\") pod \"memcached-0\" (UID: \"e5d9caa2-f209-4a7f-a0d8-353aa111c264\") " pod="openstack/memcached-0" Dec 10 12:33:29 crc kubenswrapper[4689]: I1210 12:33:29.511128 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/e5d9caa2-f209-4a7f-a0d8-353aa111c264-memcached-tls-certs\") pod \"memcached-0\" (UID: \"e5d9caa2-f209-4a7f-a0d8-353aa111c264\") " pod="openstack/memcached-0" Dec 10 12:33:29 crc kubenswrapper[4689]: I1210 12:33:29.511146 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m7rgx\" (UniqueName: \"kubernetes.io/projected/49777225-4829-4cb0-bdd3-3e29ee4f0518-kube-api-access-m7rgx\") pod \"openstack-cell1-galera-0\" (UID: \"49777225-4829-4cb0-bdd3-3e29ee4f0518\") " pod="openstack/openstack-cell1-galera-0" Dec 10 12:33:29 crc kubenswrapper[4689]: I1210 12:33:29.511169 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/49777225-4829-4cb0-bdd3-3e29ee4f0518-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"49777225-4829-4cb0-bdd3-3e29ee4f0518\") " pod="openstack/openstack-cell1-galera-0" Dec 10 12:33:29 crc kubenswrapper[4689]: I1210 12:33:29.511189 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/49777225-4829-4cb0-bdd3-3e29ee4f0518-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"49777225-4829-4cb0-bdd3-3e29ee4f0518\") " pod="openstack/openstack-cell1-galera-0" Dec 10 12:33:29 crc kubenswrapper[4689]: I1210 12:33:29.512988 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" 
(UniqueName: \"kubernetes.io/configmap/49777225-4829-4cb0-bdd3-3e29ee4f0518-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"49777225-4829-4cb0-bdd3-3e29ee4f0518\") " pod="openstack/openstack-cell1-galera-0" Dec 10 12:33:29 crc kubenswrapper[4689]: I1210 12:33:29.513294 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/49777225-4829-4cb0-bdd3-3e29ee4f0518-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"49777225-4829-4cb0-bdd3-3e29ee4f0518\") " pod="openstack/openstack-cell1-galera-0" Dec 10 12:33:29 crc kubenswrapper[4689]: I1210 12:33:29.514395 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/49777225-4829-4cb0-bdd3-3e29ee4f0518-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"49777225-4829-4cb0-bdd3-3e29ee4f0518\") " pod="openstack/openstack-cell1-galera-0" Dec 10 12:33:29 crc kubenswrapper[4689]: I1210 12:33:29.514564 4689 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"openstack-cell1-galera-0\" (UID: \"49777225-4829-4cb0-bdd3-3e29ee4f0518\") device mount path \"/mnt/openstack/pv01\"" pod="openstack/openstack-cell1-galera-0" Dec 10 12:33:29 crc kubenswrapper[4689]: I1210 12:33:29.514955 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/49777225-4829-4cb0-bdd3-3e29ee4f0518-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"49777225-4829-4cb0-bdd3-3e29ee4f0518\") " pod="openstack/openstack-cell1-galera-0" Dec 10 12:33:29 crc kubenswrapper[4689]: I1210 12:33:29.515129 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/49777225-4829-4cb0-bdd3-3e29ee4f0518-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"49777225-4829-4cb0-bdd3-3e29ee4f0518\") " pod="openstack/openstack-cell1-galera-0" Dec 10 12:33:29 crc kubenswrapper[4689]: I1210 12:33:29.529712 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/49777225-4829-4cb0-bdd3-3e29ee4f0518-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"49777225-4829-4cb0-bdd3-3e29ee4f0518\") " pod="openstack/openstack-cell1-galera-0" Dec 10 12:33:29 crc kubenswrapper[4689]: I1210 12:33:29.531061 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m7rgx\" (UniqueName: \"kubernetes.io/projected/49777225-4829-4cb0-bdd3-3e29ee4f0518-kube-api-access-m7rgx\") pod \"openstack-cell1-galera-0\" (UID: \"49777225-4829-4cb0-bdd3-3e29ee4f0518\") " pod="openstack/openstack-cell1-galera-0" Dec 10 12:33:29 crc kubenswrapper[4689]: I1210 12:33:29.553119 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"openstack-cell1-galera-0\" (UID: \"49777225-4829-4cb0-bdd3-3e29ee4f0518\") " pod="openstack/openstack-cell1-galera-0" Dec 10 12:33:29 crc kubenswrapper[4689]: I1210 12:33:29.597372 4689 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-cell1-galera-0" Dec 10 12:33:29 crc kubenswrapper[4689]: I1210 12:33:29.613010 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/e5d9caa2-f209-4a7f-a0d8-353aa111c264-kolla-config\") pod \"memcached-0\" (UID: \"e5d9caa2-f209-4a7f-a0d8-353aa111c264\") " pod="openstack/memcached-0" Dec 10 12:33:29 crc kubenswrapper[4689]: I1210 12:33:29.613087 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/e5d9caa2-f209-4a7f-a0d8-353aa111c264-memcached-tls-certs\") pod \"memcached-0\" (UID: \"e5d9caa2-f209-4a7f-a0d8-353aa111c264\") " pod="openstack/memcached-0" Dec 10 12:33:29 crc kubenswrapper[4689]: I1210 12:33:29.613184 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e5d9caa2-f209-4a7f-a0d8-353aa111c264-config-data\") pod \"memcached-0\" (UID: \"e5d9caa2-f209-4a7f-a0d8-353aa111c264\") " pod="openstack/memcached-0" Dec 10 12:33:29 crc kubenswrapper[4689]: I1210 12:33:29.613229 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e5d9caa2-f209-4a7f-a0d8-353aa111c264-combined-ca-bundle\") pod \"memcached-0\" (UID: \"e5d9caa2-f209-4a7f-a0d8-353aa111c264\") " pod="openstack/memcached-0" Dec 10 12:33:29 crc kubenswrapper[4689]: I1210 12:33:29.613601 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-687fp\" (UniqueName: \"kubernetes.io/projected/e5d9caa2-f209-4a7f-a0d8-353aa111c264-kube-api-access-687fp\") pod \"memcached-0\" (UID: \"e5d9caa2-f209-4a7f-a0d8-353aa111c264\") " pod="openstack/memcached-0" Dec 10 12:33:29 crc kubenswrapper[4689]: I1210 12:33:29.615173 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/e5d9caa2-f209-4a7f-a0d8-353aa111c264-kolla-config\") pod \"memcached-0\" (UID: \"e5d9caa2-f209-4a7f-a0d8-353aa111c264\") " pod="openstack/memcached-0" Dec 10 12:33:29 crc kubenswrapper[4689]: I1210 12:33:29.616455 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e5d9caa2-f209-4a7f-a0d8-353aa111c264-config-data\") pod \"memcached-0\" (UID: \"e5d9caa2-f209-4a7f-a0d8-353aa111c264\") " pod="openstack/memcached-0" Dec 10 12:33:29 crc kubenswrapper[4689]: I1210 12:33:29.619575 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/e5d9caa2-f209-4a7f-a0d8-353aa111c264-memcached-tls-certs\") pod \"memcached-0\" (UID: \"e5d9caa2-f209-4a7f-a0d8-353aa111c264\") " pod="openstack/memcached-0" Dec 10 12:33:29 crc kubenswrapper[4689]: I1210 12:33:29.620886 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e5d9caa2-f209-4a7f-a0d8-353aa111c264-combined-ca-bundle\") pod \"memcached-0\" (UID: \"e5d9caa2-f209-4a7f-a0d8-353aa111c264\") " pod="openstack/memcached-0" Dec 10 12:33:29 crc kubenswrapper[4689]: I1210 12:33:29.634172 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-687fp\" (UniqueName: \"kubernetes.io/projected/e5d9caa2-f209-4a7f-a0d8-353aa111c264-kube-api-access-687fp\") pod \"memcached-0\" (UID: 
\"e5d9caa2-f209-4a7f-a0d8-353aa111c264\") " pod="openstack/memcached-0" Dec 10 12:33:29 crc kubenswrapper[4689]: I1210 12:33:29.780839 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Dec 10 12:33:31 crc kubenswrapper[4689]: I1210 12:33:31.237870 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Dec 10 12:33:31 crc kubenswrapper[4689]: I1210 12:33:31.239282 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Dec 10 12:33:31 crc kubenswrapper[4689]: I1210 12:33:31.241686 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-ceilometer-dockercfg-zjmxg" Dec 10 12:33:31 crc kubenswrapper[4689]: I1210 12:33:31.255405 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Dec 10 12:33:31 crc kubenswrapper[4689]: I1210 12:33:31.338251 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lnnnq\" (UniqueName: \"kubernetes.io/projected/b45272bc-38b8-4fa2-8710-57da25792b73-kube-api-access-lnnnq\") pod \"kube-state-metrics-0\" (UID: \"b45272bc-38b8-4fa2-8710-57da25792b73\") " pod="openstack/kube-state-metrics-0" Dec 10 12:33:31 crc kubenswrapper[4689]: I1210 12:33:31.439168 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lnnnq\" (UniqueName: \"kubernetes.io/projected/b45272bc-38b8-4fa2-8710-57da25792b73-kube-api-access-lnnnq\") pod \"kube-state-metrics-0\" (UID: \"b45272bc-38b8-4fa2-8710-57da25792b73\") " pod="openstack/kube-state-metrics-0" Dec 10 12:33:31 crc kubenswrapper[4689]: I1210 12:33:31.460285 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lnnnq\" (UniqueName: \"kubernetes.io/projected/b45272bc-38b8-4fa2-8710-57da25792b73-kube-api-access-lnnnq\") pod \"kube-state-metrics-0\" (UID: \"b45272bc-38b8-4fa2-8710-57da25792b73\") " pod="openstack/kube-state-metrics-0" Dec 10 12:33:31 crc kubenswrapper[4689]: I1210 12:33:31.558795 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Dec 10 12:33:32 crc kubenswrapper[4689]: I1210 12:33:32.247039 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-69fpd"] Dec 10 12:33:35 crc kubenswrapper[4689]: I1210 12:33:35.041461 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-jpztp"] Dec 10 12:33:35 crc kubenswrapper[4689]: I1210 12:33:35.042587 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-jpztp" Dec 10 12:33:35 crc kubenswrapper[4689]: I1210 12:33:35.044612 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncontroller-ovncontroller-dockercfg-9s6vx" Dec 10 12:33:35 crc kubenswrapper[4689]: I1210 12:33:35.045522 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-scripts" Dec 10 12:33:35 crc kubenswrapper[4689]: I1210 12:33:35.045569 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovncontroller-ovndbs" Dec 10 12:33:35 crc kubenswrapper[4689]: I1210 12:33:35.048289 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-ovs-29j5j"] Dec 10 12:33:35 crc kubenswrapper[4689]: I1210 12:33:35.051558 4689 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-ovs-29j5j" Dec 10 12:33:35 crc kubenswrapper[4689]: I1210 12:33:35.057990 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-29j5j"] Dec 10 12:33:35 crc kubenswrapper[4689]: I1210 12:33:35.070827 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-jpztp"] Dec 10 12:33:35 crc kubenswrapper[4689]: I1210 12:33:35.230164 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/58cb894b-f745-4d93-8925-193c6ff871a6-var-log-ovn\") pod \"ovn-controller-jpztp\" (UID: \"58cb894b-f745-4d93-8925-193c6ff871a6\") " pod="openstack/ovn-controller-jpztp" Dec 10 12:33:35 crc kubenswrapper[4689]: I1210 12:33:35.230216 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/58cb894b-f745-4d93-8925-193c6ff871a6-var-run\") pod \"ovn-controller-jpztp\" (UID: \"58cb894b-f745-4d93-8925-193c6ff871a6\") " pod="openstack/ovn-controller-jpztp" Dec 10 12:33:35 crc kubenswrapper[4689]: I1210 12:33:35.230290 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/a1966558-be4f-4607-a746-739845ac6c46-var-log\") pod \"ovn-controller-ovs-29j5j\" (UID: \"a1966558-be4f-4607-a746-739845ac6c46\") " pod="openstack/ovn-controller-ovs-29j5j" Dec 10 12:33:35 crc kubenswrapper[4689]: I1210 12:33:35.230318 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lp7k5\" (UniqueName: \"kubernetes.io/projected/58cb894b-f745-4d93-8925-193c6ff871a6-kube-api-access-lp7k5\") pod \"ovn-controller-jpztp\" (UID: \"58cb894b-f745-4d93-8925-193c6ff871a6\") " pod="openstack/ovn-controller-jpztp" Dec 10 12:33:35 crc kubenswrapper[4689]: I1210 12:33:35.230352 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/a1966558-be4f-4607-a746-739845ac6c46-var-run\") pod \"ovn-controller-ovs-29j5j\" (UID: \"a1966558-be4f-4607-a746-739845ac6c46\") " pod="openstack/ovn-controller-ovs-29j5j" Dec 10 12:33:35 crc kubenswrapper[4689]: I1210 12:33:35.230398 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/58cb894b-f745-4d93-8925-193c6ff871a6-scripts\") pod \"ovn-controller-jpztp\" (UID: \"58cb894b-f745-4d93-8925-193c6ff871a6\") " pod="openstack/ovn-controller-jpztp" Dec 10 12:33:35 crc kubenswrapper[4689]: I1210 12:33:35.230477 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6b2nd\" (UniqueName: \"kubernetes.io/projected/a1966558-be4f-4607-a746-739845ac6c46-kube-api-access-6b2nd\") pod \"ovn-controller-ovs-29j5j\" (UID: \"a1966558-be4f-4607-a746-739845ac6c46\") " pod="openstack/ovn-controller-ovs-29j5j" Dec 10 12:33:35 crc kubenswrapper[4689]: I1210 12:33:35.230667 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a1966558-be4f-4607-a746-739845ac6c46-scripts\") pod \"ovn-controller-ovs-29j5j\" (UID: \"a1966558-be4f-4607-a746-739845ac6c46\") " pod="openstack/ovn-controller-ovs-29j5j" Dec 10 12:33:35 crc kubenswrapper[4689]: I1210 
12:33:35.230801 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/a1966558-be4f-4607-a746-739845ac6c46-var-lib\") pod \"ovn-controller-ovs-29j5j\" (UID: \"a1966558-be4f-4607-a746-739845ac6c46\") " pod="openstack/ovn-controller-ovs-29j5j" Dec 10 12:33:35 crc kubenswrapper[4689]: I1210 12:33:35.230879 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/a1966558-be4f-4607-a746-739845ac6c46-etc-ovs\") pod \"ovn-controller-ovs-29j5j\" (UID: \"a1966558-be4f-4607-a746-739845ac6c46\") " pod="openstack/ovn-controller-ovs-29j5j" Dec 10 12:33:35 crc kubenswrapper[4689]: I1210 12:33:35.230939 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/58cb894b-f745-4d93-8925-193c6ff871a6-var-run-ovn\") pod \"ovn-controller-jpztp\" (UID: \"58cb894b-f745-4d93-8925-193c6ff871a6\") " pod="openstack/ovn-controller-jpztp" Dec 10 12:33:35 crc kubenswrapper[4689]: I1210 12:33:35.230984 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/58cb894b-f745-4d93-8925-193c6ff871a6-combined-ca-bundle\") pod \"ovn-controller-jpztp\" (UID: \"58cb894b-f745-4d93-8925-193c6ff871a6\") " pod="openstack/ovn-controller-jpztp" Dec 10 12:33:35 crc kubenswrapper[4689]: I1210 12:33:35.231022 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/58cb894b-f745-4d93-8925-193c6ff871a6-ovn-controller-tls-certs\") pod \"ovn-controller-jpztp\" (UID: \"58cb894b-f745-4d93-8925-193c6ff871a6\") " pod="openstack/ovn-controller-jpztp" Dec 10 12:33:35 crc kubenswrapper[4689]: I1210 12:33:35.332857 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/a1966558-be4f-4607-a746-739845ac6c46-var-lib\") pod \"ovn-controller-ovs-29j5j\" (UID: \"a1966558-be4f-4607-a746-739845ac6c46\") " pod="openstack/ovn-controller-ovs-29j5j" Dec 10 12:33:35 crc kubenswrapper[4689]: I1210 12:33:35.332924 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/a1966558-be4f-4607-a746-739845ac6c46-etc-ovs\") pod \"ovn-controller-ovs-29j5j\" (UID: \"a1966558-be4f-4607-a746-739845ac6c46\") " pod="openstack/ovn-controller-ovs-29j5j" Dec 10 12:33:35 crc kubenswrapper[4689]: I1210 12:33:35.332954 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/58cb894b-f745-4d93-8925-193c6ff871a6-var-run-ovn\") pod \"ovn-controller-jpztp\" (UID: \"58cb894b-f745-4d93-8925-193c6ff871a6\") " pod="openstack/ovn-controller-jpztp" Dec 10 12:33:35 crc kubenswrapper[4689]: I1210 12:33:35.332993 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/58cb894b-f745-4d93-8925-193c6ff871a6-combined-ca-bundle\") pod \"ovn-controller-jpztp\" (UID: \"58cb894b-f745-4d93-8925-193c6ff871a6\") " pod="openstack/ovn-controller-jpztp" Dec 10 12:33:35 crc kubenswrapper[4689]: I1210 12:33:35.333019 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/58cb894b-f745-4d93-8925-193c6ff871a6-ovn-controller-tls-certs\") pod \"ovn-controller-jpztp\" (UID: \"58cb894b-f745-4d93-8925-193c6ff871a6\") " pod="openstack/ovn-controller-jpztp" Dec 10 12:33:35 crc kubenswrapper[4689]: I1210 12:33:35.333042 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/58cb894b-f745-4d93-8925-193c6ff871a6-var-log-ovn\") pod \"ovn-controller-jpztp\" (UID: \"58cb894b-f745-4d93-8925-193c6ff871a6\") " pod="openstack/ovn-controller-jpztp" Dec 10 12:33:35 crc kubenswrapper[4689]: I1210 12:33:35.333063 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/58cb894b-f745-4d93-8925-193c6ff871a6-var-run\") pod \"ovn-controller-jpztp\" (UID: \"58cb894b-f745-4d93-8925-193c6ff871a6\") " pod="openstack/ovn-controller-jpztp" Dec 10 12:33:35 crc kubenswrapper[4689]: I1210 12:33:35.333106 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/a1966558-be4f-4607-a746-739845ac6c46-var-log\") pod \"ovn-controller-ovs-29j5j\" (UID: \"a1966558-be4f-4607-a746-739845ac6c46\") " pod="openstack/ovn-controller-ovs-29j5j" Dec 10 12:33:35 crc kubenswrapper[4689]: I1210 12:33:35.333126 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lp7k5\" (UniqueName: \"kubernetes.io/projected/58cb894b-f745-4d93-8925-193c6ff871a6-kube-api-access-lp7k5\") pod \"ovn-controller-jpztp\" (UID: \"58cb894b-f745-4d93-8925-193c6ff871a6\") " pod="openstack/ovn-controller-jpztp" Dec 10 12:33:35 crc kubenswrapper[4689]: I1210 12:33:35.333153 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/a1966558-be4f-4607-a746-739845ac6c46-var-run\") pod \"ovn-controller-ovs-29j5j\" (UID: \"a1966558-be4f-4607-a746-739845ac6c46\") " pod="openstack/ovn-controller-ovs-29j5j" Dec 10 12:33:35 crc kubenswrapper[4689]: I1210 12:33:35.333194 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/58cb894b-f745-4d93-8925-193c6ff871a6-scripts\") pod \"ovn-controller-jpztp\" (UID: \"58cb894b-f745-4d93-8925-193c6ff871a6\") " pod="openstack/ovn-controller-jpztp" Dec 10 12:33:35 crc kubenswrapper[4689]: I1210 12:33:35.333215 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6b2nd\" (UniqueName: \"kubernetes.io/projected/a1966558-be4f-4607-a746-739845ac6c46-kube-api-access-6b2nd\") pod \"ovn-controller-ovs-29j5j\" (UID: \"a1966558-be4f-4607-a746-739845ac6c46\") " pod="openstack/ovn-controller-ovs-29j5j" Dec 10 12:33:35 crc kubenswrapper[4689]: I1210 12:33:35.333247 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a1966558-be4f-4607-a746-739845ac6c46-scripts\") pod \"ovn-controller-ovs-29j5j\" (UID: \"a1966558-be4f-4607-a746-739845ac6c46\") " pod="openstack/ovn-controller-ovs-29j5j" Dec 10 12:33:35 crc kubenswrapper[4689]: I1210 12:33:35.334128 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/a1966558-be4f-4607-a746-739845ac6c46-var-lib\") pod \"ovn-controller-ovs-29j5j\" (UID: \"a1966558-be4f-4607-a746-739845ac6c46\") " 
pod="openstack/ovn-controller-ovs-29j5j" Dec 10 12:33:35 crc kubenswrapper[4689]: I1210 12:33:35.334124 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/a1966558-be4f-4607-a746-739845ac6c46-etc-ovs\") pod \"ovn-controller-ovs-29j5j\" (UID: \"a1966558-be4f-4607-a746-739845ac6c46\") " pod="openstack/ovn-controller-ovs-29j5j" Dec 10 12:33:35 crc kubenswrapper[4689]: I1210 12:33:35.335415 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/a1966558-be4f-4607-a746-739845ac6c46-var-run\") pod \"ovn-controller-ovs-29j5j\" (UID: \"a1966558-be4f-4607-a746-739845ac6c46\") " pod="openstack/ovn-controller-ovs-29j5j" Dec 10 12:33:35 crc kubenswrapper[4689]: I1210 12:33:35.335512 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a1966558-be4f-4607-a746-739845ac6c46-scripts\") pod \"ovn-controller-ovs-29j5j\" (UID: \"a1966558-be4f-4607-a746-739845ac6c46\") " pod="openstack/ovn-controller-ovs-29j5j" Dec 10 12:33:35 crc kubenswrapper[4689]: I1210 12:33:35.335756 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/a1966558-be4f-4607-a746-739845ac6c46-var-log\") pod \"ovn-controller-ovs-29j5j\" (UID: \"a1966558-be4f-4607-a746-739845ac6c46\") " pod="openstack/ovn-controller-ovs-29j5j" Dec 10 12:33:35 crc kubenswrapper[4689]: I1210 12:33:35.335624 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/58cb894b-f745-4d93-8925-193c6ff871a6-var-log-ovn\") pod \"ovn-controller-jpztp\" (UID: \"58cb894b-f745-4d93-8925-193c6ff871a6\") " pod="openstack/ovn-controller-jpztp" Dec 10 12:33:35 crc kubenswrapper[4689]: I1210 12:33:35.335555 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/58cb894b-f745-4d93-8925-193c6ff871a6-var-run\") pod \"ovn-controller-jpztp\" (UID: \"58cb894b-f745-4d93-8925-193c6ff871a6\") " pod="openstack/ovn-controller-jpztp" Dec 10 12:33:35 crc kubenswrapper[4689]: I1210 12:33:35.337178 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/58cb894b-f745-4d93-8925-193c6ff871a6-var-run-ovn\") pod \"ovn-controller-jpztp\" (UID: \"58cb894b-f745-4d93-8925-193c6ff871a6\") " pod="openstack/ovn-controller-jpztp" Dec 10 12:33:35 crc kubenswrapper[4689]: I1210 12:33:35.338300 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/58cb894b-f745-4d93-8925-193c6ff871a6-scripts\") pod \"ovn-controller-jpztp\" (UID: \"58cb894b-f745-4d93-8925-193c6ff871a6\") " pod="openstack/ovn-controller-jpztp" Dec 10 12:33:35 crc kubenswrapper[4689]: I1210 12:33:35.340718 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/58cb894b-f745-4d93-8925-193c6ff871a6-combined-ca-bundle\") pod \"ovn-controller-jpztp\" (UID: \"58cb894b-f745-4d93-8925-193c6ff871a6\") " pod="openstack/ovn-controller-jpztp" Dec 10 12:33:35 crc kubenswrapper[4689]: I1210 12:33:35.340723 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/58cb894b-f745-4d93-8925-193c6ff871a6-ovn-controller-tls-certs\") pod 
\"ovn-controller-jpztp\" (UID: \"58cb894b-f745-4d93-8925-193c6ff871a6\") " pod="openstack/ovn-controller-jpztp" Dec 10 12:33:35 crc kubenswrapper[4689]: I1210 12:33:35.350116 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lp7k5\" (UniqueName: \"kubernetes.io/projected/58cb894b-f745-4d93-8925-193c6ff871a6-kube-api-access-lp7k5\") pod \"ovn-controller-jpztp\" (UID: \"58cb894b-f745-4d93-8925-193c6ff871a6\") " pod="openstack/ovn-controller-jpztp" Dec 10 12:33:35 crc kubenswrapper[4689]: I1210 12:33:35.354913 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6b2nd\" (UniqueName: \"kubernetes.io/projected/a1966558-be4f-4607-a746-739845ac6c46-kube-api-access-6b2nd\") pod \"ovn-controller-ovs-29j5j\" (UID: \"a1966558-be4f-4607-a746-739845ac6c46\") " pod="openstack/ovn-controller-ovs-29j5j" Dec 10 12:33:35 crc kubenswrapper[4689]: I1210 12:33:35.361630 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-jpztp" Dec 10 12:33:35 crc kubenswrapper[4689]: I1210 12:33:35.384287 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-29j5j" Dec 10 12:33:36 crc kubenswrapper[4689]: I1210 12:33:36.851459 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-0"] Dec 10 12:33:36 crc kubenswrapper[4689]: I1210 12:33:36.854592 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Dec 10 12:33:36 crc kubenswrapper[4689]: I1210 12:33:36.863677 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-config" Dec 10 12:33:36 crc kubenswrapper[4689]: I1210 12:33:36.863726 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovn-metrics" Dec 10 12:33:36 crc kubenswrapper[4689]: I1210 12:33:36.863953 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-scripts" Dec 10 12:33:36 crc kubenswrapper[4689]: I1210 12:33:36.864082 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-nb-dockercfg-d6wg2" Dec 10 12:33:36 crc kubenswrapper[4689]: I1210 12:33:36.863955 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-nb-ovndbs" Dec 10 12:33:36 crc kubenswrapper[4689]: I1210 12:33:36.868705 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Dec 10 12:33:36 crc kubenswrapper[4689]: I1210 12:33:36.957304 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/9c534934-f8a1-4029-8135-b190d180128a-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"9c534934-f8a1-4029-8135-b190d180128a\") " pod="openstack/ovsdbserver-nb-0" Dec 10 12:33:36 crc kubenswrapper[4689]: I1210 12:33:36.957390 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9c534934-f8a1-4029-8135-b190d180128a-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"9c534934-f8a1-4029-8135-b190d180128a\") " pod="openstack/ovsdbserver-nb-0" Dec 10 12:33:36 crc kubenswrapper[4689]: I1210 12:33:36.957427 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb-tls-certs\" 
(UniqueName: \"kubernetes.io/secret/9c534934-f8a1-4029-8135-b190d180128a-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"9c534934-f8a1-4029-8135-b190d180128a\") " pod="openstack/ovsdbserver-nb-0" Dec 10 12:33:36 crc kubenswrapper[4689]: I1210 12:33:36.957534 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"ovsdbserver-nb-0\" (UID: \"9c534934-f8a1-4029-8135-b190d180128a\") " pod="openstack/ovsdbserver-nb-0" Dec 10 12:33:36 crc kubenswrapper[4689]: I1210 12:33:36.957613 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/9c534934-f8a1-4029-8135-b190d180128a-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"9c534934-f8a1-4029-8135-b190d180128a\") " pod="openstack/ovsdbserver-nb-0" Dec 10 12:33:36 crc kubenswrapper[4689]: I1210 12:33:36.957654 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9c534934-f8a1-4029-8135-b190d180128a-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"9c534934-f8a1-4029-8135-b190d180128a\") " pod="openstack/ovsdbserver-nb-0" Dec 10 12:33:36 crc kubenswrapper[4689]: I1210 12:33:36.957671 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m5hcs\" (UniqueName: \"kubernetes.io/projected/9c534934-f8a1-4029-8135-b190d180128a-kube-api-access-m5hcs\") pod \"ovsdbserver-nb-0\" (UID: \"9c534934-f8a1-4029-8135-b190d180128a\") " pod="openstack/ovsdbserver-nb-0" Dec 10 12:33:36 crc kubenswrapper[4689]: I1210 12:33:36.957756 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9c534934-f8a1-4029-8135-b190d180128a-config\") pod \"ovsdbserver-nb-0\" (UID: \"9c534934-f8a1-4029-8135-b190d180128a\") " pod="openstack/ovsdbserver-nb-0" Dec 10 12:33:37 crc kubenswrapper[4689]: I1210 12:33:37.059072 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/9c534934-f8a1-4029-8135-b190d180128a-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"9c534934-f8a1-4029-8135-b190d180128a\") " pod="openstack/ovsdbserver-nb-0" Dec 10 12:33:37 crc kubenswrapper[4689]: I1210 12:33:37.059151 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9c534934-f8a1-4029-8135-b190d180128a-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"9c534934-f8a1-4029-8135-b190d180128a\") " pod="openstack/ovsdbserver-nb-0" Dec 10 12:33:37 crc kubenswrapper[4689]: I1210 12:33:37.059186 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m5hcs\" (UniqueName: \"kubernetes.io/projected/9c534934-f8a1-4029-8135-b190d180128a-kube-api-access-m5hcs\") pod \"ovsdbserver-nb-0\" (UID: \"9c534934-f8a1-4029-8135-b190d180128a\") " pod="openstack/ovsdbserver-nb-0" Dec 10 12:33:37 crc kubenswrapper[4689]: I1210 12:33:37.059253 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9c534934-f8a1-4029-8135-b190d180128a-config\") pod \"ovsdbserver-nb-0\" (UID: \"9c534934-f8a1-4029-8135-b190d180128a\") " pod="openstack/ovsdbserver-nb-0" Dec 10 12:33:37 crc 
kubenswrapper[4689]: I1210 12:33:37.059374 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/9c534934-f8a1-4029-8135-b190d180128a-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"9c534934-f8a1-4029-8135-b190d180128a\") " pod="openstack/ovsdbserver-nb-0" Dec 10 12:33:37 crc kubenswrapper[4689]: I1210 12:33:37.059432 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9c534934-f8a1-4029-8135-b190d180128a-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"9c534934-f8a1-4029-8135-b190d180128a\") " pod="openstack/ovsdbserver-nb-0" Dec 10 12:33:37 crc kubenswrapper[4689]: I1210 12:33:37.059480 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/9c534934-f8a1-4029-8135-b190d180128a-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"9c534934-f8a1-4029-8135-b190d180128a\") " pod="openstack/ovsdbserver-nb-0" Dec 10 12:33:37 crc kubenswrapper[4689]: I1210 12:33:37.059549 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"ovsdbserver-nb-0\" (UID: \"9c534934-f8a1-4029-8135-b190d180128a\") " pod="openstack/ovsdbserver-nb-0" Dec 10 12:33:37 crc kubenswrapper[4689]: I1210 12:33:37.060026 4689 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"ovsdbserver-nb-0\" (UID: \"9c534934-f8a1-4029-8135-b190d180128a\") device mount path \"/mnt/openstack/pv03\"" pod="openstack/ovsdbserver-nb-0" Dec 10 12:33:37 crc kubenswrapper[4689]: I1210 12:33:37.060230 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/9c534934-f8a1-4029-8135-b190d180128a-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"9c534934-f8a1-4029-8135-b190d180128a\") " pod="openstack/ovsdbserver-nb-0" Dec 10 12:33:37 crc kubenswrapper[4689]: I1210 12:33:37.060300 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9c534934-f8a1-4029-8135-b190d180128a-config\") pod \"ovsdbserver-nb-0\" (UID: \"9c534934-f8a1-4029-8135-b190d180128a\") " pod="openstack/ovsdbserver-nb-0" Dec 10 12:33:37 crc kubenswrapper[4689]: I1210 12:33:37.060379 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9c534934-f8a1-4029-8135-b190d180128a-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"9c534934-f8a1-4029-8135-b190d180128a\") " pod="openstack/ovsdbserver-nb-0" Dec 10 12:33:37 crc kubenswrapper[4689]: I1210 12:33:37.064636 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9c534934-f8a1-4029-8135-b190d180128a-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"9c534934-f8a1-4029-8135-b190d180128a\") " pod="openstack/ovsdbserver-nb-0" Dec 10 12:33:37 crc kubenswrapper[4689]: I1210 12:33:37.064959 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/9c534934-f8a1-4029-8135-b190d180128a-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: 
\"9c534934-f8a1-4029-8135-b190d180128a\") " pod="openstack/ovsdbserver-nb-0" Dec 10 12:33:37 crc kubenswrapper[4689]: I1210 12:33:37.065514 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/9c534934-f8a1-4029-8135-b190d180128a-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"9c534934-f8a1-4029-8135-b190d180128a\") " pod="openstack/ovsdbserver-nb-0" Dec 10 12:33:37 crc kubenswrapper[4689]: I1210 12:33:37.083708 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m5hcs\" (UniqueName: \"kubernetes.io/projected/9c534934-f8a1-4029-8135-b190d180128a-kube-api-access-m5hcs\") pod \"ovsdbserver-nb-0\" (UID: \"9c534934-f8a1-4029-8135-b190d180128a\") " pod="openstack/ovsdbserver-nb-0" Dec 10 12:33:37 crc kubenswrapper[4689]: I1210 12:33:37.105104 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"ovsdbserver-nb-0\" (UID: \"9c534934-f8a1-4029-8135-b190d180128a\") " pod="openstack/ovsdbserver-nb-0" Dec 10 12:33:37 crc kubenswrapper[4689]: I1210 12:33:37.166633 4689 patch_prober.go:28] interesting pod/machine-config-daemon-db6zk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 10 12:33:37 crc kubenswrapper[4689]: I1210 12:33:37.166708 4689 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-db6zk" podUID="a41ebdcd-910f-4669-992d-296e1a92162b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 10 12:33:37 crc kubenswrapper[4689]: I1210 12:33:37.212458 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Dec 10 12:33:38 crc kubenswrapper[4689]: I1210 12:33:38.694075 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-0"] Dec 10 12:33:38 crc kubenswrapper[4689]: I1210 12:33:38.696964 4689 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-sb-0" Dec 10 12:33:38 crc kubenswrapper[4689]: I1210 12:33:38.699035 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-sb-dockercfg-mv644" Dec 10 12:33:38 crc kubenswrapper[4689]: I1210 12:33:38.700374 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-config" Dec 10 12:33:38 crc kubenswrapper[4689]: I1210 12:33:38.700572 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-sb-ovndbs" Dec 10 12:33:38 crc kubenswrapper[4689]: I1210 12:33:38.700703 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-scripts" Dec 10 12:33:38 crc kubenswrapper[4689]: I1210 12:33:38.736314 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Dec 10 12:33:38 crc kubenswrapper[4689]: I1210 12:33:38.750107 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-69fpd" event={"ID":"7934f7f2-2d11-43a1-8a79-002d383f8c34","Type":"ContainerStarted","Data":"e9db4cfd3079d04c68cd62a76fee585cac44f5d23f04b3688374f9cb471f0ef6"} Dec 10 12:33:38 crc kubenswrapper[4689]: I1210 12:33:38.787965 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c18b8eee-7b2e-494c-b4e4-7896aa80a1ed-config\") pod \"ovsdbserver-sb-0\" (UID: \"c18b8eee-7b2e-494c-b4e4-7896aa80a1ed\") " pod="openstack/ovsdbserver-sb-0" Dec 10 12:33:38 crc kubenswrapper[4689]: I1210 12:33:38.788066 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/c18b8eee-7b2e-494c-b4e4-7896aa80a1ed-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"c18b8eee-7b2e-494c-b4e4-7896aa80a1ed\") " pod="openstack/ovsdbserver-sb-0" Dec 10 12:33:38 crc kubenswrapper[4689]: I1210 12:33:38.788098 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"ovsdbserver-sb-0\" (UID: \"c18b8eee-7b2e-494c-b4e4-7896aa80a1ed\") " pod="openstack/ovsdbserver-sb-0" Dec 10 12:33:38 crc kubenswrapper[4689]: I1210 12:33:38.788115 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c18b8eee-7b2e-494c-b4e4-7896aa80a1ed-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"c18b8eee-7b2e-494c-b4e4-7896aa80a1ed\") " pod="openstack/ovsdbserver-sb-0" Dec 10 12:33:38 crc kubenswrapper[4689]: I1210 12:33:38.788139 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c18b8eee-7b2e-494c-b4e4-7896aa80a1ed-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"c18b8eee-7b2e-494c-b4e4-7896aa80a1ed\") " pod="openstack/ovsdbserver-sb-0" Dec 10 12:33:38 crc kubenswrapper[4689]: I1210 12:33:38.788301 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fxd2r\" (UniqueName: \"kubernetes.io/projected/c18b8eee-7b2e-494c-b4e4-7896aa80a1ed-kube-api-access-fxd2r\") pod \"ovsdbserver-sb-0\" (UID: \"c18b8eee-7b2e-494c-b4e4-7896aa80a1ed\") " pod="openstack/ovsdbserver-sb-0" Dec 10 12:33:38 crc kubenswrapper[4689]: 
I1210 12:33:38.788415 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/c18b8eee-7b2e-494c-b4e4-7896aa80a1ed-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"c18b8eee-7b2e-494c-b4e4-7896aa80a1ed\") " pod="openstack/ovsdbserver-sb-0" Dec 10 12:33:38 crc kubenswrapper[4689]: I1210 12:33:38.788496 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/c18b8eee-7b2e-494c-b4e4-7896aa80a1ed-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"c18b8eee-7b2e-494c-b4e4-7896aa80a1ed\") " pod="openstack/ovsdbserver-sb-0" Dec 10 12:33:38 crc kubenswrapper[4689]: I1210 12:33:38.890652 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c18b8eee-7b2e-494c-b4e4-7896aa80a1ed-config\") pod \"ovsdbserver-sb-0\" (UID: \"c18b8eee-7b2e-494c-b4e4-7896aa80a1ed\") " pod="openstack/ovsdbserver-sb-0" Dec 10 12:33:38 crc kubenswrapper[4689]: I1210 12:33:38.890798 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/c18b8eee-7b2e-494c-b4e4-7896aa80a1ed-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"c18b8eee-7b2e-494c-b4e4-7896aa80a1ed\") " pod="openstack/ovsdbserver-sb-0" Dec 10 12:33:38 crc kubenswrapper[4689]: I1210 12:33:38.891043 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"ovsdbserver-sb-0\" (UID: \"c18b8eee-7b2e-494c-b4e4-7896aa80a1ed\") " pod="openstack/ovsdbserver-sb-0" Dec 10 12:33:38 crc kubenswrapper[4689]: I1210 12:33:38.892100 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c18b8eee-7b2e-494c-b4e4-7896aa80a1ed-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"c18b8eee-7b2e-494c-b4e4-7896aa80a1ed\") " pod="openstack/ovsdbserver-sb-0" Dec 10 12:33:38 crc kubenswrapper[4689]: I1210 12:33:38.891394 4689 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"ovsdbserver-sb-0\" (UID: \"c18b8eee-7b2e-494c-b4e4-7896aa80a1ed\") device mount path \"/mnt/openstack/pv06\"" pod="openstack/ovsdbserver-sb-0" Dec 10 12:33:38 crc kubenswrapper[4689]: I1210 12:33:38.892151 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c18b8eee-7b2e-494c-b4e4-7896aa80a1ed-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"c18b8eee-7b2e-494c-b4e4-7896aa80a1ed\") " pod="openstack/ovsdbserver-sb-0" Dec 10 12:33:38 crc kubenswrapper[4689]: I1210 12:33:38.892322 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fxd2r\" (UniqueName: \"kubernetes.io/projected/c18b8eee-7b2e-494c-b4e4-7896aa80a1ed-kube-api-access-fxd2r\") pod \"ovsdbserver-sb-0\" (UID: \"c18b8eee-7b2e-494c-b4e4-7896aa80a1ed\") " pod="openstack/ovsdbserver-sb-0" Dec 10 12:33:38 crc kubenswrapper[4689]: I1210 12:33:38.892391 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/c18b8eee-7b2e-494c-b4e4-7896aa80a1ed-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"c18b8eee-7b2e-494c-b4e4-7896aa80a1ed\") " pod="openstack/ovsdbserver-sb-0" Dec 10 12:33:38 crc kubenswrapper[4689]: I1210 12:33:38.892426 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c18b8eee-7b2e-494c-b4e4-7896aa80a1ed-config\") pod \"ovsdbserver-sb-0\" (UID: \"c18b8eee-7b2e-494c-b4e4-7896aa80a1ed\") " pod="openstack/ovsdbserver-sb-0" Dec 10 12:33:38 crc kubenswrapper[4689]: I1210 12:33:38.892469 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/c18b8eee-7b2e-494c-b4e4-7896aa80a1ed-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"c18b8eee-7b2e-494c-b4e4-7896aa80a1ed\") " pod="openstack/ovsdbserver-sb-0" Dec 10 12:33:38 crc kubenswrapper[4689]: I1210 12:33:38.893272 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/c18b8eee-7b2e-494c-b4e4-7896aa80a1ed-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"c18b8eee-7b2e-494c-b4e4-7896aa80a1ed\") " pod="openstack/ovsdbserver-sb-0" Dec 10 12:33:38 crc kubenswrapper[4689]: I1210 12:33:38.893366 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c18b8eee-7b2e-494c-b4e4-7896aa80a1ed-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"c18b8eee-7b2e-494c-b4e4-7896aa80a1ed\") " pod="openstack/ovsdbserver-sb-0" Dec 10 12:33:38 crc kubenswrapper[4689]: I1210 12:33:38.906811 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c18b8eee-7b2e-494c-b4e4-7896aa80a1ed-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"c18b8eee-7b2e-494c-b4e4-7896aa80a1ed\") " pod="openstack/ovsdbserver-sb-0" Dec 10 12:33:38 crc kubenswrapper[4689]: I1210 12:33:38.909885 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/c18b8eee-7b2e-494c-b4e4-7896aa80a1ed-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"c18b8eee-7b2e-494c-b4e4-7896aa80a1ed\") " pod="openstack/ovsdbserver-sb-0" Dec 10 12:33:38 crc kubenswrapper[4689]: I1210 12:33:38.912833 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fxd2r\" (UniqueName: \"kubernetes.io/projected/c18b8eee-7b2e-494c-b4e4-7896aa80a1ed-kube-api-access-fxd2r\") pod \"ovsdbserver-sb-0\" (UID: \"c18b8eee-7b2e-494c-b4e4-7896aa80a1ed\") " pod="openstack/ovsdbserver-sb-0" Dec 10 12:33:38 crc kubenswrapper[4689]: I1210 12:33:38.916247 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"ovsdbserver-sb-0\" (UID: \"c18b8eee-7b2e-494c-b4e4-7896aa80a1ed\") " pod="openstack/ovsdbserver-sb-0" Dec 10 12:33:38 crc kubenswrapper[4689]: I1210 12:33:38.919879 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/c18b8eee-7b2e-494c-b4e4-7896aa80a1ed-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"c18b8eee-7b2e-494c-b4e4-7896aa80a1ed\") " pod="openstack/ovsdbserver-sb-0" Dec 10 12:33:39 crc kubenswrapper[4689]: I1210 12:33:39.028527 4689 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-sb-0" Dec 10 12:33:39 crc kubenswrapper[4689]: E1210 12:33:39.213069 4689 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Dec 10 12:33:39 crc kubenswrapper[4689]: E1210 12:33:39.213235 4689 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:ndfhb5h667h568h584h5f9h58dh565h664h587h597h577h64bh5c4h66fh647hbdh68ch5c5h68dh686h5f7h64hd7hc6h55fh57bh98h57fh87h5fh57fq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bqnfb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-78dd6ddcc-l9v87_openstack(92d945bf-3c25-4fee-8c13-67bba7fb1a74): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 10 12:33:39 crc kubenswrapper[4689]: E1210 12:33:39.214721 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-78dd6ddcc-l9v87" podUID="92d945bf-3c25-4fee-8c13-67bba7fb1a74" Dec 10 12:33:39 crc kubenswrapper[4689]: E1210 12:33:39.260527 4689 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Dec 10 12:33:39 crc kubenswrapper[4689]: E1210 12:33:39.260765 4689 kuberuntime_manager.go:1274] "Unhandled Error" err="init container 
&Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nffh5bdhf4h5f8h79h55h77h58fh56dh7bh6fh578hbch55dh68h56bhd9h65dh57ch658hc9h566h666h688h58h65dh684h5d7h6ch575h5d6h88q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-p7c9w,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-675f4bcbfc-9pg5q_openstack(6f9bb41c-0174-4d0b-9d9b-6a815288ac19): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 10 12:33:39 crc kubenswrapper[4689]: E1210 12:33:39.261931 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-675f4bcbfc-9pg5q" podUID="6f9bb41c-0174-4d0b-9d9b-6a815288ac19" Dec 10 12:33:39 crc kubenswrapper[4689]: I1210 12:33:39.760423 4689 generic.go:334] "Generic (PLEG): container finished" podID="7934f7f2-2d11-43a1-8a79-002d383f8c34" containerID="47ffb9dca172c946792a6b459bc7d8d1499c53fd7b8e537a77c3b24fa8e7adbd" exitCode=0 Dec 10 12:33:39 crc kubenswrapper[4689]: I1210 12:33:39.761209 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-69fpd" event={"ID":"7934f7f2-2d11-43a1-8a79-002d383f8c34","Type":"ContainerDied","Data":"47ffb9dca172c946792a6b459bc7d8d1499c53fd7b8e537a77c3b24fa8e7adbd"} Dec 10 12:33:39 crc kubenswrapper[4689]: I1210 12:33:39.856707 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Dec 10 12:33:39 crc kubenswrapper[4689]: I1210 12:33:39.865776 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-hrjxw"] Dec 10 12:33:39 crc kubenswrapper[4689]: W1210 12:33:39.868258 4689 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb45272bc_38b8_4fa2_8710_57da25792b73.slice/crio-c6f2db6e83ef659d996806762a65b30c10c7db315915a4f75245eb7a40d3af49 WatchSource:0}: Error finding container c6f2db6e83ef659d996806762a65b30c10c7db315915a4f75245eb7a40d3af49: Status 404 returned error can't find the container with id c6f2db6e83ef659d996806762a65b30c10c7db315915a4f75245eb7a40d3af49 Dec 10 12:33:39 crc kubenswrapper[4689]: W1210 12:33:39.884537 4689 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb3049c9c_fbe7_4bc9_9fbf_7365b98671be.slice/crio-37b1b5ef19e27fe4f68baf9be69a2c0218bcbeed7b34332c6f67d37b14cf0abe WatchSource:0}: Error finding container 37b1b5ef19e27fe4f68baf9be69a2c0218bcbeed7b34332c6f67d37b14cf0abe: Status 404 returned error can't find the container with id 37b1b5ef19e27fe4f68baf9be69a2c0218bcbeed7b34332c6f67d37b14cf0abe Dec 10 12:33:40 crc kubenswrapper[4689]: I1210 12:33:40.229519 4689 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-9pg5q" Dec 10 12:33:40 crc kubenswrapper[4689]: I1210 12:33:40.253419 4689 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-l9v87" Dec 10 12:33:40 crc kubenswrapper[4689]: I1210 12:33:40.291017 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Dec 10 12:33:40 crc kubenswrapper[4689]: W1210 12:33:40.291734 4689 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode5d9caa2_f209_4a7f_a0d8_353aa111c264.slice/crio-9dd5fed8ebe7a8e26f7c21e8a209a2bdcef5cc05c39981f46781894eb6ff65fc WatchSource:0}: Error finding container 9dd5fed8ebe7a8e26f7c21e8a209a2bdcef5cc05c39981f46781894eb6ff65fc: Status 404 returned error can't find the container with id 9dd5fed8ebe7a8e26f7c21e8a209a2bdcef5cc05c39981f46781894eb6ff65fc Dec 10 12:33:40 crc kubenswrapper[4689]: I1210 12:33:40.319813 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p7c9w\" (UniqueName: \"kubernetes.io/projected/6f9bb41c-0174-4d0b-9d9b-6a815288ac19-kube-api-access-p7c9w\") pod \"6f9bb41c-0174-4d0b-9d9b-6a815288ac19\" (UID: \"6f9bb41c-0174-4d0b-9d9b-6a815288ac19\") " Dec 10 12:33:40 crc kubenswrapper[4689]: I1210 12:33:40.320179 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/92d945bf-3c25-4fee-8c13-67bba7fb1a74-config\") pod \"92d945bf-3c25-4fee-8c13-67bba7fb1a74\" (UID: \"92d945bf-3c25-4fee-8c13-67bba7fb1a74\") " Dec 10 12:33:40 crc kubenswrapper[4689]: I1210 12:33:40.320620 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6f9bb41c-0174-4d0b-9d9b-6a815288ac19-config\") pod \"6f9bb41c-0174-4d0b-9d9b-6a815288ac19\" (UID: \"6f9bb41c-0174-4d0b-9d9b-6a815288ac19\") " Dec 10 12:33:40 crc kubenswrapper[4689]: I1210 12:33:40.320661 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bqnfb\" (UniqueName: \"kubernetes.io/projected/92d945bf-3c25-4fee-8c13-67bba7fb1a74-kube-api-access-bqnfb\") pod \"92d945bf-3c25-4fee-8c13-67bba7fb1a74\" (UID: \"92d945bf-3c25-4fee-8c13-67bba7fb1a74\") " Dec 10 12:33:40 crc kubenswrapper[4689]: I1210 12:33:40.320755 4689 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/92d945bf-3c25-4fee-8c13-67bba7fb1a74-dns-svc\") pod \"92d945bf-3c25-4fee-8c13-67bba7fb1a74\" (UID: \"92d945bf-3c25-4fee-8c13-67bba7fb1a74\") " Dec 10 12:33:40 crc kubenswrapper[4689]: I1210 12:33:40.320746 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/92d945bf-3c25-4fee-8c13-67bba7fb1a74-config" (OuterVolumeSpecName: "config") pod "92d945bf-3c25-4fee-8c13-67bba7fb1a74" (UID: "92d945bf-3c25-4fee-8c13-67bba7fb1a74"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 12:33:40 crc kubenswrapper[4689]: I1210 12:33:40.321006 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6f9bb41c-0174-4d0b-9d9b-6a815288ac19-config" (OuterVolumeSpecName: "config") pod "6f9bb41c-0174-4d0b-9d9b-6a815288ac19" (UID: "6f9bb41c-0174-4d0b-9d9b-6a815288ac19"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 12:33:40 crc kubenswrapper[4689]: I1210 12:33:40.321207 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/92d945bf-3c25-4fee-8c13-67bba7fb1a74-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "92d945bf-3c25-4fee-8c13-67bba7fb1a74" (UID: "92d945bf-3c25-4fee-8c13-67bba7fb1a74"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 12:33:40 crc kubenswrapper[4689]: I1210 12:33:40.323842 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/92d945bf-3c25-4fee-8c13-67bba7fb1a74-kube-api-access-bqnfb" (OuterVolumeSpecName: "kube-api-access-bqnfb") pod "92d945bf-3c25-4fee-8c13-67bba7fb1a74" (UID: "92d945bf-3c25-4fee-8c13-67bba7fb1a74"). InnerVolumeSpecName "kube-api-access-bqnfb". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 12:33:40 crc kubenswrapper[4689]: I1210 12:33:40.324182 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6f9bb41c-0174-4d0b-9d9b-6a815288ac19-kube-api-access-p7c9w" (OuterVolumeSpecName: "kube-api-access-p7c9w") pod "6f9bb41c-0174-4d0b-9d9b-6a815288ac19" (UID: "6f9bb41c-0174-4d0b-9d9b-6a815288ac19"). InnerVolumeSpecName "kube-api-access-p7c9w". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 12:33:40 crc kubenswrapper[4689]: I1210 12:33:40.422999 4689 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6f9bb41c-0174-4d0b-9d9b-6a815288ac19-config\") on node \"crc\" DevicePath \"\"" Dec 10 12:33:40 crc kubenswrapper[4689]: I1210 12:33:40.423033 4689 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bqnfb\" (UniqueName: \"kubernetes.io/projected/92d945bf-3c25-4fee-8c13-67bba7fb1a74-kube-api-access-bqnfb\") on node \"crc\" DevicePath \"\"" Dec 10 12:33:40 crc kubenswrapper[4689]: I1210 12:33:40.423049 4689 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/92d945bf-3c25-4fee-8c13-67bba7fb1a74-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 10 12:33:40 crc kubenswrapper[4689]: I1210 12:33:40.423057 4689 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p7c9w\" (UniqueName: \"kubernetes.io/projected/6f9bb41c-0174-4d0b-9d9b-6a815288ac19-kube-api-access-p7c9w\") on node \"crc\" DevicePath \"\"" Dec 10 12:33:40 crc kubenswrapper[4689]: I1210 12:33:40.423066 4689 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/92d945bf-3c25-4fee-8c13-67bba7fb1a74-config\") on node \"crc\" DevicePath \"\"" Dec 10 12:33:40 crc kubenswrapper[4689]: I1210 12:33:40.521300 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Dec 10 12:33:40 crc kubenswrapper[4689]: I1210 12:33:40.521338 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Dec 10 12:33:40 crc kubenswrapper[4689]: I1210 12:33:40.526963 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Dec 10 12:33:40 crc kubenswrapper[4689]: I1210 12:33:40.543031 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Dec 10 12:33:40 crc kubenswrapper[4689]: I1210 12:33:40.572480 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Dec 10 12:33:40 crc kubenswrapper[4689]: I1210 12:33:40.585271 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-jpztp"] Dec 10 12:33:40 crc kubenswrapper[4689]: I1210 12:33:40.619960 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Dec 10 12:33:40 crc kubenswrapper[4689]: I1210 12:33:40.705693 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-29j5j"] Dec 10 12:33:40 crc kubenswrapper[4689]: I1210 12:33:40.767647 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"9c534934-f8a1-4029-8135-b190d180128a","Type":"ContainerStarted","Data":"0abb11b328055e622db987612f5b26dc2c775223e426c93c121fec1132d5b213"} Dec 10 12:33:40 crc kubenswrapper[4689]: I1210 12:33:40.768900 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"5de970c2-b559-4b8a-86f2-85b07c2292b1","Type":"ContainerStarted","Data":"1b70fddaaab09d870fc1a9fb61586661a8898dabb7d7f01f474f301159779565"} Dec 10 12:33:40 crc kubenswrapper[4689]: I1210 12:33:40.770000 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"33bee83d-eb0f-4e5e-9617-f8102008436a","Type":"ContainerStarted","Data":"90c8adb886c447912708a6eb929c0ef39c9739a37adcdd927d9f9efa1b9f665b"} Dec 10 12:33:40 
crc kubenswrapper[4689]: I1210 12:33:40.772073 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-69fpd" event={"ID":"7934f7f2-2d11-43a1-8a79-002d383f8c34","Type":"ContainerStarted","Data":"c3994f58b65480670016581afb3d2142a0ab48047fb7a63af7eb2ad7c643d74c"} Dec 10 12:33:40 crc kubenswrapper[4689]: I1210 12:33:40.772242 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-57d769cc4f-69fpd" Dec 10 12:33:40 crc kubenswrapper[4689]: I1210 12:33:40.774080 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"e5d9caa2-f209-4a7f-a0d8-353aa111c264","Type":"ContainerStarted","Data":"9dd5fed8ebe7a8e26f7c21e8a209a2bdcef5cc05c39981f46781894eb6ff65fc"} Dec 10 12:33:40 crc kubenswrapper[4689]: I1210 12:33:40.775806 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"c18b8eee-7b2e-494c-b4e4-7896aa80a1ed","Type":"ContainerStarted","Data":"4837352b1af6f575a1d9687a29b1b827dd7c549622b2abc52701dd9090a17820"} Dec 10 12:33:40 crc kubenswrapper[4689]: I1210 12:33:40.782711 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-9pg5q" event={"ID":"6f9bb41c-0174-4d0b-9d9b-6a815288ac19","Type":"ContainerDied","Data":"4314911cb1e3c2f0bc32ff3dfd01c01b04c3cd1abf678d852b9e1d21b37cd00e"} Dec 10 12:33:40 crc kubenswrapper[4689]: I1210 12:33:40.782745 4689 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-9pg5q" Dec 10 12:33:40 crc kubenswrapper[4689]: I1210 12:33:40.788114 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-jpztp" event={"ID":"58cb894b-f745-4d93-8925-193c6ff871a6","Type":"ContainerStarted","Data":"7d16db71d3a1c315bf699ed24718807d8d46f0d6be383537a70e097890c5e3da"} Dec 10 12:33:40 crc kubenswrapper[4689]: I1210 12:33:40.790234 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"49777225-4829-4cb0-bdd3-3e29ee4f0518","Type":"ContainerStarted","Data":"17f8d5a808824f66add6f1e7a1c562c2c6677e4efdcb1f3e5ae8d358e3d210c9"} Dec 10 12:33:40 crc kubenswrapper[4689]: I1210 12:33:40.791057 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-l9v87" event={"ID":"92d945bf-3c25-4fee-8c13-67bba7fb1a74","Type":"ContainerDied","Data":"7262ed3f3be90a6aba9c4154003c560e53d3e34f63b0d1283023c6a06a5b6092"} Dec 10 12:33:40 crc kubenswrapper[4689]: I1210 12:33:40.791085 4689 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-l9v87" Dec 10 12:33:40 crc kubenswrapper[4689]: I1210 12:33:40.794604 4689 generic.go:334] "Generic (PLEG): container finished" podID="b3049c9c-fbe7-4bc9-9fbf-7365b98671be" containerID="e81772b402a692b6536f19e2b2babc0db41511a0b096420045da4fbe721bb453" exitCode=0 Dec 10 12:33:40 crc kubenswrapper[4689]: I1210 12:33:40.794657 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-hrjxw" event={"ID":"b3049c9c-fbe7-4bc9-9fbf-7365b98671be","Type":"ContainerDied","Data":"e81772b402a692b6536f19e2b2babc0db41511a0b096420045da4fbe721bb453"} Dec 10 12:33:40 crc kubenswrapper[4689]: I1210 12:33:40.794677 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-hrjxw" event={"ID":"b3049c9c-fbe7-4bc9-9fbf-7365b98671be","Type":"ContainerStarted","Data":"37b1b5ef19e27fe4f68baf9be69a2c0218bcbeed7b34332c6f67d37b14cf0abe"} Dec 10 12:33:40 crc kubenswrapper[4689]: I1210 12:33:40.794682 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-57d769cc4f-69fpd" podStartSLOduration=14.745002327 podStartE2EDuration="15.79466209s" podCreationTimestamp="2025-12-10 12:33:25 +0000 UTC" firstStartedPulling="2025-12-10 12:33:38.373462061 +0000 UTC m=+1086.161543239" lastFinishedPulling="2025-12-10 12:33:39.423121864 +0000 UTC m=+1087.211203002" observedRunningTime="2025-12-10 12:33:40.786858676 +0000 UTC m=+1088.574939854" watchObservedRunningTime="2025-12-10 12:33:40.79466209 +0000 UTC m=+1088.582743228" Dec 10 12:33:40 crc kubenswrapper[4689]: I1210 12:33:40.795681 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"b45272bc-38b8-4fa2-8710-57da25792b73","Type":"ContainerStarted","Data":"c6f2db6e83ef659d996806762a65b30c10c7db315915a4f75245eb7a40d3af49"} Dec 10 12:33:40 crc kubenswrapper[4689]: I1210 12:33:40.798304 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"0b5ff1d1-6330-4077-9b2d-dbec926d9e8c","Type":"ContainerStarted","Data":"7a94aba0a4e241ca026b2b9d965052c8f875b3827a7dae6a6ae527e6c97cba34"} Dec 10 12:33:40 crc kubenswrapper[4689]: I1210 12:33:40.836101 4689 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-9pg5q"] Dec 10 12:33:40 crc kubenswrapper[4689]: I1210 12:33:40.843879 4689 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-9pg5q"] Dec 10 12:33:40 crc kubenswrapper[4689]: I1210 12:33:40.863266 4689 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-l9v87"] Dec 10 12:33:40 crc kubenswrapper[4689]: I1210 12:33:40.870200 4689 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-l9v87"] Dec 10 12:33:41 crc kubenswrapper[4689]: I1210 12:33:41.806689 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-29j5j" event={"ID":"a1966558-be4f-4607-a746-739845ac6c46","Type":"ContainerStarted","Data":"f05cd1c1ed5d0aae2a156a156665a79bd39e61114efcb92725031503e3ebc024"} Dec 10 12:33:42 crc kubenswrapper[4689]: I1210 12:33:42.514271 4689 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6f9bb41c-0174-4d0b-9d9b-6a815288ac19" path="/var/lib/kubelet/pods/6f9bb41c-0174-4d0b-9d9b-6a815288ac19/volumes" Dec 10 12:33:42 crc kubenswrapper[4689]: I1210 12:33:42.515095 4689 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="92d945bf-3c25-4fee-8c13-67bba7fb1a74" path="/var/lib/kubelet/pods/92d945bf-3c25-4fee-8c13-67bba7fb1a74/volumes" Dec 10 12:33:45 crc kubenswrapper[4689]: I1210 12:33:45.397138 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-57d769cc4f-69fpd" Dec 10 12:33:45 crc kubenswrapper[4689]: I1210 12:33:45.442644 4689 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-hrjxw"] Dec 10 12:33:48 crc kubenswrapper[4689]: I1210 12:33:48.869000 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"49777225-4829-4cb0-bdd3-3e29ee4f0518","Type":"ContainerStarted","Data":"1c444796ed36ae3e2f5885e5b89dc4b2e916374543f3c2e3a111fcc63b4137c5"} Dec 10 12:33:48 crc kubenswrapper[4689]: I1210 12:33:48.871995 4689 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-666b6646f7-hrjxw" podUID="b3049c9c-fbe7-4bc9-9fbf-7365b98671be" containerName="dnsmasq-dns" containerID="cri-o://aff3d932bfd2a0131c23ff364b6d0bf6bbca178033cd6724b96b5b501d41bc5f" gracePeriod=10 Dec 10 12:33:48 crc kubenswrapper[4689]: I1210 12:33:48.872002 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-hrjxw" event={"ID":"b3049c9c-fbe7-4bc9-9fbf-7365b98671be","Type":"ContainerStarted","Data":"aff3d932bfd2a0131c23ff364b6d0bf6bbca178033cd6724b96b5b501d41bc5f"} Dec 10 12:33:48 crc kubenswrapper[4689]: I1210 12:33:48.872132 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-666b6646f7-hrjxw" Dec 10 12:33:48 crc kubenswrapper[4689]: I1210 12:33:48.874683 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"5de970c2-b559-4b8a-86f2-85b07c2292b1","Type":"ContainerStarted","Data":"623723edceced098ef9cd35003b98e4716a6a43bf611b5d805d8168ace7ac205"} Dec 10 12:33:48 crc kubenswrapper[4689]: I1210 12:33:48.940251 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-666b6646f7-hrjxw" podStartSLOduration=24.940232821 podStartE2EDuration="24.940232821s" podCreationTimestamp="2025-12-10 12:33:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 12:33:48.93898403 +0000 UTC m=+1096.727065168" watchObservedRunningTime="2025-12-10 12:33:48.940232821 +0000 UTC m=+1096.728313959" Dec 10 12:33:49 crc kubenswrapper[4689]: I1210 12:33:49.243395 4689 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-hrjxw" Dec 10 12:33:49 crc kubenswrapper[4689]: I1210 12:33:49.275164 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4vl6l\" (UniqueName: \"kubernetes.io/projected/b3049c9c-fbe7-4bc9-9fbf-7365b98671be-kube-api-access-4vl6l\") pod \"b3049c9c-fbe7-4bc9-9fbf-7365b98671be\" (UID: \"b3049c9c-fbe7-4bc9-9fbf-7365b98671be\") " Dec 10 12:33:49 crc kubenswrapper[4689]: I1210 12:33:49.275348 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b3049c9c-fbe7-4bc9-9fbf-7365b98671be-config\") pod \"b3049c9c-fbe7-4bc9-9fbf-7365b98671be\" (UID: \"b3049c9c-fbe7-4bc9-9fbf-7365b98671be\") " Dec 10 12:33:49 crc kubenswrapper[4689]: I1210 12:33:49.276223 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b3049c9c-fbe7-4bc9-9fbf-7365b98671be-dns-svc\") pod \"b3049c9c-fbe7-4bc9-9fbf-7365b98671be\" (UID: \"b3049c9c-fbe7-4bc9-9fbf-7365b98671be\") " Dec 10 12:33:49 crc kubenswrapper[4689]: I1210 12:33:49.289375 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b3049c9c-fbe7-4bc9-9fbf-7365b98671be-kube-api-access-4vl6l" (OuterVolumeSpecName: "kube-api-access-4vl6l") pod "b3049c9c-fbe7-4bc9-9fbf-7365b98671be" (UID: "b3049c9c-fbe7-4bc9-9fbf-7365b98671be"). InnerVolumeSpecName "kube-api-access-4vl6l". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 12:33:49 crc kubenswrapper[4689]: I1210 12:33:49.355087 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b3049c9c-fbe7-4bc9-9fbf-7365b98671be-config" (OuterVolumeSpecName: "config") pod "b3049c9c-fbe7-4bc9-9fbf-7365b98671be" (UID: "b3049c9c-fbe7-4bc9-9fbf-7365b98671be"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 12:33:49 crc kubenswrapper[4689]: I1210 12:33:49.355587 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b3049c9c-fbe7-4bc9-9fbf-7365b98671be-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "b3049c9c-fbe7-4bc9-9fbf-7365b98671be" (UID: "b3049c9c-fbe7-4bc9-9fbf-7365b98671be"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 12:33:49 crc kubenswrapper[4689]: I1210 12:33:49.378468 4689 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4vl6l\" (UniqueName: \"kubernetes.io/projected/b3049c9c-fbe7-4bc9-9fbf-7365b98671be-kube-api-access-4vl6l\") on node \"crc\" DevicePath \"\"" Dec 10 12:33:49 crc kubenswrapper[4689]: I1210 12:33:49.378510 4689 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b3049c9c-fbe7-4bc9-9fbf-7365b98671be-config\") on node \"crc\" DevicePath \"\"" Dec 10 12:33:49 crc kubenswrapper[4689]: I1210 12:33:49.378526 4689 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b3049c9c-fbe7-4bc9-9fbf-7365b98671be-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 10 12:33:49 crc kubenswrapper[4689]: I1210 12:33:49.881730 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"9c534934-f8a1-4029-8135-b190d180128a","Type":"ContainerStarted","Data":"4892b5d1970e19ffa86976670746cb56978463caa47487ed059180fc8f55bf07"} Dec 10 12:33:49 crc kubenswrapper[4689]: I1210 12:33:49.883238 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-jpztp" event={"ID":"58cb894b-f745-4d93-8925-193c6ff871a6","Type":"ContainerStarted","Data":"b0e3fb86148bb00100a5a85ca73760487155894b878cde974b675456605c017d"} Dec 10 12:33:49 crc kubenswrapper[4689]: I1210 12:33:49.883401 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-jpztp" Dec 10 12:33:49 crc kubenswrapper[4689]: I1210 12:33:49.913523 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-29j5j" event={"ID":"a1966558-be4f-4607-a746-739845ac6c46","Type":"ContainerStarted","Data":"6a9a97af4f9a39a67339df0d38ad2e1c005c377b35223d0d88534636de9f015f"} Dec 10 12:33:49 crc kubenswrapper[4689]: I1210 12:33:49.918068 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-jpztp" podStartSLOduration=7.225132872 podStartE2EDuration="14.918039765s" podCreationTimestamp="2025-12-10 12:33:35 +0000 UTC" firstStartedPulling="2025-12-10 12:33:40.66533969 +0000 UTC m=+1088.453420828" lastFinishedPulling="2025-12-10 12:33:48.358246563 +0000 UTC m=+1096.146327721" observedRunningTime="2025-12-10 12:33:49.908013486 +0000 UTC m=+1097.696094634" watchObservedRunningTime="2025-12-10 12:33:49.918039765 +0000 UTC m=+1097.706120913" Dec 10 12:33:49 crc kubenswrapper[4689]: I1210 12:33:49.926102 4689 generic.go:334] "Generic (PLEG): container finished" podID="b3049c9c-fbe7-4bc9-9fbf-7365b98671be" containerID="aff3d932bfd2a0131c23ff364b6d0bf6bbca178033cd6724b96b5b501d41bc5f" exitCode=0 Dec 10 12:33:49 crc kubenswrapper[4689]: I1210 12:33:49.926258 4689 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-hrjxw" Dec 10 12:33:49 crc kubenswrapper[4689]: I1210 12:33:49.926706 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-hrjxw" event={"ID":"b3049c9c-fbe7-4bc9-9fbf-7365b98671be","Type":"ContainerDied","Data":"aff3d932bfd2a0131c23ff364b6d0bf6bbca178033cd6724b96b5b501d41bc5f"} Dec 10 12:33:49 crc kubenswrapper[4689]: I1210 12:33:49.926758 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-hrjxw" event={"ID":"b3049c9c-fbe7-4bc9-9fbf-7365b98671be","Type":"ContainerDied","Data":"37b1b5ef19e27fe4f68baf9be69a2c0218bcbeed7b34332c6f67d37b14cf0abe"} Dec 10 12:33:49 crc kubenswrapper[4689]: I1210 12:33:49.926776 4689 scope.go:117] "RemoveContainer" containerID="aff3d932bfd2a0131c23ff364b6d0bf6bbca178033cd6724b96b5b501d41bc5f" Dec 10 12:33:49 crc kubenswrapper[4689]: I1210 12:33:49.928347 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"b45272bc-38b8-4fa2-8710-57da25792b73","Type":"ContainerStarted","Data":"dafae15abcc69d9d05c30417a2fc9025c74b56325238c67067b9e388e2efa6a9"} Dec 10 12:33:49 crc kubenswrapper[4689]: I1210 12:33:49.928719 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Dec 10 12:33:49 crc kubenswrapper[4689]: I1210 12:33:49.933280 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"e5d9caa2-f209-4a7f-a0d8-353aa111c264","Type":"ContainerStarted","Data":"559c9ab0de1f92aad74453567ec761b58106ccfec0d02e0af5204d8e559534fc"} Dec 10 12:33:49 crc kubenswrapper[4689]: I1210 12:33:49.933397 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/memcached-0" Dec 10 12:33:49 crc kubenswrapper[4689]: I1210 12:33:49.941564 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"c18b8eee-7b2e-494c-b4e4-7896aa80a1ed","Type":"ContainerStarted","Data":"20ed1d4ca6b4f1681c13cd23ca9aa9d6c0280529cd425d912fe90055e48a684e"} Dec 10 12:33:49 crc kubenswrapper[4689]: I1210 12:33:49.965640 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=10.486835272 podStartE2EDuration="18.96562115s" podCreationTimestamp="2025-12-10 12:33:31 +0000 UTC" firstStartedPulling="2025-12-10 12:33:39.871557988 +0000 UTC m=+1087.659639126" lastFinishedPulling="2025-12-10 12:33:48.350343846 +0000 UTC m=+1096.138425004" observedRunningTime="2025-12-10 12:33:49.95401557 +0000 UTC m=+1097.742096708" watchObservedRunningTime="2025-12-10 12:33:49.96562115 +0000 UTC m=+1097.753702278" Dec 10 12:33:49 crc kubenswrapper[4689]: I1210 12:33:49.965758 4689 scope.go:117] "RemoveContainer" containerID="e81772b402a692b6536f19e2b2babc0db41511a0b096420045da4fbe721bb453" Dec 10 12:33:49 crc kubenswrapper[4689]: I1210 12:33:49.981809 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/memcached-0" podStartSLOduration=12.98683935 podStartE2EDuration="20.981788372s" podCreationTimestamp="2025-12-10 12:33:29 +0000 UTC" firstStartedPulling="2025-12-10 12:33:40.297219225 +0000 UTC m=+1088.085300363" lastFinishedPulling="2025-12-10 12:33:48.292168247 +0000 UTC m=+1096.080249385" observedRunningTime="2025-12-10 12:33:49.974390988 +0000 UTC m=+1097.762472136" watchObservedRunningTime="2025-12-10 12:33:49.981788372 +0000 UTC m=+1097.769869500" Dec 10 12:33:49 crc 
kubenswrapper[4689]: I1210 12:33:49.997524 4689 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-hrjxw"] Dec 10 12:33:50 crc kubenswrapper[4689]: I1210 12:33:50.001705 4689 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-hrjxw"] Dec 10 12:33:50 crc kubenswrapper[4689]: I1210 12:33:50.002895 4689 scope.go:117] "RemoveContainer" containerID="aff3d932bfd2a0131c23ff364b6d0bf6bbca178033cd6724b96b5b501d41bc5f" Dec 10 12:33:50 crc kubenswrapper[4689]: E1210 12:33:50.004188 4689 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"aff3d932bfd2a0131c23ff364b6d0bf6bbca178033cd6724b96b5b501d41bc5f\": container with ID starting with aff3d932bfd2a0131c23ff364b6d0bf6bbca178033cd6724b96b5b501d41bc5f not found: ID does not exist" containerID="aff3d932bfd2a0131c23ff364b6d0bf6bbca178033cd6724b96b5b501d41bc5f" Dec 10 12:33:50 crc kubenswrapper[4689]: I1210 12:33:50.004232 4689 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aff3d932bfd2a0131c23ff364b6d0bf6bbca178033cd6724b96b5b501d41bc5f"} err="failed to get container status \"aff3d932bfd2a0131c23ff364b6d0bf6bbca178033cd6724b96b5b501d41bc5f\": rpc error: code = NotFound desc = could not find container \"aff3d932bfd2a0131c23ff364b6d0bf6bbca178033cd6724b96b5b501d41bc5f\": container with ID starting with aff3d932bfd2a0131c23ff364b6d0bf6bbca178033cd6724b96b5b501d41bc5f not found: ID does not exist" Dec 10 12:33:50 crc kubenswrapper[4689]: I1210 12:33:50.004260 4689 scope.go:117] "RemoveContainer" containerID="e81772b402a692b6536f19e2b2babc0db41511a0b096420045da4fbe721bb453" Dec 10 12:33:50 crc kubenswrapper[4689]: E1210 12:33:50.005861 4689 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e81772b402a692b6536f19e2b2babc0db41511a0b096420045da4fbe721bb453\": container with ID starting with e81772b402a692b6536f19e2b2babc0db41511a0b096420045da4fbe721bb453 not found: ID does not exist" containerID="e81772b402a692b6536f19e2b2babc0db41511a0b096420045da4fbe721bb453" Dec 10 12:33:50 crc kubenswrapper[4689]: I1210 12:33:50.005898 4689 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e81772b402a692b6536f19e2b2babc0db41511a0b096420045da4fbe721bb453"} err="failed to get container status \"e81772b402a692b6536f19e2b2babc0db41511a0b096420045da4fbe721bb453\": rpc error: code = NotFound desc = could not find container \"e81772b402a692b6536f19e2b2babc0db41511a0b096420045da4fbe721bb453\": container with ID starting with e81772b402a692b6536f19e2b2babc0db41511a0b096420045da4fbe721bb453 not found: ID does not exist" Dec 10 12:33:50 crc kubenswrapper[4689]: I1210 12:33:50.507674 4689 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b3049c9c-fbe7-4bc9-9fbf-7365b98671be" path="/var/lib/kubelet/pods/b3049c9c-fbe7-4bc9-9fbf-7365b98671be/volumes" Dec 10 12:33:50 crc kubenswrapper[4689]: I1210 12:33:50.960202 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"33bee83d-eb0f-4e5e-9617-f8102008436a","Type":"ContainerStarted","Data":"40884be95475317286239419fb7c4d5aacd4a03014eda7b158d3dc2103efb907"} Dec 10 12:33:50 crc kubenswrapper[4689]: I1210 12:33:50.965614 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" 
event={"ID":"0b5ff1d1-6330-4077-9b2d-dbec926d9e8c","Type":"ContainerStarted","Data":"1a805d013077dbeecc8200f54ac11b45f3ce841f3bdb61918011111fe65869ee"} Dec 10 12:33:50 crc kubenswrapper[4689]: I1210 12:33:50.969368 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-29j5j" event={"ID":"a1966558-be4f-4607-a746-739845ac6c46","Type":"ContainerDied","Data":"6a9a97af4f9a39a67339df0d38ad2e1c005c377b35223d0d88534636de9f015f"} Dec 10 12:33:50 crc kubenswrapper[4689]: I1210 12:33:50.969224 4689 generic.go:334] "Generic (PLEG): container finished" podID="a1966558-be4f-4607-a746-739845ac6c46" containerID="6a9a97af4f9a39a67339df0d38ad2e1c005c377b35223d0d88534636de9f015f" exitCode=0 Dec 10 12:33:51 crc kubenswrapper[4689]: I1210 12:33:51.981070 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-29j5j" event={"ID":"a1966558-be4f-4607-a746-739845ac6c46","Type":"ContainerStarted","Data":"83d5c86cb3979eda775dc3fe356d828ec3169646f50677fbc98d397b0d608350"} Dec 10 12:33:52 crc kubenswrapper[4689]: I1210 12:33:52.991204 4689 generic.go:334] "Generic (PLEG): container finished" podID="5de970c2-b559-4b8a-86f2-85b07c2292b1" containerID="623723edceced098ef9cd35003b98e4716a6a43bf611b5d805d8168ace7ac205" exitCode=0 Dec 10 12:33:52 crc kubenswrapper[4689]: I1210 12:33:52.991272 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"5de970c2-b559-4b8a-86f2-85b07c2292b1","Type":"ContainerDied","Data":"623723edceced098ef9cd35003b98e4716a6a43bf611b5d805d8168ace7ac205"} Dec 10 12:33:52 crc kubenswrapper[4689]: I1210 12:33:52.995108 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"c18b8eee-7b2e-494c-b4e4-7896aa80a1ed","Type":"ContainerStarted","Data":"20ed4fa9edfc06422a569f620ab617c681cbca5ab27661c1cd36754e9e5dcdbf"} Dec 10 12:33:52 crc kubenswrapper[4689]: I1210 12:33:52.997869 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"9c534934-f8a1-4029-8135-b190d180128a","Type":"ContainerStarted","Data":"7440c95c53902980387a103a8d3c170494d4237b5a17b4b61be64d8c04228a0b"} Dec 10 12:33:53 crc kubenswrapper[4689]: I1210 12:33:53.001941 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-29j5j" event={"ID":"a1966558-be4f-4607-a746-739845ac6c46","Type":"ContainerStarted","Data":"4121d7d85fc9839d2ace88597029e4022bb6756170568b9aa05e78802c2b7759"} Dec 10 12:33:53 crc kubenswrapper[4689]: I1210 12:33:53.002007 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-29j5j" Dec 10 12:33:53 crc kubenswrapper[4689]: I1210 12:33:53.002036 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-29j5j" Dec 10 12:33:53 crc kubenswrapper[4689]: I1210 12:33:53.003698 4689 generic.go:334] "Generic (PLEG): container finished" podID="49777225-4829-4cb0-bdd3-3e29ee4f0518" containerID="1c444796ed36ae3e2f5885e5b89dc4b2e916374543f3c2e3a111fcc63b4137c5" exitCode=0 Dec 10 12:33:53 crc kubenswrapper[4689]: I1210 12:33:53.003735 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"49777225-4829-4cb0-bdd3-3e29ee4f0518","Type":"ContainerDied","Data":"1c444796ed36ae3e2f5885e5b89dc4b2e916374543f3c2e3a111fcc63b4137c5"} Dec 10 12:33:53 crc kubenswrapper[4689]: I1210 12:33:53.053503 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/ovn-controller-ovs-29j5j" podStartSLOduration=10.682626089 podStartE2EDuration="18.053348771s" podCreationTimestamp="2025-12-10 12:33:35 +0000 UTC" firstStartedPulling="2025-12-10 12:33:40.948379336 +0000 UTC m=+1088.736460474" lastFinishedPulling="2025-12-10 12:33:48.319102008 +0000 UTC m=+1096.107183156" observedRunningTime="2025-12-10 12:33:53.043894376 +0000 UTC m=+1100.831975574" watchObservedRunningTime="2025-12-10 12:33:53.053348771 +0000 UTC m=+1100.841429919" Dec 10 12:33:53 crc kubenswrapper[4689]: I1210 12:33:53.096378 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-0" podStartSLOduration=6.218859521 podStartE2EDuration="18.096356792s" podCreationTimestamp="2025-12-10 12:33:35 +0000 UTC" firstStartedPulling="2025-12-10 12:33:40.573077003 +0000 UTC m=+1088.361158131" lastFinishedPulling="2025-12-10 12:33:52.450574264 +0000 UTC m=+1100.238655402" observedRunningTime="2025-12-10 12:33:53.076702693 +0000 UTC m=+1100.864783841" watchObservedRunningTime="2025-12-10 12:33:53.096356792 +0000 UTC m=+1100.884437940" Dec 10 12:33:53 crc kubenswrapper[4689]: I1210 12:33:53.122141 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-0" podStartSLOduration=4.329116695 podStartE2EDuration="16.122117003s" podCreationTimestamp="2025-12-10 12:33:37 +0000 UTC" firstStartedPulling="2025-12-10 12:33:40.66490885 +0000 UTC m=+1088.452989988" lastFinishedPulling="2025-12-10 12:33:52.457909158 +0000 UTC m=+1100.245990296" observedRunningTime="2025-12-10 12:33:53.115286363 +0000 UTC m=+1100.903367521" watchObservedRunningTime="2025-12-10 12:33:53.122117003 +0000 UTC m=+1100.910198171" Dec 10 12:33:54 crc kubenswrapper[4689]: I1210 12:33:54.015329 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"49777225-4829-4cb0-bdd3-3e29ee4f0518","Type":"ContainerStarted","Data":"859446e7e103708491f72d7fe5ff9360ede56eb3556462556c22d3680a4461ca"} Dec 10 12:33:54 crc kubenswrapper[4689]: I1210 12:33:54.021775 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"5de970c2-b559-4b8a-86f2-85b07c2292b1","Type":"ContainerStarted","Data":"0fe5a0b3e9aad601eb48fe9f5d5c1d769edb3558cc148a41a2d468af62b5d33b"} Dec 10 12:33:54 crc kubenswrapper[4689]: I1210 12:33:54.028950 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-0" Dec 10 12:33:54 crc kubenswrapper[4689]: I1210 12:33:54.029139 4689 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-0" Dec 10 12:33:54 crc kubenswrapper[4689]: I1210 12:33:54.061270 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-cell1-galera-0" podStartSLOduration=18.339130404 podStartE2EDuration="26.061246624s" podCreationTimestamp="2025-12-10 12:33:28 +0000 UTC" firstStartedPulling="2025-12-10 12:33:40.571533635 +0000 UTC m=+1088.359614773" lastFinishedPulling="2025-12-10 12:33:48.293649855 +0000 UTC m=+1096.081730993" observedRunningTime="2025-12-10 12:33:54.047476661 +0000 UTC m=+1101.835557809" watchObservedRunningTime="2025-12-10 12:33:54.061246624 +0000 UTC m=+1101.849327782" Dec 10 12:33:54 crc kubenswrapper[4689]: I1210 12:33:54.084845 4689 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-0" Dec 10 12:33:54 crc kubenswrapper[4689]: I1210 12:33:54.086670 4689 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-galera-0" podStartSLOduration=20.401972219 podStartE2EDuration="28.086635346s" podCreationTimestamp="2025-12-10 12:33:26 +0000 UTC" firstStartedPulling="2025-12-10 12:33:40.665660348 +0000 UTC m=+1088.453741486" lastFinishedPulling="2025-12-10 12:33:48.350323475 +0000 UTC m=+1096.138404613" observedRunningTime="2025-12-10 12:33:54.069209941 +0000 UTC m=+1101.857291089" watchObservedRunningTime="2025-12-10 12:33:54.086635346 +0000 UTC m=+1101.874716514" Dec 10 12:33:54 crc kubenswrapper[4689]: I1210 12:33:54.782129 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/memcached-0" Dec 10 12:33:55 crc kubenswrapper[4689]: I1210 12:33:55.075447 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-0" Dec 10 12:33:55 crc kubenswrapper[4689]: I1210 12:33:55.212822 4689 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-0" Dec 10 12:33:55 crc kubenswrapper[4689]: I1210 12:33:55.278307 4689 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-0" Dec 10 12:33:55 crc kubenswrapper[4689]: I1210 12:33:55.318771 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7f896c8c65-p25l6"] Dec 10 12:33:55 crc kubenswrapper[4689]: E1210 12:33:55.319121 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b3049c9c-fbe7-4bc9-9fbf-7365b98671be" containerName="dnsmasq-dns" Dec 10 12:33:55 crc kubenswrapper[4689]: I1210 12:33:55.319143 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="b3049c9c-fbe7-4bc9-9fbf-7365b98671be" containerName="dnsmasq-dns" Dec 10 12:33:55 crc kubenswrapper[4689]: E1210 12:33:55.319162 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b3049c9c-fbe7-4bc9-9fbf-7365b98671be" containerName="init" Dec 10 12:33:55 crc kubenswrapper[4689]: I1210 12:33:55.319170 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="b3049c9c-fbe7-4bc9-9fbf-7365b98671be" containerName="init" Dec 10 12:33:55 crc kubenswrapper[4689]: I1210 12:33:55.319335 4689 memory_manager.go:354] "RemoveStaleState removing state" podUID="b3049c9c-fbe7-4bc9-9fbf-7365b98671be" containerName="dnsmasq-dns" Dec 10 12:33:55 crc kubenswrapper[4689]: I1210 12:33:55.320184 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7f896c8c65-p25l6" Dec 10 12:33:55 crc kubenswrapper[4689]: I1210 12:33:55.325600 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-sb" Dec 10 12:33:55 crc kubenswrapper[4689]: I1210 12:33:55.333488 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7f896c8c65-p25l6"] Dec 10 12:33:55 crc kubenswrapper[4689]: I1210 12:33:55.376143 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-metrics-q4rdb"] Dec 10 12:33:55 crc kubenswrapper[4689]: I1210 12:33:55.377410 4689 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-metrics-q4rdb" Dec 10 12:33:55 crc kubenswrapper[4689]: I1210 12:33:55.380719 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-metrics-config" Dec 10 12:33:55 crc kubenswrapper[4689]: I1210 12:33:55.386192 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-q4rdb"] Dec 10 12:33:55 crc kubenswrapper[4689]: I1210 12:33:55.387726 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fac27dde-5617-46e0-a085-0c9b7e97237e-dns-svc\") pod \"dnsmasq-dns-7f896c8c65-p25l6\" (UID: \"fac27dde-5617-46e0-a085-0c9b7e97237e\") " pod="openstack/dnsmasq-dns-7f896c8c65-p25l6" Dec 10 12:33:55 crc kubenswrapper[4689]: I1210 12:33:55.387831 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6c24x\" (UniqueName: \"kubernetes.io/projected/fac27dde-5617-46e0-a085-0c9b7e97237e-kube-api-access-6c24x\") pod \"dnsmasq-dns-7f896c8c65-p25l6\" (UID: \"fac27dde-5617-46e0-a085-0c9b7e97237e\") " pod="openstack/dnsmasq-dns-7f896c8c65-p25l6" Dec 10 12:33:55 crc kubenswrapper[4689]: I1210 12:33:55.388006 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fac27dde-5617-46e0-a085-0c9b7e97237e-ovsdbserver-sb\") pod \"dnsmasq-dns-7f896c8c65-p25l6\" (UID: \"fac27dde-5617-46e0-a085-0c9b7e97237e\") " pod="openstack/dnsmasq-dns-7f896c8c65-p25l6" Dec 10 12:33:55 crc kubenswrapper[4689]: I1210 12:33:55.388056 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fac27dde-5617-46e0-a085-0c9b7e97237e-config\") pod \"dnsmasq-dns-7f896c8c65-p25l6\" (UID: \"fac27dde-5617-46e0-a085-0c9b7e97237e\") " pod="openstack/dnsmasq-dns-7f896c8c65-p25l6" Dec 10 12:33:55 crc kubenswrapper[4689]: I1210 12:33:55.489279 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xpknj\" (UniqueName: \"kubernetes.io/projected/ed1dfb51-596d-4779-9d07-37b566b30adf-kube-api-access-xpknj\") pod \"ovn-controller-metrics-q4rdb\" (UID: \"ed1dfb51-596d-4779-9d07-37b566b30adf\") " pod="openstack/ovn-controller-metrics-q4rdb" Dec 10 12:33:55 crc kubenswrapper[4689]: I1210 12:33:55.489356 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ed1dfb51-596d-4779-9d07-37b566b30adf-combined-ca-bundle\") pod \"ovn-controller-metrics-q4rdb\" (UID: \"ed1dfb51-596d-4779-9d07-37b566b30adf\") " pod="openstack/ovn-controller-metrics-q4rdb" Dec 10 12:33:55 crc kubenswrapper[4689]: I1210 12:33:55.489422 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6c24x\" (UniqueName: \"kubernetes.io/projected/fac27dde-5617-46e0-a085-0c9b7e97237e-kube-api-access-6c24x\") pod \"dnsmasq-dns-7f896c8c65-p25l6\" (UID: \"fac27dde-5617-46e0-a085-0c9b7e97237e\") " pod="openstack/dnsmasq-dns-7f896c8c65-p25l6" Dec 10 12:33:55 crc kubenswrapper[4689]: I1210 12:33:55.489448 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/ed1dfb51-596d-4779-9d07-37b566b30adf-ovn-rundir\") pod 
\"ovn-controller-metrics-q4rdb\" (UID: \"ed1dfb51-596d-4779-9d07-37b566b30adf\") " pod="openstack/ovn-controller-metrics-q4rdb" Dec 10 12:33:55 crc kubenswrapper[4689]: I1210 12:33:55.489471 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ed1dfb51-596d-4779-9d07-37b566b30adf-config\") pod \"ovn-controller-metrics-q4rdb\" (UID: \"ed1dfb51-596d-4779-9d07-37b566b30adf\") " pod="openstack/ovn-controller-metrics-q4rdb" Dec 10 12:33:55 crc kubenswrapper[4689]: I1210 12:33:55.489487 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/ed1dfb51-596d-4779-9d07-37b566b30adf-ovs-rundir\") pod \"ovn-controller-metrics-q4rdb\" (UID: \"ed1dfb51-596d-4779-9d07-37b566b30adf\") " pod="openstack/ovn-controller-metrics-q4rdb" Dec 10 12:33:55 crc kubenswrapper[4689]: I1210 12:33:55.489509 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fac27dde-5617-46e0-a085-0c9b7e97237e-ovsdbserver-sb\") pod \"dnsmasq-dns-7f896c8c65-p25l6\" (UID: \"fac27dde-5617-46e0-a085-0c9b7e97237e\") " pod="openstack/dnsmasq-dns-7f896c8c65-p25l6" Dec 10 12:33:55 crc kubenswrapper[4689]: I1210 12:33:55.489775 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fac27dde-5617-46e0-a085-0c9b7e97237e-config\") pod \"dnsmasq-dns-7f896c8c65-p25l6\" (UID: \"fac27dde-5617-46e0-a085-0c9b7e97237e\") " pod="openstack/dnsmasq-dns-7f896c8c65-p25l6" Dec 10 12:33:55 crc kubenswrapper[4689]: I1210 12:33:55.489858 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/ed1dfb51-596d-4779-9d07-37b566b30adf-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-q4rdb\" (UID: \"ed1dfb51-596d-4779-9d07-37b566b30adf\") " pod="openstack/ovn-controller-metrics-q4rdb" Dec 10 12:33:55 crc kubenswrapper[4689]: I1210 12:33:55.489955 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fac27dde-5617-46e0-a085-0c9b7e97237e-dns-svc\") pod \"dnsmasq-dns-7f896c8c65-p25l6\" (UID: \"fac27dde-5617-46e0-a085-0c9b7e97237e\") " pod="openstack/dnsmasq-dns-7f896c8c65-p25l6" Dec 10 12:33:55 crc kubenswrapper[4689]: I1210 12:33:55.490366 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fac27dde-5617-46e0-a085-0c9b7e97237e-ovsdbserver-sb\") pod \"dnsmasq-dns-7f896c8c65-p25l6\" (UID: \"fac27dde-5617-46e0-a085-0c9b7e97237e\") " pod="openstack/dnsmasq-dns-7f896c8c65-p25l6" Dec 10 12:33:55 crc kubenswrapper[4689]: I1210 12:33:55.490938 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fac27dde-5617-46e0-a085-0c9b7e97237e-dns-svc\") pod \"dnsmasq-dns-7f896c8c65-p25l6\" (UID: \"fac27dde-5617-46e0-a085-0c9b7e97237e\") " pod="openstack/dnsmasq-dns-7f896c8c65-p25l6" Dec 10 12:33:55 crc kubenswrapper[4689]: I1210 12:33:55.491018 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fac27dde-5617-46e0-a085-0c9b7e97237e-config\") pod \"dnsmasq-dns-7f896c8c65-p25l6\" (UID: \"fac27dde-5617-46e0-a085-0c9b7e97237e\") " 
pod="openstack/dnsmasq-dns-7f896c8c65-p25l6" Dec 10 12:33:55 crc kubenswrapper[4689]: I1210 12:33:55.512545 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6c24x\" (UniqueName: \"kubernetes.io/projected/fac27dde-5617-46e0-a085-0c9b7e97237e-kube-api-access-6c24x\") pod \"dnsmasq-dns-7f896c8c65-p25l6\" (UID: \"fac27dde-5617-46e0-a085-0c9b7e97237e\") " pod="openstack/dnsmasq-dns-7f896c8c65-p25l6" Dec 10 12:33:55 crc kubenswrapper[4689]: I1210 12:33:55.592085 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ed1dfb51-596d-4779-9d07-37b566b30adf-combined-ca-bundle\") pod \"ovn-controller-metrics-q4rdb\" (UID: \"ed1dfb51-596d-4779-9d07-37b566b30adf\") " pod="openstack/ovn-controller-metrics-q4rdb" Dec 10 12:33:55 crc kubenswrapper[4689]: I1210 12:33:55.592310 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/ed1dfb51-596d-4779-9d07-37b566b30adf-ovn-rundir\") pod \"ovn-controller-metrics-q4rdb\" (UID: \"ed1dfb51-596d-4779-9d07-37b566b30adf\") " pod="openstack/ovn-controller-metrics-q4rdb" Dec 10 12:33:55 crc kubenswrapper[4689]: I1210 12:33:55.592357 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ed1dfb51-596d-4779-9d07-37b566b30adf-config\") pod \"ovn-controller-metrics-q4rdb\" (UID: \"ed1dfb51-596d-4779-9d07-37b566b30adf\") " pod="openstack/ovn-controller-metrics-q4rdb" Dec 10 12:33:55 crc kubenswrapper[4689]: I1210 12:33:55.592382 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/ed1dfb51-596d-4779-9d07-37b566b30adf-ovs-rundir\") pod \"ovn-controller-metrics-q4rdb\" (UID: \"ed1dfb51-596d-4779-9d07-37b566b30adf\") " pod="openstack/ovn-controller-metrics-q4rdb" Dec 10 12:33:55 crc kubenswrapper[4689]: I1210 12:33:55.592427 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/ed1dfb51-596d-4779-9d07-37b566b30adf-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-q4rdb\" (UID: \"ed1dfb51-596d-4779-9d07-37b566b30adf\") " pod="openstack/ovn-controller-metrics-q4rdb" Dec 10 12:33:55 crc kubenswrapper[4689]: I1210 12:33:55.592521 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xpknj\" (UniqueName: \"kubernetes.io/projected/ed1dfb51-596d-4779-9d07-37b566b30adf-kube-api-access-xpknj\") pod \"ovn-controller-metrics-q4rdb\" (UID: \"ed1dfb51-596d-4779-9d07-37b566b30adf\") " pod="openstack/ovn-controller-metrics-q4rdb" Dec 10 12:33:55 crc kubenswrapper[4689]: I1210 12:33:55.592642 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/ed1dfb51-596d-4779-9d07-37b566b30adf-ovn-rundir\") pod \"ovn-controller-metrics-q4rdb\" (UID: \"ed1dfb51-596d-4779-9d07-37b566b30adf\") " pod="openstack/ovn-controller-metrics-q4rdb" Dec 10 12:33:55 crc kubenswrapper[4689]: I1210 12:33:55.593009 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/ed1dfb51-596d-4779-9d07-37b566b30adf-ovs-rundir\") pod \"ovn-controller-metrics-q4rdb\" (UID: \"ed1dfb51-596d-4779-9d07-37b566b30adf\") " pod="openstack/ovn-controller-metrics-q4rdb" Dec 10 12:33:55 crc 
kubenswrapper[4689]: I1210 12:33:55.593446 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ed1dfb51-596d-4779-9d07-37b566b30adf-config\") pod \"ovn-controller-metrics-q4rdb\" (UID: \"ed1dfb51-596d-4779-9d07-37b566b30adf\") " pod="openstack/ovn-controller-metrics-q4rdb" Dec 10 12:33:55 crc kubenswrapper[4689]: I1210 12:33:55.595224 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/ed1dfb51-596d-4779-9d07-37b566b30adf-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-q4rdb\" (UID: \"ed1dfb51-596d-4779-9d07-37b566b30adf\") " pod="openstack/ovn-controller-metrics-q4rdb" Dec 10 12:33:55 crc kubenswrapper[4689]: I1210 12:33:55.595524 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ed1dfb51-596d-4779-9d07-37b566b30adf-combined-ca-bundle\") pod \"ovn-controller-metrics-q4rdb\" (UID: \"ed1dfb51-596d-4779-9d07-37b566b30adf\") " pod="openstack/ovn-controller-metrics-q4rdb" Dec 10 12:33:55 crc kubenswrapper[4689]: I1210 12:33:55.616645 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xpknj\" (UniqueName: \"kubernetes.io/projected/ed1dfb51-596d-4779-9d07-37b566b30adf-kube-api-access-xpknj\") pod \"ovn-controller-metrics-q4rdb\" (UID: \"ed1dfb51-596d-4779-9d07-37b566b30adf\") " pod="openstack/ovn-controller-metrics-q4rdb" Dec 10 12:33:55 crc kubenswrapper[4689]: I1210 12:33:55.642083 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7f896c8c65-p25l6" Dec 10 12:33:55 crc kubenswrapper[4689]: I1210 12:33:55.683855 4689 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7f896c8c65-p25l6"] Dec 10 12:33:55 crc kubenswrapper[4689]: I1210 12:33:55.691566 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-q4rdb" Dec 10 12:33:55 crc kubenswrapper[4689]: I1210 12:33:55.719065 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-2jtj8"] Dec 10 12:33:55 crc kubenswrapper[4689]: I1210 12:33:55.720585 4689 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-2jtj8" Dec 10 12:33:55 crc kubenswrapper[4689]: I1210 12:33:55.722722 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-nb" Dec 10 12:33:55 crc kubenswrapper[4689]: I1210 12:33:55.727349 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-2jtj8"] Dec 10 12:33:55 crc kubenswrapper[4689]: I1210 12:33:55.802145 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f589775b-f52a-4aaa-b009-97dd8e324eda-dns-svc\") pod \"dnsmasq-dns-86db49b7ff-2jtj8\" (UID: \"f589775b-f52a-4aaa-b009-97dd8e324eda\") " pod="openstack/dnsmasq-dns-86db49b7ff-2jtj8" Dec 10 12:33:55 crc kubenswrapper[4689]: I1210 12:33:55.802214 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f589775b-f52a-4aaa-b009-97dd8e324eda-config\") pod \"dnsmasq-dns-86db49b7ff-2jtj8\" (UID: \"f589775b-f52a-4aaa-b009-97dd8e324eda\") " pod="openstack/dnsmasq-dns-86db49b7ff-2jtj8" Dec 10 12:33:55 crc kubenswrapper[4689]: I1210 12:33:55.802256 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f589775b-f52a-4aaa-b009-97dd8e324eda-ovsdbserver-sb\") pod \"dnsmasq-dns-86db49b7ff-2jtj8\" (UID: \"f589775b-f52a-4aaa-b009-97dd8e324eda\") " pod="openstack/dnsmasq-dns-86db49b7ff-2jtj8" Dec 10 12:33:55 crc kubenswrapper[4689]: I1210 12:33:55.802294 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xcrwn\" (UniqueName: \"kubernetes.io/projected/f589775b-f52a-4aaa-b009-97dd8e324eda-kube-api-access-xcrwn\") pod \"dnsmasq-dns-86db49b7ff-2jtj8\" (UID: \"f589775b-f52a-4aaa-b009-97dd8e324eda\") " pod="openstack/dnsmasq-dns-86db49b7ff-2jtj8" Dec 10 12:33:55 crc kubenswrapper[4689]: I1210 12:33:55.802315 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f589775b-f52a-4aaa-b009-97dd8e324eda-ovsdbserver-nb\") pod \"dnsmasq-dns-86db49b7ff-2jtj8\" (UID: \"f589775b-f52a-4aaa-b009-97dd8e324eda\") " pod="openstack/dnsmasq-dns-86db49b7ff-2jtj8" Dec 10 12:33:55 crc kubenswrapper[4689]: I1210 12:33:55.903692 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f589775b-f52a-4aaa-b009-97dd8e324eda-config\") pod \"dnsmasq-dns-86db49b7ff-2jtj8\" (UID: \"f589775b-f52a-4aaa-b009-97dd8e324eda\") " pod="openstack/dnsmasq-dns-86db49b7ff-2jtj8" Dec 10 12:33:55 crc kubenswrapper[4689]: I1210 12:33:55.903787 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f589775b-f52a-4aaa-b009-97dd8e324eda-ovsdbserver-sb\") pod \"dnsmasq-dns-86db49b7ff-2jtj8\" (UID: \"f589775b-f52a-4aaa-b009-97dd8e324eda\") " pod="openstack/dnsmasq-dns-86db49b7ff-2jtj8" Dec 10 12:33:55 crc kubenswrapper[4689]: I1210 12:33:55.903851 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xcrwn\" (UniqueName: \"kubernetes.io/projected/f589775b-f52a-4aaa-b009-97dd8e324eda-kube-api-access-xcrwn\") pod \"dnsmasq-dns-86db49b7ff-2jtj8\" (UID: \"f589775b-f52a-4aaa-b009-97dd8e324eda\") " 
pod="openstack/dnsmasq-dns-86db49b7ff-2jtj8" Dec 10 12:33:55 crc kubenswrapper[4689]: I1210 12:33:55.903887 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f589775b-f52a-4aaa-b009-97dd8e324eda-ovsdbserver-nb\") pod \"dnsmasq-dns-86db49b7ff-2jtj8\" (UID: \"f589775b-f52a-4aaa-b009-97dd8e324eda\") " pod="openstack/dnsmasq-dns-86db49b7ff-2jtj8" Dec 10 12:33:55 crc kubenswrapper[4689]: I1210 12:33:55.903943 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f589775b-f52a-4aaa-b009-97dd8e324eda-dns-svc\") pod \"dnsmasq-dns-86db49b7ff-2jtj8\" (UID: \"f589775b-f52a-4aaa-b009-97dd8e324eda\") " pod="openstack/dnsmasq-dns-86db49b7ff-2jtj8" Dec 10 12:33:55 crc kubenswrapper[4689]: I1210 12:33:55.904771 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f589775b-f52a-4aaa-b009-97dd8e324eda-ovsdbserver-sb\") pod \"dnsmasq-dns-86db49b7ff-2jtj8\" (UID: \"f589775b-f52a-4aaa-b009-97dd8e324eda\") " pod="openstack/dnsmasq-dns-86db49b7ff-2jtj8" Dec 10 12:33:55 crc kubenswrapper[4689]: I1210 12:33:55.904774 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f589775b-f52a-4aaa-b009-97dd8e324eda-config\") pod \"dnsmasq-dns-86db49b7ff-2jtj8\" (UID: \"f589775b-f52a-4aaa-b009-97dd8e324eda\") " pod="openstack/dnsmasq-dns-86db49b7ff-2jtj8" Dec 10 12:33:55 crc kubenswrapper[4689]: I1210 12:33:55.905382 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f589775b-f52a-4aaa-b009-97dd8e324eda-ovsdbserver-nb\") pod \"dnsmasq-dns-86db49b7ff-2jtj8\" (UID: \"f589775b-f52a-4aaa-b009-97dd8e324eda\") " pod="openstack/dnsmasq-dns-86db49b7ff-2jtj8" Dec 10 12:33:55 crc kubenswrapper[4689]: I1210 12:33:55.907837 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f589775b-f52a-4aaa-b009-97dd8e324eda-dns-svc\") pod \"dnsmasq-dns-86db49b7ff-2jtj8\" (UID: \"f589775b-f52a-4aaa-b009-97dd8e324eda\") " pod="openstack/dnsmasq-dns-86db49b7ff-2jtj8" Dec 10 12:33:55 crc kubenswrapper[4689]: I1210 12:33:55.921884 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xcrwn\" (UniqueName: \"kubernetes.io/projected/f589775b-f52a-4aaa-b009-97dd8e324eda-kube-api-access-xcrwn\") pod \"dnsmasq-dns-86db49b7ff-2jtj8\" (UID: \"f589775b-f52a-4aaa-b009-97dd8e324eda\") " pod="openstack/dnsmasq-dns-86db49b7ff-2jtj8" Dec 10 12:33:56 crc kubenswrapper[4689]: I1210 12:33:56.040430 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-0" Dec 10 12:33:56 crc kubenswrapper[4689]: I1210 12:33:56.080517 4689 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-2jtj8" Dec 10 12:33:56 crc kubenswrapper[4689]: I1210 12:33:56.119758 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-0" Dec 10 12:33:56 crc kubenswrapper[4689]: I1210 12:33:56.140044 4689 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7f896c8c65-p25l6"] Dec 10 12:33:56 crc kubenswrapper[4689]: W1210 12:33:56.155849 4689 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfac27dde_5617_46e0_a085_0c9b7e97237e.slice/crio-c1ceed53784b93ec01a0396b1897f3c9999ab09da911f6f423ea2add09ad628b WatchSource:0}: Error finding container c1ceed53784b93ec01a0396b1897f3c9999ab09da911f6f423ea2add09ad628b: Status 404 returned error can't find the container with id c1ceed53784b93ec01a0396b1897f3c9999ab09da911f6f423ea2add09ad628b Dec 10 12:33:56 crc kubenswrapper[4689]: I1210 12:33:56.238392 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-q4rdb"] Dec 10 12:33:56 crc kubenswrapper[4689]: W1210 12:33:56.251389 4689 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poded1dfb51_596d_4779_9d07_37b566b30adf.slice/crio-ef71cd145b678e8365309d4da68352fe328e4b117d6e5b85ab930403bd87d0c0 WatchSource:0}: Error finding container ef71cd145b678e8365309d4da68352fe328e4b117d6e5b85ab930403bd87d0c0: Status 404 returned error can't find the container with id ef71cd145b678e8365309d4da68352fe328e4b117d6e5b85ab930403bd87d0c0 Dec 10 12:33:56 crc kubenswrapper[4689]: I1210 12:33:56.338872 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-northd-0"] Dec 10 12:33:56 crc kubenswrapper[4689]: I1210 12:33:56.340125 4689 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-northd-0" Dec 10 12:33:56 crc kubenswrapper[4689]: I1210 12:33:56.345655 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Dec 10 12:33:56 crc kubenswrapper[4689]: I1210 12:33:56.346229 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-scripts" Dec 10 12:33:56 crc kubenswrapper[4689]: I1210 12:33:56.346290 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-config" Dec 10 12:33:56 crc kubenswrapper[4689]: I1210 12:33:56.346409 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovnnorthd-ovnnorthd-dockercfg-tj8fp" Dec 10 12:33:56 crc kubenswrapper[4689]: I1210 12:33:56.346566 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovnnorthd-ovndbs" Dec 10 12:33:56 crc kubenswrapper[4689]: I1210 12:33:56.418032 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/1d70ed4a-58eb-456e-bd2a-9b3c199a94bf-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"1d70ed4a-58eb-456e-bd2a-9b3c199a94bf\") " pod="openstack/ovn-northd-0" Dec 10 12:33:56 crc kubenswrapper[4689]: I1210 12:33:56.418395 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1d70ed4a-58eb-456e-bd2a-9b3c199a94bf-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"1d70ed4a-58eb-456e-bd2a-9b3c199a94bf\") " pod="openstack/ovn-northd-0" Dec 10 12:33:56 crc kubenswrapper[4689]: I1210 12:33:56.418497 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t6n42\" (UniqueName: \"kubernetes.io/projected/1d70ed4a-58eb-456e-bd2a-9b3c199a94bf-kube-api-access-t6n42\") pod \"ovn-northd-0\" (UID: \"1d70ed4a-58eb-456e-bd2a-9b3c199a94bf\") " pod="openstack/ovn-northd-0" Dec 10 12:33:56 crc kubenswrapper[4689]: I1210 12:33:56.418583 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1d70ed4a-58eb-456e-bd2a-9b3c199a94bf-config\") pod \"ovn-northd-0\" (UID: \"1d70ed4a-58eb-456e-bd2a-9b3c199a94bf\") " pod="openstack/ovn-northd-0" Dec 10 12:33:56 crc kubenswrapper[4689]: I1210 12:33:56.418739 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/1d70ed4a-58eb-456e-bd2a-9b3c199a94bf-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"1d70ed4a-58eb-456e-bd2a-9b3c199a94bf\") " pod="openstack/ovn-northd-0" Dec 10 12:33:56 crc kubenswrapper[4689]: I1210 12:33:56.418844 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1d70ed4a-58eb-456e-bd2a-9b3c199a94bf-scripts\") pod \"ovn-northd-0\" (UID: \"1d70ed4a-58eb-456e-bd2a-9b3c199a94bf\") " pod="openstack/ovn-northd-0" Dec 10 12:33:56 crc kubenswrapper[4689]: I1210 12:33:56.418916 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/1d70ed4a-58eb-456e-bd2a-9b3c199a94bf-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"1d70ed4a-58eb-456e-bd2a-9b3c199a94bf\") " pod="openstack/ovn-northd-0" Dec 10 12:33:56 crc kubenswrapper[4689]: 
I1210 12:33:56.520831 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/1d70ed4a-58eb-456e-bd2a-9b3c199a94bf-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"1d70ed4a-58eb-456e-bd2a-9b3c199a94bf\") " pod="openstack/ovn-northd-0" Dec 10 12:33:56 crc kubenswrapper[4689]: I1210 12:33:56.521197 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/1d70ed4a-58eb-456e-bd2a-9b3c199a94bf-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"1d70ed4a-58eb-456e-bd2a-9b3c199a94bf\") " pod="openstack/ovn-northd-0" Dec 10 12:33:56 crc kubenswrapper[4689]: I1210 12:33:56.521336 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1d70ed4a-58eb-456e-bd2a-9b3c199a94bf-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"1d70ed4a-58eb-456e-bd2a-9b3c199a94bf\") " pod="openstack/ovn-northd-0" Dec 10 12:33:56 crc kubenswrapper[4689]: I1210 12:33:56.521374 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t6n42\" (UniqueName: \"kubernetes.io/projected/1d70ed4a-58eb-456e-bd2a-9b3c199a94bf-kube-api-access-t6n42\") pod \"ovn-northd-0\" (UID: \"1d70ed4a-58eb-456e-bd2a-9b3c199a94bf\") " pod="openstack/ovn-northd-0" Dec 10 12:33:56 crc kubenswrapper[4689]: I1210 12:33:56.521417 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1d70ed4a-58eb-456e-bd2a-9b3c199a94bf-config\") pod \"ovn-northd-0\" (UID: \"1d70ed4a-58eb-456e-bd2a-9b3c199a94bf\") " pod="openstack/ovn-northd-0" Dec 10 12:33:56 crc kubenswrapper[4689]: I1210 12:33:56.521489 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/1d70ed4a-58eb-456e-bd2a-9b3c199a94bf-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"1d70ed4a-58eb-456e-bd2a-9b3c199a94bf\") " pod="openstack/ovn-northd-0" Dec 10 12:33:56 crc kubenswrapper[4689]: I1210 12:33:56.521502 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/1d70ed4a-58eb-456e-bd2a-9b3c199a94bf-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"1d70ed4a-58eb-456e-bd2a-9b3c199a94bf\") " pod="openstack/ovn-northd-0" Dec 10 12:33:56 crc kubenswrapper[4689]: I1210 12:33:56.521610 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1d70ed4a-58eb-456e-bd2a-9b3c199a94bf-scripts\") pod \"ovn-northd-0\" (UID: \"1d70ed4a-58eb-456e-bd2a-9b3c199a94bf\") " pod="openstack/ovn-northd-0" Dec 10 12:33:56 crc kubenswrapper[4689]: I1210 12:33:56.525100 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1d70ed4a-58eb-456e-bd2a-9b3c199a94bf-scripts\") pod \"ovn-northd-0\" (UID: \"1d70ed4a-58eb-456e-bd2a-9b3c199a94bf\") " pod="openstack/ovn-northd-0" Dec 10 12:33:56 crc kubenswrapper[4689]: I1210 12:33:56.526631 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1d70ed4a-58eb-456e-bd2a-9b3c199a94bf-config\") pod \"ovn-northd-0\" (UID: \"1d70ed4a-58eb-456e-bd2a-9b3c199a94bf\") " pod="openstack/ovn-northd-0" Dec 10 12:33:56 crc kubenswrapper[4689]: I1210 12:33:56.527878 4689 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1d70ed4a-58eb-456e-bd2a-9b3c199a94bf-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"1d70ed4a-58eb-456e-bd2a-9b3c199a94bf\") " pod="openstack/ovn-northd-0" Dec 10 12:33:56 crc kubenswrapper[4689]: I1210 12:33:56.536267 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/1d70ed4a-58eb-456e-bd2a-9b3c199a94bf-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"1d70ed4a-58eb-456e-bd2a-9b3c199a94bf\") " pod="openstack/ovn-northd-0" Dec 10 12:33:56 crc kubenswrapper[4689]: I1210 12:33:56.538797 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/1d70ed4a-58eb-456e-bd2a-9b3c199a94bf-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"1d70ed4a-58eb-456e-bd2a-9b3c199a94bf\") " pod="openstack/ovn-northd-0" Dec 10 12:33:56 crc kubenswrapper[4689]: I1210 12:33:56.553839 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t6n42\" (UniqueName: \"kubernetes.io/projected/1d70ed4a-58eb-456e-bd2a-9b3c199a94bf-kube-api-access-t6n42\") pod \"ovn-northd-0\" (UID: \"1d70ed4a-58eb-456e-bd2a-9b3c199a94bf\") " pod="openstack/ovn-northd-0" Dec 10 12:33:56 crc kubenswrapper[4689]: I1210 12:33:56.585495 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-2jtj8"] Dec 10 12:33:56 crc kubenswrapper[4689]: I1210 12:33:56.671808 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Dec 10 12:33:57 crc kubenswrapper[4689]: I1210 12:33:57.048784 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7f896c8c65-p25l6" event={"ID":"fac27dde-5617-46e0-a085-0c9b7e97237e","Type":"ContainerStarted","Data":"c1ceed53784b93ec01a0396b1897f3c9999ab09da911f6f423ea2add09ad628b"} Dec 10 12:33:57 crc kubenswrapper[4689]: I1210 12:33:57.050721 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-q4rdb" event={"ID":"ed1dfb51-596d-4779-9d07-37b566b30adf","Type":"ContainerStarted","Data":"ef71cd145b678e8365309d4da68352fe328e4b117d6e5b85ab930403bd87d0c0"} Dec 10 12:33:57 crc kubenswrapper[4689]: I1210 12:33:57.055612 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-2jtj8" event={"ID":"f589775b-f52a-4aaa-b009-97dd8e324eda","Type":"ContainerStarted","Data":"3463548970d0c9cca0fd5e8e529febe990d939d52aadd5db9b2460456393c5b5"} Dec 10 12:33:57 crc kubenswrapper[4689]: I1210 12:33:57.181880 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Dec 10 12:33:58 crc kubenswrapper[4689]: I1210 12:33:58.024151 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0" Dec 10 12:33:58 crc kubenswrapper[4689]: I1210 12:33:58.024555 4689 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0" Dec 10 12:33:58 crc kubenswrapper[4689]: I1210 12:33:58.063190 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"1d70ed4a-58eb-456e-bd2a-9b3c199a94bf","Type":"ContainerStarted","Data":"36905c5d1ecbcb23ee8d482431d69f5b9bf9f7f205b45e649e7da8adc95faca1"} Dec 10 12:33:59 crc kubenswrapper[4689]: I1210 12:33:59.598099 4689 kubelet.go:2542] "SyncLoop (probe)" 
probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0" Dec 10 12:33:59 crc kubenswrapper[4689]: I1210 12:33:59.598388 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0" Dec 10 12:34:00 crc kubenswrapper[4689]: I1210 12:34:00.083349 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7f896c8c65-p25l6" event={"ID":"fac27dde-5617-46e0-a085-0c9b7e97237e","Type":"ContainerStarted","Data":"66a43969ffb45cf60440fabcc7534aecc236d48d61abf3087587cbed0285bfa8"} Dec 10 12:34:01 crc kubenswrapper[4689]: I1210 12:34:01.094123 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-q4rdb" event={"ID":"ed1dfb51-596d-4779-9d07-37b566b30adf","Type":"ContainerStarted","Data":"6cebc4f1d6c517d7730461ab0b22b251a78e4de09208279a200911d2651566e8"} Dec 10 12:34:01 crc kubenswrapper[4689]: I1210 12:34:01.098553 4689 generic.go:334] "Generic (PLEG): container finished" podID="f589775b-f52a-4aaa-b009-97dd8e324eda" containerID="aa5f789598953c063960426f424b249d09b1616272a0ad9f5bbffe810541020e" exitCode=0 Dec 10 12:34:01 crc kubenswrapper[4689]: I1210 12:34:01.098637 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-2jtj8" event={"ID":"f589775b-f52a-4aaa-b009-97dd8e324eda","Type":"ContainerDied","Data":"aa5f789598953c063960426f424b249d09b1616272a0ad9f5bbffe810541020e"} Dec 10 12:34:01 crc kubenswrapper[4689]: I1210 12:34:01.101281 4689 generic.go:334] "Generic (PLEG): container finished" podID="fac27dde-5617-46e0-a085-0c9b7e97237e" containerID="66a43969ffb45cf60440fabcc7534aecc236d48d61abf3087587cbed0285bfa8" exitCode=0 Dec 10 12:34:01 crc kubenswrapper[4689]: I1210 12:34:01.101331 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7f896c8c65-p25l6" event={"ID":"fac27dde-5617-46e0-a085-0c9b7e97237e","Type":"ContainerDied","Data":"66a43969ffb45cf60440fabcc7534aecc236d48d61abf3087587cbed0285bfa8"} Dec 10 12:34:01 crc kubenswrapper[4689]: I1210 12:34:01.125167 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-metrics-q4rdb" podStartSLOduration=6.125131638 podStartE2EDuration="6.125131638s" podCreationTimestamp="2025-12-10 12:33:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 12:34:01.115438407 +0000 UTC m=+1108.903519545" watchObservedRunningTime="2025-12-10 12:34:01.125131638 +0000 UTC m=+1108.913212776" Dec 10 12:34:01 crc kubenswrapper[4689]: I1210 12:34:01.566948 4689 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-2jtj8"] Dec 10 12:34:01 crc kubenswrapper[4689]: I1210 12:34:01.577494 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-698758b865-xrcb2"] Dec 10 12:34:01 crc kubenswrapper[4689]: I1210 12:34:01.578750 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-xrcb2" Dec 10 12:34:01 crc kubenswrapper[4689]: I1210 12:34:01.613842 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Dec 10 12:34:01 crc kubenswrapper[4689]: I1210 12:34:01.618539 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-698758b865-xrcb2"] Dec 10 12:34:01 crc kubenswrapper[4689]: I1210 12:34:01.629042 4689 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7f896c8c65-p25l6" Dec 10 12:34:01 crc kubenswrapper[4689]: I1210 12:34:01.714513 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e86fae08-d350-4939-a3fb-1131e3f5e5a2-config\") pod \"dnsmasq-dns-698758b865-xrcb2\" (UID: \"e86fae08-d350-4939-a3fb-1131e3f5e5a2\") " pod="openstack/dnsmasq-dns-698758b865-xrcb2" Dec 10 12:34:01 crc kubenswrapper[4689]: I1210 12:34:01.714559 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e86fae08-d350-4939-a3fb-1131e3f5e5a2-dns-svc\") pod \"dnsmasq-dns-698758b865-xrcb2\" (UID: \"e86fae08-d350-4939-a3fb-1131e3f5e5a2\") " pod="openstack/dnsmasq-dns-698758b865-xrcb2" Dec 10 12:34:01 crc kubenswrapper[4689]: I1210 12:34:01.714584 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-phlw9\" (UniqueName: \"kubernetes.io/projected/e86fae08-d350-4939-a3fb-1131e3f5e5a2-kube-api-access-phlw9\") pod \"dnsmasq-dns-698758b865-xrcb2\" (UID: \"e86fae08-d350-4939-a3fb-1131e3f5e5a2\") " pod="openstack/dnsmasq-dns-698758b865-xrcb2" Dec 10 12:34:01 crc kubenswrapper[4689]: I1210 12:34:01.714607 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e86fae08-d350-4939-a3fb-1131e3f5e5a2-ovsdbserver-sb\") pod \"dnsmasq-dns-698758b865-xrcb2\" (UID: \"e86fae08-d350-4939-a3fb-1131e3f5e5a2\") " pod="openstack/dnsmasq-dns-698758b865-xrcb2" Dec 10 12:34:01 crc kubenswrapper[4689]: I1210 12:34:01.714700 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e86fae08-d350-4939-a3fb-1131e3f5e5a2-ovsdbserver-nb\") pod \"dnsmasq-dns-698758b865-xrcb2\" (UID: \"e86fae08-d350-4939-a3fb-1131e3f5e5a2\") " pod="openstack/dnsmasq-dns-698758b865-xrcb2" Dec 10 12:34:01 crc kubenswrapper[4689]: I1210 12:34:01.820622 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fac27dde-5617-46e0-a085-0c9b7e97237e-config\") pod \"fac27dde-5617-46e0-a085-0c9b7e97237e\" (UID: \"fac27dde-5617-46e0-a085-0c9b7e97237e\") " Dec 10 12:34:01 crc kubenswrapper[4689]: I1210 12:34:01.821924 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fac27dde-5617-46e0-a085-0c9b7e97237e-dns-svc\") pod \"fac27dde-5617-46e0-a085-0c9b7e97237e\" (UID: \"fac27dde-5617-46e0-a085-0c9b7e97237e\") " Dec 10 12:34:01 crc kubenswrapper[4689]: I1210 12:34:01.822110 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fac27dde-5617-46e0-a085-0c9b7e97237e-ovsdbserver-sb\") pod \"fac27dde-5617-46e0-a085-0c9b7e97237e\" (UID: \"fac27dde-5617-46e0-a085-0c9b7e97237e\") " Dec 10 12:34:01 crc kubenswrapper[4689]: I1210 12:34:01.822311 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6c24x\" (UniqueName: \"kubernetes.io/projected/fac27dde-5617-46e0-a085-0c9b7e97237e-kube-api-access-6c24x\") pod \"fac27dde-5617-46e0-a085-0c9b7e97237e\" (UID: \"fac27dde-5617-46e0-a085-0c9b7e97237e\") " Dec 10 12:34:01 crc kubenswrapper[4689]: I1210 
12:34:01.822643 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e86fae08-d350-4939-a3fb-1131e3f5e5a2-dns-svc\") pod \"dnsmasq-dns-698758b865-xrcb2\" (UID: \"e86fae08-d350-4939-a3fb-1131e3f5e5a2\") " pod="openstack/dnsmasq-dns-698758b865-xrcb2" Dec 10 12:34:01 crc kubenswrapper[4689]: I1210 12:34:01.822825 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-phlw9\" (UniqueName: \"kubernetes.io/projected/e86fae08-d350-4939-a3fb-1131e3f5e5a2-kube-api-access-phlw9\") pod \"dnsmasq-dns-698758b865-xrcb2\" (UID: \"e86fae08-d350-4939-a3fb-1131e3f5e5a2\") " pod="openstack/dnsmasq-dns-698758b865-xrcb2" Dec 10 12:34:01 crc kubenswrapper[4689]: I1210 12:34:01.822948 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e86fae08-d350-4939-a3fb-1131e3f5e5a2-ovsdbserver-sb\") pod \"dnsmasq-dns-698758b865-xrcb2\" (UID: \"e86fae08-d350-4939-a3fb-1131e3f5e5a2\") " pod="openstack/dnsmasq-dns-698758b865-xrcb2" Dec 10 12:34:01 crc kubenswrapper[4689]: I1210 12:34:01.823096 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e86fae08-d350-4939-a3fb-1131e3f5e5a2-ovsdbserver-nb\") pod \"dnsmasq-dns-698758b865-xrcb2\" (UID: \"e86fae08-d350-4939-a3fb-1131e3f5e5a2\") " pod="openstack/dnsmasq-dns-698758b865-xrcb2" Dec 10 12:34:01 crc kubenswrapper[4689]: I1210 12:34:01.823397 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e86fae08-d350-4939-a3fb-1131e3f5e5a2-config\") pod \"dnsmasq-dns-698758b865-xrcb2\" (UID: \"e86fae08-d350-4939-a3fb-1131e3f5e5a2\") " pod="openstack/dnsmasq-dns-698758b865-xrcb2" Dec 10 12:34:01 crc kubenswrapper[4689]: I1210 12:34:01.823768 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e86fae08-d350-4939-a3fb-1131e3f5e5a2-ovsdbserver-sb\") pod \"dnsmasq-dns-698758b865-xrcb2\" (UID: \"e86fae08-d350-4939-a3fb-1131e3f5e5a2\") " pod="openstack/dnsmasq-dns-698758b865-xrcb2" Dec 10 12:34:01 crc kubenswrapper[4689]: I1210 12:34:01.824151 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e86fae08-d350-4939-a3fb-1131e3f5e5a2-ovsdbserver-nb\") pod \"dnsmasq-dns-698758b865-xrcb2\" (UID: \"e86fae08-d350-4939-a3fb-1131e3f5e5a2\") " pod="openstack/dnsmasq-dns-698758b865-xrcb2" Dec 10 12:34:01 crc kubenswrapper[4689]: I1210 12:34:01.824735 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e86fae08-d350-4939-a3fb-1131e3f5e5a2-config\") pod \"dnsmasq-dns-698758b865-xrcb2\" (UID: \"e86fae08-d350-4939-a3fb-1131e3f5e5a2\") " pod="openstack/dnsmasq-dns-698758b865-xrcb2" Dec 10 12:34:01 crc kubenswrapper[4689]: I1210 12:34:01.827642 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e86fae08-d350-4939-a3fb-1131e3f5e5a2-dns-svc\") pod \"dnsmasq-dns-698758b865-xrcb2\" (UID: \"e86fae08-d350-4939-a3fb-1131e3f5e5a2\") " pod="openstack/dnsmasq-dns-698758b865-xrcb2" Dec 10 12:34:01 crc kubenswrapper[4689]: I1210 12:34:01.837167 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/fac27dde-5617-46e0-a085-0c9b7e97237e-kube-api-access-6c24x" (OuterVolumeSpecName: "kube-api-access-6c24x") pod "fac27dde-5617-46e0-a085-0c9b7e97237e" (UID: "fac27dde-5617-46e0-a085-0c9b7e97237e"). InnerVolumeSpecName "kube-api-access-6c24x". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 12:34:01 crc kubenswrapper[4689]: I1210 12:34:01.848847 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fac27dde-5617-46e0-a085-0c9b7e97237e-config" (OuterVolumeSpecName: "config") pod "fac27dde-5617-46e0-a085-0c9b7e97237e" (UID: "fac27dde-5617-46e0-a085-0c9b7e97237e"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 12:34:01 crc kubenswrapper[4689]: I1210 12:34:01.848924 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-phlw9\" (UniqueName: \"kubernetes.io/projected/e86fae08-d350-4939-a3fb-1131e3f5e5a2-kube-api-access-phlw9\") pod \"dnsmasq-dns-698758b865-xrcb2\" (UID: \"e86fae08-d350-4939-a3fb-1131e3f5e5a2\") " pod="openstack/dnsmasq-dns-698758b865-xrcb2" Dec 10 12:34:01 crc kubenswrapper[4689]: I1210 12:34:01.852434 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fac27dde-5617-46e0-a085-0c9b7e97237e-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "fac27dde-5617-46e0-a085-0c9b7e97237e" (UID: "fac27dde-5617-46e0-a085-0c9b7e97237e"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 12:34:01 crc kubenswrapper[4689]: I1210 12:34:01.855707 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fac27dde-5617-46e0-a085-0c9b7e97237e-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "fac27dde-5617-46e0-a085-0c9b7e97237e" (UID: "fac27dde-5617-46e0-a085-0c9b7e97237e"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 12:34:01 crc kubenswrapper[4689]: I1210 12:34:01.926010 4689 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6c24x\" (UniqueName: \"kubernetes.io/projected/fac27dde-5617-46e0-a085-0c9b7e97237e-kube-api-access-6c24x\") on node \"crc\" DevicePath \"\"" Dec 10 12:34:01 crc kubenswrapper[4689]: I1210 12:34:01.926350 4689 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fac27dde-5617-46e0-a085-0c9b7e97237e-config\") on node \"crc\" DevicePath \"\"" Dec 10 12:34:01 crc kubenswrapper[4689]: I1210 12:34:01.926362 4689 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fac27dde-5617-46e0-a085-0c9b7e97237e-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 10 12:34:01 crc kubenswrapper[4689]: I1210 12:34:01.926371 4689 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fac27dde-5617-46e0-a085-0c9b7e97237e-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 10 12:34:01 crc kubenswrapper[4689]: I1210 12:34:01.966745 4689 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-xrcb2" Dec 10 12:34:02 crc kubenswrapper[4689]: I1210 12:34:02.117460 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7f896c8c65-p25l6" event={"ID":"fac27dde-5617-46e0-a085-0c9b7e97237e","Type":"ContainerDied","Data":"c1ceed53784b93ec01a0396b1897f3c9999ab09da911f6f423ea2add09ad628b"} Dec 10 12:34:02 crc kubenswrapper[4689]: I1210 12:34:02.117803 4689 scope.go:117] "RemoveContainer" containerID="66a43969ffb45cf60440fabcc7534aecc236d48d61abf3087587cbed0285bfa8" Dec 10 12:34:02 crc kubenswrapper[4689]: I1210 12:34:02.117935 4689 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7f896c8c65-p25l6" Dec 10 12:34:02 crc kubenswrapper[4689]: I1210 12:34:02.121594 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"1d70ed4a-58eb-456e-bd2a-9b3c199a94bf","Type":"ContainerStarted","Data":"9f7c359dda73c7efa78b32c36f69b831e19bbb37ed50dc1d37dc721df1d352ba"} Dec 10 12:34:02 crc kubenswrapper[4689]: I1210 12:34:02.121640 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"1d70ed4a-58eb-456e-bd2a-9b3c199a94bf","Type":"ContainerStarted","Data":"cdb65727f823593983ef54f19e79c0be15e9343e45dd59e3124049065b8167c8"} Dec 10 12:34:02 crc kubenswrapper[4689]: I1210 12:34:02.121660 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-northd-0" Dec 10 12:34:02 crc kubenswrapper[4689]: I1210 12:34:02.126936 4689 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-86db49b7ff-2jtj8" podUID="f589775b-f52a-4aaa-b009-97dd8e324eda" containerName="dnsmasq-dns" containerID="cri-o://bf4a42ad45233c50e78bf3dc34c7233888677e477d4d8216b828457438670826" gracePeriod=10 Dec 10 12:34:02 crc kubenswrapper[4689]: I1210 12:34:02.127162 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-2jtj8" event={"ID":"f589775b-f52a-4aaa-b009-97dd8e324eda","Type":"ContainerStarted","Data":"bf4a42ad45233c50e78bf3dc34c7233888677e477d4d8216b828457438670826"} Dec 10 12:34:02 crc kubenswrapper[4689]: I1210 12:34:02.127199 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-86db49b7ff-2jtj8" Dec 10 12:34:02 crc kubenswrapper[4689]: I1210 12:34:02.140865 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-northd-0" podStartSLOduration=1.842582358 podStartE2EDuration="6.140848952s" podCreationTimestamp="2025-12-10 12:33:56 +0000 UTC" firstStartedPulling="2025-12-10 12:33:57.187203109 +0000 UTC m=+1104.975284247" lastFinishedPulling="2025-12-10 12:34:01.485469703 +0000 UTC m=+1109.273550841" observedRunningTime="2025-12-10 12:34:02.140362211 +0000 UTC m=+1109.928443359" watchObservedRunningTime="2025-12-10 12:34:02.140848952 +0000 UTC m=+1109.928930090" Dec 10 12:34:02 crc kubenswrapper[4689]: I1210 12:34:02.173764 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-86db49b7ff-2jtj8" podStartSLOduration=7.173743489 podStartE2EDuration="7.173743489s" podCreationTimestamp="2025-12-10 12:33:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 12:34:02.166217902 +0000 UTC m=+1109.954299040" watchObservedRunningTime="2025-12-10 12:34:02.173743489 +0000 UTC m=+1109.961824637" Dec 10 
12:34:02 crc kubenswrapper[4689]: I1210 12:34:02.211834 4689 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7f896c8c65-p25l6"] Dec 10 12:34:02 crc kubenswrapper[4689]: I1210 12:34:02.216533 4689 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7f896c8c65-p25l6"] Dec 10 12:34:02 crc kubenswrapper[4689]: I1210 12:34:02.448262 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-698758b865-xrcb2"] Dec 10 12:34:02 crc kubenswrapper[4689]: I1210 12:34:02.514100 4689 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fac27dde-5617-46e0-a085-0c9b7e97237e" path="/var/lib/kubelet/pods/fac27dde-5617-46e0-a085-0c9b7e97237e/volumes" Dec 10 12:34:02 crc kubenswrapper[4689]: I1210 12:34:02.630377 4689 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-2jtj8" Dec 10 12:34:02 crc kubenswrapper[4689]: I1210 12:34:02.642743 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcrwn\" (UniqueName: \"kubernetes.io/projected/f589775b-f52a-4aaa-b009-97dd8e324eda-kube-api-access-xcrwn\") pod \"f589775b-f52a-4aaa-b009-97dd8e324eda\" (UID: \"f589775b-f52a-4aaa-b009-97dd8e324eda\") " Dec 10 12:34:02 crc kubenswrapper[4689]: I1210 12:34:02.642788 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f589775b-f52a-4aaa-b009-97dd8e324eda-ovsdbserver-sb\") pod \"f589775b-f52a-4aaa-b009-97dd8e324eda\" (UID: \"f589775b-f52a-4aaa-b009-97dd8e324eda\") " Dec 10 12:34:02 crc kubenswrapper[4689]: I1210 12:34:02.642810 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f589775b-f52a-4aaa-b009-97dd8e324eda-dns-svc\") pod \"f589775b-f52a-4aaa-b009-97dd8e324eda\" (UID: \"f589775b-f52a-4aaa-b009-97dd8e324eda\") " Dec 10 12:34:02 crc kubenswrapper[4689]: I1210 12:34:02.642833 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f589775b-f52a-4aaa-b009-97dd8e324eda-ovsdbserver-nb\") pod \"f589775b-f52a-4aaa-b009-97dd8e324eda\" (UID: \"f589775b-f52a-4aaa-b009-97dd8e324eda\") " Dec 10 12:34:02 crc kubenswrapper[4689]: I1210 12:34:02.642856 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f589775b-f52a-4aaa-b009-97dd8e324eda-config\") pod \"f589775b-f52a-4aaa-b009-97dd8e324eda\" (UID: \"f589775b-f52a-4aaa-b009-97dd8e324eda\") " Dec 10 12:34:02 crc kubenswrapper[4689]: I1210 12:34:02.647363 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f589775b-f52a-4aaa-b009-97dd8e324eda-kube-api-access-xcrwn" (OuterVolumeSpecName: "kube-api-access-xcrwn") pod "f589775b-f52a-4aaa-b009-97dd8e324eda" (UID: "f589775b-f52a-4aaa-b009-97dd8e324eda"). InnerVolumeSpecName "kube-api-access-xcrwn". 
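The ovn-northd-0 startup record above is the contrasting case: it has real firstStartedPulling/lastFinishedPulling timestamps, and podStartSLOduration (1.842582358s) comes out shorter than podStartE2EDuration (6.140848952s) by exactly the image-pull window. A Go check of that subtraction, using the timestamps from the record:

package main

import (
	"fmt"
	"time"
)

const layout = "2006-01-02 15:04:05.999999999 -0700 MST"

func mustParse(s string) time.Time {
	t, err := time.Parse(layout, s)
	if err != nil {
		panic(err)
	}
	return t
}

func main() {
	created := mustParse("2025-12-10 12:33:56 +0000 UTC")
	running := mustParse("2025-12-10 12:34:02.140848952 +0000 UTC")
	pullStart := mustParse("2025-12-10 12:33:57.187203109 +0000 UTC")
	pullEnd := mustParse("2025-12-10 12:34:01.485469703 +0000 UTC")
	e2e := running.Sub(created)
	// Subtracting the pull window reproduces the SLO figure exactly,
	// consistent with the SLO duration excluding image-pull time.
	slo := e2e - pullEnd.Sub(pullStart)
	fmt.Println(e2e, slo) // 6.140848952s 1.842582358s
}
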
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 12:34:02 crc kubenswrapper[4689]: I1210 12:34:02.707535 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-storage-0"] Dec 10 12:34:02 crc kubenswrapper[4689]: E1210 12:34:02.707870 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fac27dde-5617-46e0-a085-0c9b7e97237e" containerName="init" Dec 10 12:34:02 crc kubenswrapper[4689]: I1210 12:34:02.707888 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="fac27dde-5617-46e0-a085-0c9b7e97237e" containerName="init" Dec 10 12:34:02 crc kubenswrapper[4689]: E1210 12:34:02.707896 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f589775b-f52a-4aaa-b009-97dd8e324eda" containerName="dnsmasq-dns" Dec 10 12:34:02 crc kubenswrapper[4689]: I1210 12:34:02.707903 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="f589775b-f52a-4aaa-b009-97dd8e324eda" containerName="dnsmasq-dns" Dec 10 12:34:02 crc kubenswrapper[4689]: E1210 12:34:02.707920 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f589775b-f52a-4aaa-b009-97dd8e324eda" containerName="init" Dec 10 12:34:02 crc kubenswrapper[4689]: I1210 12:34:02.707926 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="f589775b-f52a-4aaa-b009-97dd8e324eda" containerName="init" Dec 10 12:34:02 crc kubenswrapper[4689]: I1210 12:34:02.708177 4689 memory_manager.go:354] "RemoveStaleState removing state" podUID="f589775b-f52a-4aaa-b009-97dd8e324eda" containerName="dnsmasq-dns" Dec 10 12:34:02 crc kubenswrapper[4689]: I1210 12:34:02.708191 4689 memory_manager.go:354] "RemoveStaleState removing state" podUID="fac27dde-5617-46e0-a085-0c9b7e97237e" containerName="init" Dec 10 12:34:02 crc kubenswrapper[4689]: I1210 12:34:02.713603 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f589775b-f52a-4aaa-b009-97dd8e324eda-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "f589775b-f52a-4aaa-b009-97dd8e324eda" (UID: "f589775b-f52a-4aaa-b009-97dd8e324eda"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 12:34:02 crc kubenswrapper[4689]: I1210 12:34:02.715232 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-storage-0" Dec 10 12:34:02 crc kubenswrapper[4689]: I1210 12:34:02.721737 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Dec 10 12:34:02 crc kubenswrapper[4689]: I1210 12:34:02.728133 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-files" Dec 10 12:34:02 crc kubenswrapper[4689]: I1210 12:34:02.728378 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f589775b-f52a-4aaa-b009-97dd8e324eda-config" (OuterVolumeSpecName: "config") pod "f589775b-f52a-4aaa-b009-97dd8e324eda" (UID: "f589775b-f52a-4aaa-b009-97dd8e324eda"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 12:34:02 crc kubenswrapper[4689]: I1210 12:34:02.728485 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-conf" Dec 10 12:34:02 crc kubenswrapper[4689]: I1210 12:34:02.728645 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-swift-dockercfg-6sbs9" Dec 10 12:34:02 crc kubenswrapper[4689]: I1210 12:34:02.728672 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-storage-config-data" Dec 10 12:34:02 crc kubenswrapper[4689]: I1210 12:34:02.728778 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f589775b-f52a-4aaa-b009-97dd8e324eda-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "f589775b-f52a-4aaa-b009-97dd8e324eda" (UID: "f589775b-f52a-4aaa-b009-97dd8e324eda"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 12:34:02 crc kubenswrapper[4689]: I1210 12:34:02.732735 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f589775b-f52a-4aaa-b009-97dd8e324eda-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "f589775b-f52a-4aaa-b009-97dd8e324eda" (UID: "f589775b-f52a-4aaa-b009-97dd8e324eda"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 12:34:02 crc kubenswrapper[4689]: I1210 12:34:02.744014 4689 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcrwn\" (UniqueName: \"kubernetes.io/projected/f589775b-f52a-4aaa-b009-97dd8e324eda-kube-api-access-xcrwn\") on node \"crc\" DevicePath \"\"" Dec 10 12:34:02 crc kubenswrapper[4689]: I1210 12:34:02.744043 4689 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f589775b-f52a-4aaa-b009-97dd8e324eda-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 10 12:34:02 crc kubenswrapper[4689]: I1210 12:34:02.744052 4689 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f589775b-f52a-4aaa-b009-97dd8e324eda-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 10 12:34:02 crc kubenswrapper[4689]: I1210 12:34:02.744060 4689 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f589775b-f52a-4aaa-b009-97dd8e324eda-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 10 12:34:02 crc kubenswrapper[4689]: I1210 12:34:02.744068 4689 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f589775b-f52a-4aaa-b009-97dd8e324eda-config\") on node \"crc\" DevicePath \"\"" Dec 10 12:34:02 crc kubenswrapper[4689]: I1210 12:34:02.807331 4689 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-galera-0" Dec 10 12:34:02 crc kubenswrapper[4689]: I1210 12:34:02.845484 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/c4b54476-e438-46d8-b234-c8f661f5c26f-lock\") pod \"swift-storage-0\" (UID: \"c4b54476-e438-46d8-b234-c8f661f5c26f\") " pod="openstack/swift-storage-0" Dec 10 12:34:02 crc kubenswrapper[4689]: I1210 12:34:02.845528 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-st8ck\" (UniqueName: 
\"kubernetes.io/projected/c4b54476-e438-46d8-b234-c8f661f5c26f-kube-api-access-st8ck\") pod \"swift-storage-0\" (UID: \"c4b54476-e438-46d8-b234-c8f661f5c26f\") " pod="openstack/swift-storage-0" Dec 10 12:34:02 crc kubenswrapper[4689]: I1210 12:34:02.845554 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/c4b54476-e438-46d8-b234-c8f661f5c26f-etc-swift\") pod \"swift-storage-0\" (UID: \"c4b54476-e438-46d8-b234-c8f661f5c26f\") " pod="openstack/swift-storage-0" Dec 10 12:34:02 crc kubenswrapper[4689]: I1210 12:34:02.845596 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"swift-storage-0\" (UID: \"c4b54476-e438-46d8-b234-c8f661f5c26f\") " pod="openstack/swift-storage-0" Dec 10 12:34:02 crc kubenswrapper[4689]: I1210 12:34:02.845666 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/c4b54476-e438-46d8-b234-c8f661f5c26f-cache\") pod \"swift-storage-0\" (UID: \"c4b54476-e438-46d8-b234-c8f661f5c26f\") " pod="openstack/swift-storage-0" Dec 10 12:34:02 crc kubenswrapper[4689]: I1210 12:34:02.879723 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-galera-0" Dec 10 12:34:02 crc kubenswrapper[4689]: I1210 12:34:02.947286 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"swift-storage-0\" (UID: \"c4b54476-e438-46d8-b234-c8f661f5c26f\") " pod="openstack/swift-storage-0" Dec 10 12:34:02 crc kubenswrapper[4689]: I1210 12:34:02.947627 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/c4b54476-e438-46d8-b234-c8f661f5c26f-cache\") pod \"swift-storage-0\" (UID: \"c4b54476-e438-46d8-b234-c8f661f5c26f\") " pod="openstack/swift-storage-0" Dec 10 12:34:02 crc kubenswrapper[4689]: I1210 12:34:02.947680 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/c4b54476-e438-46d8-b234-c8f661f5c26f-lock\") pod \"swift-storage-0\" (UID: \"c4b54476-e438-46d8-b234-c8f661f5c26f\") " pod="openstack/swift-storage-0" Dec 10 12:34:02 crc kubenswrapper[4689]: I1210 12:34:02.947702 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-st8ck\" (UniqueName: \"kubernetes.io/projected/c4b54476-e438-46d8-b234-c8f661f5c26f-kube-api-access-st8ck\") pod \"swift-storage-0\" (UID: \"c4b54476-e438-46d8-b234-c8f661f5c26f\") " pod="openstack/swift-storage-0" Dec 10 12:34:02 crc kubenswrapper[4689]: I1210 12:34:02.947723 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/c4b54476-e438-46d8-b234-c8f661f5c26f-etc-swift\") pod \"swift-storage-0\" (UID: \"c4b54476-e438-46d8-b234-c8f661f5c26f\") " pod="openstack/swift-storage-0" Dec 10 12:34:02 crc kubenswrapper[4689]: E1210 12:34:02.948274 4689 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Dec 10 12:34:02 crc kubenswrapper[4689]: E1210 12:34:02.948296 4689 projected.go:194] Error preparing data for projected volume etc-swift for pod 
openstack/swift-storage-0: configmap "swift-ring-files" not found Dec 10 12:34:02 crc kubenswrapper[4689]: E1210 12:34:02.948342 4689 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c4b54476-e438-46d8-b234-c8f661f5c26f-etc-swift podName:c4b54476-e438-46d8-b234-c8f661f5c26f nodeName:}" failed. No retries permitted until 2025-12-10 12:34:03.448327428 +0000 UTC m=+1111.236408566 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/c4b54476-e438-46d8-b234-c8f661f5c26f-etc-swift") pod "swift-storage-0" (UID: "c4b54476-e438-46d8-b234-c8f661f5c26f") : configmap "swift-ring-files" not found Dec 10 12:34:02 crc kubenswrapper[4689]: I1210 12:34:02.948566 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/c4b54476-e438-46d8-b234-c8f661f5c26f-lock\") pod \"swift-storage-0\" (UID: \"c4b54476-e438-46d8-b234-c8f661f5c26f\") " pod="openstack/swift-storage-0" Dec 10 12:34:02 crc kubenswrapper[4689]: I1210 12:34:02.948895 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/c4b54476-e438-46d8-b234-c8f661f5c26f-cache\") pod \"swift-storage-0\" (UID: \"c4b54476-e438-46d8-b234-c8f661f5c26f\") " pod="openstack/swift-storage-0" Dec 10 12:34:02 crc kubenswrapper[4689]: I1210 12:34:02.949024 4689 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"swift-storage-0\" (UID: \"c4b54476-e438-46d8-b234-c8f661f5c26f\") device mount path \"/mnt/openstack/pv04\"" pod="openstack/swift-storage-0" Dec 10 12:34:02 crc kubenswrapper[4689]: I1210 12:34:02.973215 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"swift-storage-0\" (UID: \"c4b54476-e438-46d8-b234-c8f661f5c26f\") " pod="openstack/swift-storage-0" Dec 10 12:34:02 crc kubenswrapper[4689]: I1210 12:34:02.974214 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-st8ck\" (UniqueName: \"kubernetes.io/projected/c4b54476-e438-46d8-b234-c8f661f5c26f-kube-api-access-st8ck\") pod \"swift-storage-0\" (UID: \"c4b54476-e438-46d8-b234-c8f661f5c26f\") " pod="openstack/swift-storage-0" Dec 10 12:34:03 crc kubenswrapper[4689]: I1210 12:34:03.138521 4689 generic.go:334] "Generic (PLEG): container finished" podID="e86fae08-d350-4939-a3fb-1131e3f5e5a2" containerID="d13ced407fdd31f5941494b0fab63411f2b5403ab560a149f2f0252fbbda38ea" exitCode=0 Dec 10 12:34:03 crc kubenswrapper[4689]: I1210 12:34:03.138615 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-xrcb2" event={"ID":"e86fae08-d350-4939-a3fb-1131e3f5e5a2","Type":"ContainerDied","Data":"d13ced407fdd31f5941494b0fab63411f2b5403ab560a149f2f0252fbbda38ea"} Dec 10 12:34:03 crc kubenswrapper[4689]: I1210 12:34:03.138658 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-xrcb2" event={"ID":"e86fae08-d350-4939-a3fb-1131e3f5e5a2","Type":"ContainerStarted","Data":"bd5278b86ee9bae9c8d7c2bb6d428658eface77e98e6a0be57e77297f5e12cda"} Dec 10 12:34:03 crc kubenswrapper[4689]: I1210 12:34:03.142824 4689 generic.go:334] "Generic (PLEG): container finished" podID="f589775b-f52a-4aaa-b009-97dd8e324eda" 
containerID="bf4a42ad45233c50e78bf3dc34c7233888677e477d4d8216b828457438670826" exitCode=0 Dec 10 12:34:03 crc kubenswrapper[4689]: I1210 12:34:03.142907 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-2jtj8" event={"ID":"f589775b-f52a-4aaa-b009-97dd8e324eda","Type":"ContainerDied","Data":"bf4a42ad45233c50e78bf3dc34c7233888677e477d4d8216b828457438670826"} Dec 10 12:34:03 crc kubenswrapper[4689]: I1210 12:34:03.142941 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-2jtj8" event={"ID":"f589775b-f52a-4aaa-b009-97dd8e324eda","Type":"ContainerDied","Data":"3463548970d0c9cca0fd5e8e529febe990d939d52aadd5db9b2460456393c5b5"} Dec 10 12:34:03 crc kubenswrapper[4689]: I1210 12:34:03.143001 4689 scope.go:117] "RemoveContainer" containerID="bf4a42ad45233c50e78bf3dc34c7233888677e477d4d8216b828457438670826" Dec 10 12:34:03 crc kubenswrapper[4689]: I1210 12:34:03.143141 4689 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-2jtj8" Dec 10 12:34:03 crc kubenswrapper[4689]: I1210 12:34:03.178951 4689 scope.go:117] "RemoveContainer" containerID="aa5f789598953c063960426f424b249d09b1616272a0ad9f5bbffe810541020e" Dec 10 12:34:03 crc kubenswrapper[4689]: I1210 12:34:03.201437 4689 scope.go:117] "RemoveContainer" containerID="bf4a42ad45233c50e78bf3dc34c7233888677e477d4d8216b828457438670826" Dec 10 12:34:03 crc kubenswrapper[4689]: E1210 12:34:03.205306 4689 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bf4a42ad45233c50e78bf3dc34c7233888677e477d4d8216b828457438670826\": container with ID starting with bf4a42ad45233c50e78bf3dc34c7233888677e477d4d8216b828457438670826 not found: ID does not exist" containerID="bf4a42ad45233c50e78bf3dc34c7233888677e477d4d8216b828457438670826" Dec 10 12:34:03 crc kubenswrapper[4689]: I1210 12:34:03.205381 4689 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bf4a42ad45233c50e78bf3dc34c7233888677e477d4d8216b828457438670826"} err="failed to get container status \"bf4a42ad45233c50e78bf3dc34c7233888677e477d4d8216b828457438670826\": rpc error: code = NotFound desc = could not find container \"bf4a42ad45233c50e78bf3dc34c7233888677e477d4d8216b828457438670826\": container with ID starting with bf4a42ad45233c50e78bf3dc34c7233888677e477d4d8216b828457438670826 not found: ID does not exist" Dec 10 12:34:03 crc kubenswrapper[4689]: I1210 12:34:03.205440 4689 scope.go:117] "RemoveContainer" containerID="aa5f789598953c063960426f424b249d09b1616272a0ad9f5bbffe810541020e" Dec 10 12:34:03 crc kubenswrapper[4689]: E1210 12:34:03.206031 4689 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"aa5f789598953c063960426f424b249d09b1616272a0ad9f5bbffe810541020e\": container with ID starting with aa5f789598953c063960426f424b249d09b1616272a0ad9f5bbffe810541020e not found: ID does not exist" containerID="aa5f789598953c063960426f424b249d09b1616272a0ad9f5bbffe810541020e" Dec 10 12:34:03 crc kubenswrapper[4689]: I1210 12:34:03.206119 4689 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aa5f789598953c063960426f424b249d09b1616272a0ad9f5bbffe810541020e"} err="failed to get container status \"aa5f789598953c063960426f424b249d09b1616272a0ad9f5bbffe810541020e\": rpc error: code = NotFound desc = could not find container 
\"aa5f789598953c063960426f424b249d09b1616272a0ad9f5bbffe810541020e\": container with ID starting with aa5f789598953c063960426f424b249d09b1616272a0ad9f5bbffe810541020e not found: ID does not exist" Dec 10 12:34:03 crc kubenswrapper[4689]: I1210 12:34:03.214549 4689 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-2jtj8"] Dec 10 12:34:03 crc kubenswrapper[4689]: I1210 12:34:03.222467 4689 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-2jtj8"] Dec 10 12:34:03 crc kubenswrapper[4689]: I1210 12:34:03.454841 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/c4b54476-e438-46d8-b234-c8f661f5c26f-etc-swift\") pod \"swift-storage-0\" (UID: \"c4b54476-e438-46d8-b234-c8f661f5c26f\") " pod="openstack/swift-storage-0" Dec 10 12:34:03 crc kubenswrapper[4689]: E1210 12:34:03.455111 4689 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Dec 10 12:34:03 crc kubenswrapper[4689]: E1210 12:34:03.455142 4689 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Dec 10 12:34:03 crc kubenswrapper[4689]: E1210 12:34:03.455198 4689 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c4b54476-e438-46d8-b234-c8f661f5c26f-etc-swift podName:c4b54476-e438-46d8-b234-c8f661f5c26f nodeName:}" failed. No retries permitted until 2025-12-10 12:34:04.455180571 +0000 UTC m=+1112.243261709 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/c4b54476-e438-46d8-b234-c8f661f5c26f-etc-swift") pod "swift-storage-0" (UID: "c4b54476-e438-46d8-b234-c8f661f5c26f") : configmap "swift-ring-files" not found Dec 10 12:34:04 crc kubenswrapper[4689]: I1210 12:34:04.166176 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-xrcb2" event={"ID":"e86fae08-d350-4939-a3fb-1131e3f5e5a2","Type":"ContainerStarted","Data":"c297b8623e09a9567b18e84d35e592c9e81d4fb6f3de4116f69e39a981d64543"} Dec 10 12:34:04 crc kubenswrapper[4689]: I1210 12:34:04.166479 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-698758b865-xrcb2" Dec 10 12:34:04 crc kubenswrapper[4689]: I1210 12:34:04.184481 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-698758b865-xrcb2" podStartSLOduration=3.1844590249999998 podStartE2EDuration="3.184459025s" podCreationTimestamp="2025-12-10 12:34:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 12:34:04.183814419 +0000 UTC m=+1111.971895567" watchObservedRunningTime="2025-12-10 12:34:04.184459025 +0000 UTC m=+1111.972540163" Dec 10 12:34:04 crc kubenswrapper[4689]: I1210 12:34:04.269377 4689 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-cell1-galera-0" Dec 10 12:34:04 crc kubenswrapper[4689]: I1210 12:34:04.356240 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-cell1-galera-0" Dec 10 12:34:04 crc kubenswrapper[4689]: I1210 12:34:04.470770 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: 
\"kubernetes.io/projected/c4b54476-e438-46d8-b234-c8f661f5c26f-etc-swift\") pod \"swift-storage-0\" (UID: \"c4b54476-e438-46d8-b234-c8f661f5c26f\") " pod="openstack/swift-storage-0" Dec 10 12:34:04 crc kubenswrapper[4689]: E1210 12:34:04.471024 4689 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Dec 10 12:34:04 crc kubenswrapper[4689]: E1210 12:34:04.471042 4689 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Dec 10 12:34:04 crc kubenswrapper[4689]: E1210 12:34:04.471117 4689 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c4b54476-e438-46d8-b234-c8f661f5c26f-etc-swift podName:c4b54476-e438-46d8-b234-c8f661f5c26f nodeName:}" failed. No retries permitted until 2025-12-10 12:34:06.471073431 +0000 UTC m=+1114.259154569 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/c4b54476-e438-46d8-b234-c8f661f5c26f-etc-swift") pod "swift-storage-0" (UID: "c4b54476-e438-46d8-b234-c8f661f5c26f") : configmap "swift-ring-files" not found Dec 10 12:34:04 crc kubenswrapper[4689]: I1210 12:34:04.517383 4689 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f589775b-f52a-4aaa-b009-97dd8e324eda" path="/var/lib/kubelet/pods/f589775b-f52a-4aaa-b009-97dd8e324eda/volumes" Dec 10 12:34:05 crc kubenswrapper[4689]: I1210 12:34:05.034894 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-5b3e-account-create-update-s689x"] Dec 10 12:34:05 crc kubenswrapper[4689]: I1210 12:34:05.037459 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-5b3e-account-create-update-s689x" Dec 10 12:34:05 crc kubenswrapper[4689]: I1210 12:34:05.041053 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-db-secret" Dec 10 12:34:05 crc kubenswrapper[4689]: I1210 12:34:05.043772 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-5b3e-account-create-update-s689x"] Dec 10 12:34:05 crc kubenswrapper[4689]: I1210 12:34:05.081292 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fhtvv\" (UniqueName: \"kubernetes.io/projected/63315531-f260-4b52-ad96-ea4d24185d13-kube-api-access-fhtvv\") pod \"glance-5b3e-account-create-update-s689x\" (UID: \"63315531-f260-4b52-ad96-ea4d24185d13\") " pod="openstack/glance-5b3e-account-create-update-s689x" Dec 10 12:34:05 crc kubenswrapper[4689]: I1210 12:34:05.081404 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/63315531-f260-4b52-ad96-ea4d24185d13-operator-scripts\") pod \"glance-5b3e-account-create-update-s689x\" (UID: \"63315531-f260-4b52-ad96-ea4d24185d13\") " pod="openstack/glance-5b3e-account-create-update-s689x" Dec 10 12:34:05 crc kubenswrapper[4689]: I1210 12:34:05.086015 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-create-m9tln"] Dec 10 12:34:05 crc kubenswrapper[4689]: I1210 12:34:05.087585 4689 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-m9tln" Dec 10 12:34:05 crc kubenswrapper[4689]: I1210 12:34:05.094716 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-m9tln"] Dec 10 12:34:05 crc kubenswrapper[4689]: I1210 12:34:05.182444 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c1ed5623-54eb-4955-800d-273d08df144a-operator-scripts\") pod \"glance-db-create-m9tln\" (UID: \"c1ed5623-54eb-4955-800d-273d08df144a\") " pod="openstack/glance-db-create-m9tln" Dec 10 12:34:05 crc kubenswrapper[4689]: I1210 12:34:05.182724 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m22ql\" (UniqueName: \"kubernetes.io/projected/c1ed5623-54eb-4955-800d-273d08df144a-kube-api-access-m22ql\") pod \"glance-db-create-m9tln\" (UID: \"c1ed5623-54eb-4955-800d-273d08df144a\") " pod="openstack/glance-db-create-m9tln" Dec 10 12:34:05 crc kubenswrapper[4689]: I1210 12:34:05.182834 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fhtvv\" (UniqueName: \"kubernetes.io/projected/63315531-f260-4b52-ad96-ea4d24185d13-kube-api-access-fhtvv\") pod \"glance-5b3e-account-create-update-s689x\" (UID: \"63315531-f260-4b52-ad96-ea4d24185d13\") " pod="openstack/glance-5b3e-account-create-update-s689x" Dec 10 12:34:05 crc kubenswrapper[4689]: I1210 12:34:05.182901 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/63315531-f260-4b52-ad96-ea4d24185d13-operator-scripts\") pod \"glance-5b3e-account-create-update-s689x\" (UID: \"63315531-f260-4b52-ad96-ea4d24185d13\") " pod="openstack/glance-5b3e-account-create-update-s689x" Dec 10 12:34:05 crc kubenswrapper[4689]: I1210 12:34:05.183704 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/63315531-f260-4b52-ad96-ea4d24185d13-operator-scripts\") pod \"glance-5b3e-account-create-update-s689x\" (UID: \"63315531-f260-4b52-ad96-ea4d24185d13\") " pod="openstack/glance-5b3e-account-create-update-s689x" Dec 10 12:34:05 crc kubenswrapper[4689]: I1210 12:34:05.200721 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fhtvv\" (UniqueName: \"kubernetes.io/projected/63315531-f260-4b52-ad96-ea4d24185d13-kube-api-access-fhtvv\") pod \"glance-5b3e-account-create-update-s689x\" (UID: \"63315531-f260-4b52-ad96-ea4d24185d13\") " pod="openstack/glance-5b3e-account-create-update-s689x" Dec 10 12:34:05 crc kubenswrapper[4689]: I1210 12:34:05.284825 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c1ed5623-54eb-4955-800d-273d08df144a-operator-scripts\") pod \"glance-db-create-m9tln\" (UID: \"c1ed5623-54eb-4955-800d-273d08df144a\") " pod="openstack/glance-db-create-m9tln" Dec 10 12:34:05 crc kubenswrapper[4689]: I1210 12:34:05.284898 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m22ql\" (UniqueName: \"kubernetes.io/projected/c1ed5623-54eb-4955-800d-273d08df144a-kube-api-access-m22ql\") pod \"glance-db-create-m9tln\" (UID: \"c1ed5623-54eb-4955-800d-273d08df144a\") " pod="openstack/glance-db-create-m9tln" Dec 10 12:34:05 crc kubenswrapper[4689]: I1210 12:34:05.286331 4689 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c1ed5623-54eb-4955-800d-273d08df144a-operator-scripts\") pod \"glance-db-create-m9tln\" (UID: \"c1ed5623-54eb-4955-800d-273d08df144a\") " pod="openstack/glance-db-create-m9tln" Dec 10 12:34:05 crc kubenswrapper[4689]: I1210 12:34:05.307144 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m22ql\" (UniqueName: \"kubernetes.io/projected/c1ed5623-54eb-4955-800d-273d08df144a-kube-api-access-m22ql\") pod \"glance-db-create-m9tln\" (UID: \"c1ed5623-54eb-4955-800d-273d08df144a\") " pod="openstack/glance-db-create-m9tln" Dec 10 12:34:05 crc kubenswrapper[4689]: I1210 12:34:05.368004 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-5b3e-account-create-update-s689x" Dec 10 12:34:05 crc kubenswrapper[4689]: I1210 12:34:05.404696 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-m9tln" Dec 10 12:34:05 crc kubenswrapper[4689]: I1210 12:34:05.876055 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-5b3e-account-create-update-s689x"] Dec 10 12:34:05 crc kubenswrapper[4689]: I1210 12:34:05.983685 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-m9tln"] Dec 10 12:34:05 crc kubenswrapper[4689]: W1210 12:34:05.990533 4689 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc1ed5623_54eb_4955_800d_273d08df144a.slice/crio-54c67d045d3b4d304121cea103ff84ffb907f7b8668331151afcad181831fece WatchSource:0}: Error finding container 54c67d045d3b4d304121cea103ff84ffb907f7b8668331151afcad181831fece: Status 404 returned error can't find the container with id 54c67d045d3b4d304121cea103ff84ffb907f7b8668331151afcad181831fece Dec 10 12:34:06 crc kubenswrapper[4689]: I1210 12:34:06.186439 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-5b3e-account-create-update-s689x" event={"ID":"63315531-f260-4b52-ad96-ea4d24185d13","Type":"ContainerStarted","Data":"b0b9531bb554fe5c472d19985ae9d3e7d8db240c2e2461b36beaad6931a1333d"} Dec 10 12:34:06 crc kubenswrapper[4689]: I1210 12:34:06.186773 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-5b3e-account-create-update-s689x" event={"ID":"63315531-f260-4b52-ad96-ea4d24185d13","Type":"ContainerStarted","Data":"6b88603c39f4c9d1f92fd4c382e0facfb97962ec0d4e4d339117b8c5323200bc"} Dec 10 12:34:06 crc kubenswrapper[4689]: I1210 12:34:06.188410 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-m9tln" event={"ID":"c1ed5623-54eb-4955-800d-273d08df144a","Type":"ContainerStarted","Data":"6dc1a7557da0a3e61b7fa63fbb1d322d95962aa27ab79c266214f0b276052f2d"} Dec 10 12:34:06 crc kubenswrapper[4689]: I1210 12:34:06.188459 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-m9tln" event={"ID":"c1ed5623-54eb-4955-800d-273d08df144a","Type":"ContainerStarted","Data":"54c67d045d3b4d304121cea103ff84ffb907f7b8668331151afcad181831fece"} Dec 10 12:34:06 crc kubenswrapper[4689]: I1210 12:34:06.242718 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-5b3e-account-create-update-s689x" podStartSLOduration=1.242697481 podStartE2EDuration="1.242697481s" podCreationTimestamp="2025-12-10 12:34:05 +0000 UTC" firstStartedPulling="0001-01-01 
00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 12:34:06.21046609 +0000 UTC m=+1113.998547238" watchObservedRunningTime="2025-12-10 12:34:06.242697481 +0000 UTC m=+1114.030778619" Dec 10 12:34:06 crc kubenswrapper[4689]: I1210 12:34:06.243055 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-create-m9tln" podStartSLOduration=1.243050429 podStartE2EDuration="1.243050429s" podCreationTimestamp="2025-12-10 12:34:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 12:34:06.240352322 +0000 UTC m=+1114.028433460" watchObservedRunningTime="2025-12-10 12:34:06.243050429 +0000 UTC m=+1114.031131567" Dec 10 12:34:06 crc kubenswrapper[4689]: I1210 12:34:06.514349 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/c4b54476-e438-46d8-b234-c8f661f5c26f-etc-swift\") pod \"swift-storage-0\" (UID: \"c4b54476-e438-46d8-b234-c8f661f5c26f\") " pod="openstack/swift-storage-0" Dec 10 12:34:06 crc kubenswrapper[4689]: E1210 12:34:06.514611 4689 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Dec 10 12:34:06 crc kubenswrapper[4689]: E1210 12:34:06.514628 4689 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Dec 10 12:34:06 crc kubenswrapper[4689]: E1210 12:34:06.514677 4689 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c4b54476-e438-46d8-b234-c8f661f5c26f-etc-swift podName:c4b54476-e438-46d8-b234-c8f661f5c26f nodeName:}" failed. No retries permitted until 2025-12-10 12:34:10.514660372 +0000 UTC m=+1118.302741510 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/c4b54476-e438-46d8-b234-c8f661f5c26f-etc-swift") pod "swift-storage-0" (UID: "c4b54476-e438-46d8-b234-c8f661f5c26f") : configmap "swift-ring-files" not found Dec 10 12:34:06 crc kubenswrapper[4689]: I1210 12:34:06.673178 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-ring-rebalance-2mlkq"] Dec 10 12:34:06 crc kubenswrapper[4689]: I1210 12:34:06.674647 4689 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-2mlkq" Dec 10 12:34:06 crc kubenswrapper[4689]: I1210 12:34:06.676802 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Dec 10 12:34:06 crc kubenswrapper[4689]: I1210 12:34:06.678028 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-config-data" Dec 10 12:34:06 crc kubenswrapper[4689]: I1210 12:34:06.678253 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-scripts" Dec 10 12:34:06 crc kubenswrapper[4689]: I1210 12:34:06.693262 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-2mlkq"] Dec 10 12:34:06 crc kubenswrapper[4689]: I1210 12:34:06.730207 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/a4b24dc3-c04f-46a0-a0c0-0240d3d767cd-ring-data-devices\") pod \"swift-ring-rebalance-2mlkq\" (UID: \"a4b24dc3-c04f-46a0-a0c0-0240d3d767cd\") " pod="openstack/swift-ring-rebalance-2mlkq" Dec 10 12:34:06 crc kubenswrapper[4689]: I1210 12:34:06.730309 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/a4b24dc3-c04f-46a0-a0c0-0240d3d767cd-swiftconf\") pod \"swift-ring-rebalance-2mlkq\" (UID: \"a4b24dc3-c04f-46a0-a0c0-0240d3d767cd\") " pod="openstack/swift-ring-rebalance-2mlkq" Dec 10 12:34:06 crc kubenswrapper[4689]: I1210 12:34:06.730542 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/a4b24dc3-c04f-46a0-a0c0-0240d3d767cd-etc-swift\") pod \"swift-ring-rebalance-2mlkq\" (UID: \"a4b24dc3-c04f-46a0-a0c0-0240d3d767cd\") " pod="openstack/swift-ring-rebalance-2mlkq" Dec 10 12:34:06 crc kubenswrapper[4689]: I1210 12:34:06.730793 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rg8nj\" (UniqueName: \"kubernetes.io/projected/a4b24dc3-c04f-46a0-a0c0-0240d3d767cd-kube-api-access-rg8nj\") pod \"swift-ring-rebalance-2mlkq\" (UID: \"a4b24dc3-c04f-46a0-a0c0-0240d3d767cd\") " pod="openstack/swift-ring-rebalance-2mlkq" Dec 10 12:34:06 crc kubenswrapper[4689]: I1210 12:34:06.730942 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a4b24dc3-c04f-46a0-a0c0-0240d3d767cd-scripts\") pod \"swift-ring-rebalance-2mlkq\" (UID: \"a4b24dc3-c04f-46a0-a0c0-0240d3d767cd\") " pod="openstack/swift-ring-rebalance-2mlkq" Dec 10 12:34:06 crc kubenswrapper[4689]: I1210 12:34:06.731020 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a4b24dc3-c04f-46a0-a0c0-0240d3d767cd-combined-ca-bundle\") pod \"swift-ring-rebalance-2mlkq\" (UID: \"a4b24dc3-c04f-46a0-a0c0-0240d3d767cd\") " pod="openstack/swift-ring-rebalance-2mlkq" Dec 10 12:34:06 crc kubenswrapper[4689]: I1210 12:34:06.731075 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/a4b24dc3-c04f-46a0-a0c0-0240d3d767cd-dispersionconf\") pod \"swift-ring-rebalance-2mlkq\" (UID: \"a4b24dc3-c04f-46a0-a0c0-0240d3d767cd\") " pod="openstack/swift-ring-rebalance-2mlkq" Dec 10 
12:34:06 crc kubenswrapper[4689]: I1210 12:34:06.832742 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/a4b24dc3-c04f-46a0-a0c0-0240d3d767cd-swiftconf\") pod \"swift-ring-rebalance-2mlkq\" (UID: \"a4b24dc3-c04f-46a0-a0c0-0240d3d767cd\") " pod="openstack/swift-ring-rebalance-2mlkq" Dec 10 12:34:06 crc kubenswrapper[4689]: I1210 12:34:06.832807 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/a4b24dc3-c04f-46a0-a0c0-0240d3d767cd-etc-swift\") pod \"swift-ring-rebalance-2mlkq\" (UID: \"a4b24dc3-c04f-46a0-a0c0-0240d3d767cd\") " pod="openstack/swift-ring-rebalance-2mlkq" Dec 10 12:34:06 crc kubenswrapper[4689]: I1210 12:34:06.832857 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rg8nj\" (UniqueName: \"kubernetes.io/projected/a4b24dc3-c04f-46a0-a0c0-0240d3d767cd-kube-api-access-rg8nj\") pod \"swift-ring-rebalance-2mlkq\" (UID: \"a4b24dc3-c04f-46a0-a0c0-0240d3d767cd\") " pod="openstack/swift-ring-rebalance-2mlkq" Dec 10 12:34:06 crc kubenswrapper[4689]: I1210 12:34:06.832888 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a4b24dc3-c04f-46a0-a0c0-0240d3d767cd-scripts\") pod \"swift-ring-rebalance-2mlkq\" (UID: \"a4b24dc3-c04f-46a0-a0c0-0240d3d767cd\") " pod="openstack/swift-ring-rebalance-2mlkq" Dec 10 12:34:06 crc kubenswrapper[4689]: I1210 12:34:06.832916 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a4b24dc3-c04f-46a0-a0c0-0240d3d767cd-combined-ca-bundle\") pod \"swift-ring-rebalance-2mlkq\" (UID: \"a4b24dc3-c04f-46a0-a0c0-0240d3d767cd\") " pod="openstack/swift-ring-rebalance-2mlkq" Dec 10 12:34:06 crc kubenswrapper[4689]: I1210 12:34:06.832940 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/a4b24dc3-c04f-46a0-a0c0-0240d3d767cd-dispersionconf\") pod \"swift-ring-rebalance-2mlkq\" (UID: \"a4b24dc3-c04f-46a0-a0c0-0240d3d767cd\") " pod="openstack/swift-ring-rebalance-2mlkq" Dec 10 12:34:06 crc kubenswrapper[4689]: I1210 12:34:06.833008 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/a4b24dc3-c04f-46a0-a0c0-0240d3d767cd-ring-data-devices\") pod \"swift-ring-rebalance-2mlkq\" (UID: \"a4b24dc3-c04f-46a0-a0c0-0240d3d767cd\") " pod="openstack/swift-ring-rebalance-2mlkq" Dec 10 12:34:06 crc kubenswrapper[4689]: I1210 12:34:06.833672 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/a4b24dc3-c04f-46a0-a0c0-0240d3d767cd-etc-swift\") pod \"swift-ring-rebalance-2mlkq\" (UID: \"a4b24dc3-c04f-46a0-a0c0-0240d3d767cd\") " pod="openstack/swift-ring-rebalance-2mlkq" Dec 10 12:34:06 crc kubenswrapper[4689]: I1210 12:34:06.834237 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/a4b24dc3-c04f-46a0-a0c0-0240d3d767cd-ring-data-devices\") pod \"swift-ring-rebalance-2mlkq\" (UID: \"a4b24dc3-c04f-46a0-a0c0-0240d3d767cd\") " pod="openstack/swift-ring-rebalance-2mlkq" Dec 10 12:34:06 crc kubenswrapper[4689]: I1210 12:34:06.834813 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a4b24dc3-c04f-46a0-a0c0-0240d3d767cd-scripts\") pod \"swift-ring-rebalance-2mlkq\" (UID: \"a4b24dc3-c04f-46a0-a0c0-0240d3d767cd\") " pod="openstack/swift-ring-rebalance-2mlkq" Dec 10 12:34:06 crc kubenswrapper[4689]: I1210 12:34:06.839388 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/a4b24dc3-c04f-46a0-a0c0-0240d3d767cd-swiftconf\") pod \"swift-ring-rebalance-2mlkq\" (UID: \"a4b24dc3-c04f-46a0-a0c0-0240d3d767cd\") " pod="openstack/swift-ring-rebalance-2mlkq" Dec 10 12:34:06 crc kubenswrapper[4689]: I1210 12:34:06.839523 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/a4b24dc3-c04f-46a0-a0c0-0240d3d767cd-dispersionconf\") pod \"swift-ring-rebalance-2mlkq\" (UID: \"a4b24dc3-c04f-46a0-a0c0-0240d3d767cd\") " pod="openstack/swift-ring-rebalance-2mlkq" Dec 10 12:34:06 crc kubenswrapper[4689]: I1210 12:34:06.848471 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a4b24dc3-c04f-46a0-a0c0-0240d3d767cd-combined-ca-bundle\") pod \"swift-ring-rebalance-2mlkq\" (UID: \"a4b24dc3-c04f-46a0-a0c0-0240d3d767cd\") " pod="openstack/swift-ring-rebalance-2mlkq" Dec 10 12:34:06 crc kubenswrapper[4689]: I1210 12:34:06.862898 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rg8nj\" (UniqueName: \"kubernetes.io/projected/a4b24dc3-c04f-46a0-a0c0-0240d3d767cd-kube-api-access-rg8nj\") pod \"swift-ring-rebalance-2mlkq\" (UID: \"a4b24dc3-c04f-46a0-a0c0-0240d3d767cd\") " pod="openstack/swift-ring-rebalance-2mlkq" Dec 10 12:34:07 crc kubenswrapper[4689]: I1210 12:34:07.043744 4689 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-2mlkq" Dec 10 12:34:07 crc kubenswrapper[4689]: I1210 12:34:07.167078 4689 patch_prober.go:28] interesting pod/machine-config-daemon-db6zk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 10 12:34:07 crc kubenswrapper[4689]: I1210 12:34:07.167146 4689 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-db6zk" podUID="a41ebdcd-910f-4669-992d-296e1a92162b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 10 12:34:07 crc kubenswrapper[4689]: I1210 12:34:07.206627 4689 generic.go:334] "Generic (PLEG): container finished" podID="c1ed5623-54eb-4955-800d-273d08df144a" containerID="6dc1a7557da0a3e61b7fa63fbb1d322d95962aa27ab79c266214f0b276052f2d" exitCode=0 Dec 10 12:34:07 crc kubenswrapper[4689]: I1210 12:34:07.206899 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-m9tln" event={"ID":"c1ed5623-54eb-4955-800d-273d08df144a","Type":"ContainerDied","Data":"6dc1a7557da0a3e61b7fa63fbb1d322d95962aa27ab79c266214f0b276052f2d"} Dec 10 12:34:07 crc kubenswrapper[4689]: I1210 12:34:07.224780 4689 generic.go:334] "Generic (PLEG): container finished" podID="63315531-f260-4b52-ad96-ea4d24185d13" containerID="b0b9531bb554fe5c472d19985ae9d3e7d8db240c2e2461b36beaad6931a1333d" exitCode=0 Dec 10 12:34:07 crc kubenswrapper[4689]: I1210 12:34:07.224843 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-5b3e-account-create-update-s689x" event={"ID":"63315531-f260-4b52-ad96-ea4d24185d13","Type":"ContainerDied","Data":"b0b9531bb554fe5c472d19985ae9d3e7d8db240c2e2461b36beaad6931a1333d"} Dec 10 12:34:07 crc kubenswrapper[4689]: I1210 12:34:07.491462 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-2mlkq"] Dec 10 12:34:08 crc kubenswrapper[4689]: I1210 12:34:08.233006 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-2mlkq" event={"ID":"a4b24dc3-c04f-46a0-a0c0-0240d3d767cd","Type":"ContainerStarted","Data":"63ed84cd8ecd7f9e54d924cf6e482e82335864f6248cb4d0ddc796c09fd2dbd4"} Dec 10 12:34:08 crc kubenswrapper[4689]: I1210 12:34:08.643028 4689 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-5b3e-account-create-update-s689x" Dec 10 12:34:08 crc kubenswrapper[4689]: I1210 12:34:08.649679 4689 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-m9tln" Dec 10 12:34:08 crc kubenswrapper[4689]: I1210 12:34:08.771612 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c1ed5623-54eb-4955-800d-273d08df144a-operator-scripts\") pod \"c1ed5623-54eb-4955-800d-273d08df144a\" (UID: \"c1ed5623-54eb-4955-800d-273d08df144a\") " Dec 10 12:34:08 crc kubenswrapper[4689]: I1210 12:34:08.771664 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fhtvv\" (UniqueName: \"kubernetes.io/projected/63315531-f260-4b52-ad96-ea4d24185d13-kube-api-access-fhtvv\") pod \"63315531-f260-4b52-ad96-ea4d24185d13\" (UID: \"63315531-f260-4b52-ad96-ea4d24185d13\") " Dec 10 12:34:08 crc kubenswrapper[4689]: I1210 12:34:08.771754 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/63315531-f260-4b52-ad96-ea4d24185d13-operator-scripts\") pod \"63315531-f260-4b52-ad96-ea4d24185d13\" (UID: \"63315531-f260-4b52-ad96-ea4d24185d13\") " Dec 10 12:34:08 crc kubenswrapper[4689]: I1210 12:34:08.771794 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m22ql\" (UniqueName: \"kubernetes.io/projected/c1ed5623-54eb-4955-800d-273d08df144a-kube-api-access-m22ql\") pod \"c1ed5623-54eb-4955-800d-273d08df144a\" (UID: \"c1ed5623-54eb-4955-800d-273d08df144a\") " Dec 10 12:34:08 crc kubenswrapper[4689]: I1210 12:34:08.772551 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c1ed5623-54eb-4955-800d-273d08df144a-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "c1ed5623-54eb-4955-800d-273d08df144a" (UID: "c1ed5623-54eb-4955-800d-273d08df144a"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 12:34:08 crc kubenswrapper[4689]: I1210 12:34:08.774080 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/63315531-f260-4b52-ad96-ea4d24185d13-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "63315531-f260-4b52-ad96-ea4d24185d13" (UID: "63315531-f260-4b52-ad96-ea4d24185d13"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 12:34:08 crc kubenswrapper[4689]: I1210 12:34:08.777649 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c1ed5623-54eb-4955-800d-273d08df144a-kube-api-access-m22ql" (OuterVolumeSpecName: "kube-api-access-m22ql") pod "c1ed5623-54eb-4955-800d-273d08df144a" (UID: "c1ed5623-54eb-4955-800d-273d08df144a"). InnerVolumeSpecName "kube-api-access-m22ql". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 12:34:08 crc kubenswrapper[4689]: I1210 12:34:08.779662 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/63315531-f260-4b52-ad96-ea4d24185d13-kube-api-access-fhtvv" (OuterVolumeSpecName: "kube-api-access-fhtvv") pod "63315531-f260-4b52-ad96-ea4d24185d13" (UID: "63315531-f260-4b52-ad96-ea4d24185d13"). InnerVolumeSpecName "kube-api-access-fhtvv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 12:34:08 crc kubenswrapper[4689]: I1210 12:34:08.873393 4689 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c1ed5623-54eb-4955-800d-273d08df144a-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 10 12:34:08 crc kubenswrapper[4689]: I1210 12:34:08.873424 4689 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fhtvv\" (UniqueName: \"kubernetes.io/projected/63315531-f260-4b52-ad96-ea4d24185d13-kube-api-access-fhtvv\") on node \"crc\" DevicePath \"\"" Dec 10 12:34:08 crc kubenswrapper[4689]: I1210 12:34:08.873435 4689 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/63315531-f260-4b52-ad96-ea4d24185d13-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 10 12:34:08 crc kubenswrapper[4689]: I1210 12:34:08.873445 4689 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m22ql\" (UniqueName: \"kubernetes.io/projected/c1ed5623-54eb-4955-800d-273d08df144a-kube-api-access-m22ql\") on node \"crc\" DevicePath \"\"" Dec 10 12:34:09 crc kubenswrapper[4689]: I1210 12:34:09.248058 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-m9tln" event={"ID":"c1ed5623-54eb-4955-800d-273d08df144a","Type":"ContainerDied","Data":"54c67d045d3b4d304121cea103ff84ffb907f7b8668331151afcad181831fece"} Dec 10 12:34:09 crc kubenswrapper[4689]: I1210 12:34:09.248112 4689 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="54c67d045d3b4d304121cea103ff84ffb907f7b8668331151afcad181831fece" Dec 10 12:34:09 crc kubenswrapper[4689]: I1210 12:34:09.248112 4689 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-m9tln" Dec 10 12:34:09 crc kubenswrapper[4689]: I1210 12:34:09.251444 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-5b3e-account-create-update-s689x" event={"ID":"63315531-f260-4b52-ad96-ea4d24185d13","Type":"ContainerDied","Data":"6b88603c39f4c9d1f92fd4c382e0facfb97962ec0d4e4d339117b8c5323200bc"} Dec 10 12:34:09 crc kubenswrapper[4689]: I1210 12:34:09.251484 4689 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6b88603c39f4c9d1f92fd4c382e0facfb97962ec0d4e4d339117b8c5323200bc" Dec 10 12:34:09 crc kubenswrapper[4689]: I1210 12:34:09.251529 4689 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-5b3e-account-create-update-s689x" Dec 10 12:34:09 crc kubenswrapper[4689]: I1210 12:34:09.385872 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-create-jkpzl"] Dec 10 12:34:09 crc kubenswrapper[4689]: E1210 12:34:09.386301 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="63315531-f260-4b52-ad96-ea4d24185d13" containerName="mariadb-account-create-update" Dec 10 12:34:09 crc kubenswrapper[4689]: I1210 12:34:09.386319 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="63315531-f260-4b52-ad96-ea4d24185d13" containerName="mariadb-account-create-update" Dec 10 12:34:09 crc kubenswrapper[4689]: E1210 12:34:09.386339 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c1ed5623-54eb-4955-800d-273d08df144a" containerName="mariadb-database-create" Dec 10 12:34:09 crc kubenswrapper[4689]: I1210 12:34:09.386347 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="c1ed5623-54eb-4955-800d-273d08df144a" containerName="mariadb-database-create" Dec 10 12:34:09 crc kubenswrapper[4689]: I1210 12:34:09.386533 4689 memory_manager.go:354] "RemoveStaleState removing state" podUID="c1ed5623-54eb-4955-800d-273d08df144a" containerName="mariadb-database-create" Dec 10 12:34:09 crc kubenswrapper[4689]: I1210 12:34:09.386564 4689 memory_manager.go:354] "RemoveStaleState removing state" podUID="63315531-f260-4b52-ad96-ea4d24185d13" containerName="mariadb-account-create-update" Dec 10 12:34:09 crc kubenswrapper[4689]: I1210 12:34:09.387190 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-jkpzl" Dec 10 12:34:09 crc kubenswrapper[4689]: I1210 12:34:09.392848 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-jkpzl"] Dec 10 12:34:09 crc kubenswrapper[4689]: I1210 12:34:09.484748 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8fe98dc6-3d5b-47b7-a728-2dca96d4e4ed-operator-scripts\") pod \"keystone-db-create-jkpzl\" (UID: \"8fe98dc6-3d5b-47b7-a728-2dca96d4e4ed\") " pod="openstack/keystone-db-create-jkpzl" Dec 10 12:34:09 crc kubenswrapper[4689]: I1210 12:34:09.484854 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4qkmb\" (UniqueName: \"kubernetes.io/projected/8fe98dc6-3d5b-47b7-a728-2dca96d4e4ed-kube-api-access-4qkmb\") pod \"keystone-db-create-jkpzl\" (UID: \"8fe98dc6-3d5b-47b7-a728-2dca96d4e4ed\") " pod="openstack/keystone-db-create-jkpzl" Dec 10 12:34:09 crc kubenswrapper[4689]: I1210 12:34:09.529673 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-5bcc-account-create-update-c6df9"] Dec 10 12:34:09 crc kubenswrapper[4689]: I1210 12:34:09.531722 4689 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-5bcc-account-create-update-c6df9" Dec 10 12:34:09 crc kubenswrapper[4689]: I1210 12:34:09.534431 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret" Dec 10 12:34:09 crc kubenswrapper[4689]: I1210 12:34:09.539936 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-5bcc-account-create-update-c6df9"] Dec 10 12:34:09 crc kubenswrapper[4689]: I1210 12:34:09.586286 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4qkmb\" (UniqueName: \"kubernetes.io/projected/8fe98dc6-3d5b-47b7-a728-2dca96d4e4ed-kube-api-access-4qkmb\") pod \"keystone-db-create-jkpzl\" (UID: \"8fe98dc6-3d5b-47b7-a728-2dca96d4e4ed\") " pod="openstack/keystone-db-create-jkpzl" Dec 10 12:34:09 crc kubenswrapper[4689]: I1210 12:34:09.586343 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t6mg7\" (UniqueName: \"kubernetes.io/projected/7d4e9756-5327-4bfd-8ac6-b4bb2b08e97c-kube-api-access-t6mg7\") pod \"keystone-5bcc-account-create-update-c6df9\" (UID: \"7d4e9756-5327-4bfd-8ac6-b4bb2b08e97c\") " pod="openstack/keystone-5bcc-account-create-update-c6df9" Dec 10 12:34:09 crc kubenswrapper[4689]: I1210 12:34:09.586412 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8fe98dc6-3d5b-47b7-a728-2dca96d4e4ed-operator-scripts\") pod \"keystone-db-create-jkpzl\" (UID: \"8fe98dc6-3d5b-47b7-a728-2dca96d4e4ed\") " pod="openstack/keystone-db-create-jkpzl" Dec 10 12:34:09 crc kubenswrapper[4689]: I1210 12:34:09.586459 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7d4e9756-5327-4bfd-8ac6-b4bb2b08e97c-operator-scripts\") pod \"keystone-5bcc-account-create-update-c6df9\" (UID: \"7d4e9756-5327-4bfd-8ac6-b4bb2b08e97c\") " pod="openstack/keystone-5bcc-account-create-update-c6df9" Dec 10 12:34:09 crc kubenswrapper[4689]: I1210 12:34:09.587219 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8fe98dc6-3d5b-47b7-a728-2dca96d4e4ed-operator-scripts\") pod \"keystone-db-create-jkpzl\" (UID: \"8fe98dc6-3d5b-47b7-a728-2dca96d4e4ed\") " pod="openstack/keystone-db-create-jkpzl" Dec 10 12:34:09 crc kubenswrapper[4689]: I1210 12:34:09.607096 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4qkmb\" (UniqueName: \"kubernetes.io/projected/8fe98dc6-3d5b-47b7-a728-2dca96d4e4ed-kube-api-access-4qkmb\") pod \"keystone-db-create-jkpzl\" (UID: \"8fe98dc6-3d5b-47b7-a728-2dca96d4e4ed\") " pod="openstack/keystone-db-create-jkpzl" Dec 10 12:34:09 crc kubenswrapper[4689]: I1210 12:34:09.687657 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t6mg7\" (UniqueName: \"kubernetes.io/projected/7d4e9756-5327-4bfd-8ac6-b4bb2b08e97c-kube-api-access-t6mg7\") pod \"keystone-5bcc-account-create-update-c6df9\" (UID: \"7d4e9756-5327-4bfd-8ac6-b4bb2b08e97c\") " pod="openstack/keystone-5bcc-account-create-update-c6df9" Dec 10 12:34:09 crc kubenswrapper[4689]: I1210 12:34:09.688072 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7d4e9756-5327-4bfd-8ac6-b4bb2b08e97c-operator-scripts\") pod 
\"keystone-5bcc-account-create-update-c6df9\" (UID: \"7d4e9756-5327-4bfd-8ac6-b4bb2b08e97c\") " pod="openstack/keystone-5bcc-account-create-update-c6df9" Dec 10 12:34:09 crc kubenswrapper[4689]: I1210 12:34:09.689230 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7d4e9756-5327-4bfd-8ac6-b4bb2b08e97c-operator-scripts\") pod \"keystone-5bcc-account-create-update-c6df9\" (UID: \"7d4e9756-5327-4bfd-8ac6-b4bb2b08e97c\") " pod="openstack/keystone-5bcc-account-create-update-c6df9" Dec 10 12:34:09 crc kubenswrapper[4689]: I1210 12:34:09.706763 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t6mg7\" (UniqueName: \"kubernetes.io/projected/7d4e9756-5327-4bfd-8ac6-b4bb2b08e97c-kube-api-access-t6mg7\") pod \"keystone-5bcc-account-create-update-c6df9\" (UID: \"7d4e9756-5327-4bfd-8ac6-b4bb2b08e97c\") " pod="openstack/keystone-5bcc-account-create-update-c6df9" Dec 10 12:34:09 crc kubenswrapper[4689]: I1210 12:34:09.715244 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-jkpzl" Dec 10 12:34:09 crc kubenswrapper[4689]: I1210 12:34:09.725317 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-create-v4mw7"] Dec 10 12:34:09 crc kubenswrapper[4689]: I1210 12:34:09.728604 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-v4mw7" Dec 10 12:34:09 crc kubenswrapper[4689]: I1210 12:34:09.738408 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-78a1-account-create-update-5l2cc"] Dec 10 12:34:09 crc kubenswrapper[4689]: I1210 12:34:09.739628 4689 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-78a1-account-create-update-5l2cc" Dec 10 12:34:09 crc kubenswrapper[4689]: I1210 12:34:09.741082 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-db-secret" Dec 10 12:34:09 crc kubenswrapper[4689]: I1210 12:34:09.746765 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-v4mw7"] Dec 10 12:34:09 crc kubenswrapper[4689]: I1210 12:34:09.762109 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-78a1-account-create-update-5l2cc"] Dec 10 12:34:09 crc kubenswrapper[4689]: I1210 12:34:09.790055 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6369f699-9491-404b-beab-9bb964b73037-operator-scripts\") pod \"placement-78a1-account-create-update-5l2cc\" (UID: \"6369f699-9491-404b-beab-9bb964b73037\") " pod="openstack/placement-78a1-account-create-update-5l2cc" Dec 10 12:34:09 crc kubenswrapper[4689]: I1210 12:34:09.790224 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v2ksp\" (UniqueName: \"kubernetes.io/projected/6dcd6604-d6d4-4147-ab88-eefb780a33b4-kube-api-access-v2ksp\") pod \"placement-db-create-v4mw7\" (UID: \"6dcd6604-d6d4-4147-ab88-eefb780a33b4\") " pod="openstack/placement-db-create-v4mw7" Dec 10 12:34:09 crc kubenswrapper[4689]: I1210 12:34:09.790479 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6dcd6604-d6d4-4147-ab88-eefb780a33b4-operator-scripts\") pod \"placement-db-create-v4mw7\" (UID: \"6dcd6604-d6d4-4147-ab88-eefb780a33b4\") " pod="openstack/placement-db-create-v4mw7" Dec 10 12:34:09 crc kubenswrapper[4689]: I1210 12:34:09.790665 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8522t\" (UniqueName: \"kubernetes.io/projected/6369f699-9491-404b-beab-9bb964b73037-kube-api-access-8522t\") pod \"placement-78a1-account-create-update-5l2cc\" (UID: \"6369f699-9491-404b-beab-9bb964b73037\") " pod="openstack/placement-78a1-account-create-update-5l2cc" Dec 10 12:34:09 crc kubenswrapper[4689]: I1210 12:34:09.849512 4689 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-5bcc-account-create-update-c6df9" Dec 10 12:34:09 crc kubenswrapper[4689]: I1210 12:34:09.892659 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6dcd6604-d6d4-4147-ab88-eefb780a33b4-operator-scripts\") pod \"placement-db-create-v4mw7\" (UID: \"6dcd6604-d6d4-4147-ab88-eefb780a33b4\") " pod="openstack/placement-db-create-v4mw7" Dec 10 12:34:09 crc kubenswrapper[4689]: I1210 12:34:09.892746 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8522t\" (UniqueName: \"kubernetes.io/projected/6369f699-9491-404b-beab-9bb964b73037-kube-api-access-8522t\") pod \"placement-78a1-account-create-update-5l2cc\" (UID: \"6369f699-9491-404b-beab-9bb964b73037\") " pod="openstack/placement-78a1-account-create-update-5l2cc" Dec 10 12:34:09 crc kubenswrapper[4689]: I1210 12:34:09.892848 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6369f699-9491-404b-beab-9bb964b73037-operator-scripts\") pod \"placement-78a1-account-create-update-5l2cc\" (UID: \"6369f699-9491-404b-beab-9bb964b73037\") " pod="openstack/placement-78a1-account-create-update-5l2cc" Dec 10 12:34:09 crc kubenswrapper[4689]: I1210 12:34:09.892901 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v2ksp\" (UniqueName: \"kubernetes.io/projected/6dcd6604-d6d4-4147-ab88-eefb780a33b4-kube-api-access-v2ksp\") pod \"placement-db-create-v4mw7\" (UID: \"6dcd6604-d6d4-4147-ab88-eefb780a33b4\") " pod="openstack/placement-db-create-v4mw7" Dec 10 12:34:09 crc kubenswrapper[4689]: I1210 12:34:09.893472 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6dcd6604-d6d4-4147-ab88-eefb780a33b4-operator-scripts\") pod \"placement-db-create-v4mw7\" (UID: \"6dcd6604-d6d4-4147-ab88-eefb780a33b4\") " pod="openstack/placement-db-create-v4mw7" Dec 10 12:34:09 crc kubenswrapper[4689]: I1210 12:34:09.893719 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6369f699-9491-404b-beab-9bb964b73037-operator-scripts\") pod \"placement-78a1-account-create-update-5l2cc\" (UID: \"6369f699-9491-404b-beab-9bb964b73037\") " pod="openstack/placement-78a1-account-create-update-5l2cc" Dec 10 12:34:09 crc kubenswrapper[4689]: I1210 12:34:09.909132 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v2ksp\" (UniqueName: \"kubernetes.io/projected/6dcd6604-d6d4-4147-ab88-eefb780a33b4-kube-api-access-v2ksp\") pod \"placement-db-create-v4mw7\" (UID: \"6dcd6604-d6d4-4147-ab88-eefb780a33b4\") " pod="openstack/placement-db-create-v4mw7" Dec 10 12:34:09 crc kubenswrapper[4689]: I1210 12:34:09.927577 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8522t\" (UniqueName: \"kubernetes.io/projected/6369f699-9491-404b-beab-9bb964b73037-kube-api-access-8522t\") pod \"placement-78a1-account-create-update-5l2cc\" (UID: \"6369f699-9491-404b-beab-9bb964b73037\") " pod="openstack/placement-78a1-account-create-update-5l2cc" Dec 10 12:34:10 crc kubenswrapper[4689]: I1210 12:34:10.068347 4689 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-v4mw7" Dec 10 12:34:10 crc kubenswrapper[4689]: I1210 12:34:10.076735 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-78a1-account-create-update-5l2cc" Dec 10 12:34:10 crc kubenswrapper[4689]: I1210 12:34:10.276671 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-sync-44zkc"] Dec 10 12:34:10 crc kubenswrapper[4689]: I1210 12:34:10.277906 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-44zkc" Dec 10 12:34:10 crc kubenswrapper[4689]: I1210 12:34:10.285869 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-drvxw" Dec 10 12:34:10 crc kubenswrapper[4689]: I1210 12:34:10.286081 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-config-data" Dec 10 12:34:10 crc kubenswrapper[4689]: I1210 12:34:10.309485 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-44zkc"] Dec 10 12:34:10 crc kubenswrapper[4689]: I1210 12:34:10.404244 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/18511d72-4b0d-401d-aa20-6cbf2b26abc6-combined-ca-bundle\") pod \"glance-db-sync-44zkc\" (UID: \"18511d72-4b0d-401d-aa20-6cbf2b26abc6\") " pod="openstack/glance-db-sync-44zkc" Dec 10 12:34:10 crc kubenswrapper[4689]: I1210 12:34:10.404294 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fwl6s\" (UniqueName: \"kubernetes.io/projected/18511d72-4b0d-401d-aa20-6cbf2b26abc6-kube-api-access-fwl6s\") pod \"glance-db-sync-44zkc\" (UID: \"18511d72-4b0d-401d-aa20-6cbf2b26abc6\") " pod="openstack/glance-db-sync-44zkc" Dec 10 12:34:10 crc kubenswrapper[4689]: I1210 12:34:10.404375 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/18511d72-4b0d-401d-aa20-6cbf2b26abc6-config-data\") pod \"glance-db-sync-44zkc\" (UID: \"18511d72-4b0d-401d-aa20-6cbf2b26abc6\") " pod="openstack/glance-db-sync-44zkc" Dec 10 12:34:10 crc kubenswrapper[4689]: I1210 12:34:10.404442 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/18511d72-4b0d-401d-aa20-6cbf2b26abc6-db-sync-config-data\") pod \"glance-db-sync-44zkc\" (UID: \"18511d72-4b0d-401d-aa20-6cbf2b26abc6\") " pod="openstack/glance-db-sync-44zkc" Dec 10 12:34:10 crc kubenswrapper[4689]: I1210 12:34:10.506587 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/18511d72-4b0d-401d-aa20-6cbf2b26abc6-db-sync-config-data\") pod \"glance-db-sync-44zkc\" (UID: \"18511d72-4b0d-401d-aa20-6cbf2b26abc6\") " pod="openstack/glance-db-sync-44zkc" Dec 10 12:34:10 crc kubenswrapper[4689]: I1210 12:34:10.506672 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/18511d72-4b0d-401d-aa20-6cbf2b26abc6-combined-ca-bundle\") pod \"glance-db-sync-44zkc\" (UID: \"18511d72-4b0d-401d-aa20-6cbf2b26abc6\") " pod="openstack/glance-db-sync-44zkc" Dec 10 12:34:10 crc kubenswrapper[4689]: I1210 12:34:10.506700 4689 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"kube-api-access-fwl6s\" (UniqueName: \"kubernetes.io/projected/18511d72-4b0d-401d-aa20-6cbf2b26abc6-kube-api-access-fwl6s\") pod \"glance-db-sync-44zkc\" (UID: \"18511d72-4b0d-401d-aa20-6cbf2b26abc6\") " pod="openstack/glance-db-sync-44zkc" Dec 10 12:34:10 crc kubenswrapper[4689]: I1210 12:34:10.506772 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/18511d72-4b0d-401d-aa20-6cbf2b26abc6-config-data\") pod \"glance-db-sync-44zkc\" (UID: \"18511d72-4b0d-401d-aa20-6cbf2b26abc6\") " pod="openstack/glance-db-sync-44zkc" Dec 10 12:34:10 crc kubenswrapper[4689]: I1210 12:34:10.512717 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/18511d72-4b0d-401d-aa20-6cbf2b26abc6-db-sync-config-data\") pod \"glance-db-sync-44zkc\" (UID: \"18511d72-4b0d-401d-aa20-6cbf2b26abc6\") " pod="openstack/glance-db-sync-44zkc" Dec 10 12:34:10 crc kubenswrapper[4689]: I1210 12:34:10.512748 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/18511d72-4b0d-401d-aa20-6cbf2b26abc6-config-data\") pod \"glance-db-sync-44zkc\" (UID: \"18511d72-4b0d-401d-aa20-6cbf2b26abc6\") " pod="openstack/glance-db-sync-44zkc" Dec 10 12:34:10 crc kubenswrapper[4689]: I1210 12:34:10.524951 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/18511d72-4b0d-401d-aa20-6cbf2b26abc6-combined-ca-bundle\") pod \"glance-db-sync-44zkc\" (UID: \"18511d72-4b0d-401d-aa20-6cbf2b26abc6\") " pod="openstack/glance-db-sync-44zkc" Dec 10 12:34:10 crc kubenswrapper[4689]: I1210 12:34:10.525430 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fwl6s\" (UniqueName: \"kubernetes.io/projected/18511d72-4b0d-401d-aa20-6cbf2b26abc6-kube-api-access-fwl6s\") pod \"glance-db-sync-44zkc\" (UID: \"18511d72-4b0d-401d-aa20-6cbf2b26abc6\") " pod="openstack/glance-db-sync-44zkc" Dec 10 12:34:10 crc kubenswrapper[4689]: I1210 12:34:10.598926 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-44zkc" Dec 10 12:34:10 crc kubenswrapper[4689]: I1210 12:34:10.608571 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/c4b54476-e438-46d8-b234-c8f661f5c26f-etc-swift\") pod \"swift-storage-0\" (UID: \"c4b54476-e438-46d8-b234-c8f661f5c26f\") " pod="openstack/swift-storage-0" Dec 10 12:34:10 crc kubenswrapper[4689]: E1210 12:34:10.608768 4689 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Dec 10 12:34:10 crc kubenswrapper[4689]: E1210 12:34:10.608796 4689 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Dec 10 12:34:10 crc kubenswrapper[4689]: E1210 12:34:10.608866 4689 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c4b54476-e438-46d8-b234-c8f661f5c26f-etc-swift podName:c4b54476-e438-46d8-b234-c8f661f5c26f nodeName:}" failed. No retries permitted until 2025-12-10 12:34:18.60884644 +0000 UTC m=+1126.396927658 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/c4b54476-e438-46d8-b234-c8f661f5c26f-etc-swift") pod "swift-storage-0" (UID: "c4b54476-e438-46d8-b234-c8f661f5c26f") : configmap "swift-ring-files" not found Dec 10 12:34:11 crc kubenswrapper[4689]: I1210 12:34:11.767243 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-jkpzl"] Dec 10 12:34:11 crc kubenswrapper[4689]: I1210 12:34:11.791993 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-v4mw7"] Dec 10 12:34:11 crc kubenswrapper[4689]: I1210 12:34:11.794195 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-northd-0" Dec 10 12:34:11 crc kubenswrapper[4689]: I1210 12:34:11.925646 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-78a1-account-create-update-5l2cc"] Dec 10 12:34:11 crc kubenswrapper[4689]: I1210 12:34:11.940119 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-5bcc-account-create-update-c6df9"] Dec 10 12:34:11 crc kubenswrapper[4689]: W1210 12:34:11.943322 4689 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6369f699_9491_404b_beab_9bb964b73037.slice/crio-b52cce32ff43f4b1e95646de9647a79f353eed1caf3cbb4c212d161ecaa86fd7 WatchSource:0}: Error finding container b52cce32ff43f4b1e95646de9647a79f353eed1caf3cbb4c212d161ecaa86fd7: Status 404 returned error can't find the container with id b52cce32ff43f4b1e95646de9647a79f353eed1caf3cbb4c212d161ecaa86fd7 Dec 10 12:34:11 crc kubenswrapper[4689]: I1210 12:34:11.969185 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-698758b865-xrcb2" Dec 10 12:34:12 crc kubenswrapper[4689]: I1210 12:34:12.057336 4689 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-69fpd"] Dec 10 12:34:12 crc kubenswrapper[4689]: I1210 12:34:12.058044 4689 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-57d769cc4f-69fpd" podUID="7934f7f2-2d11-43a1-8a79-002d383f8c34" containerName="dnsmasq-dns" containerID="cri-o://c3994f58b65480670016581afb3d2142a0ab48047fb7a63af7eb2ad7c643d74c" gracePeriod=10 Dec 10 12:34:12 crc kubenswrapper[4689]: I1210 12:34:12.095914 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-44zkc"] Dec 10 12:34:12 crc kubenswrapper[4689]: I1210 12:34:12.279779 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-2mlkq" event={"ID":"a4b24dc3-c04f-46a0-a0c0-0240d3d767cd","Type":"ContainerStarted","Data":"48777b0738c13c4c24711f6972bd750fc3be1d1640f8c54bf98bd7507acca621"} Dec 10 12:34:12 crc kubenswrapper[4689]: I1210 12:34:12.297079 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-jkpzl" event={"ID":"8fe98dc6-3d5b-47b7-a728-2dca96d4e4ed","Type":"ContainerStarted","Data":"24701f6ac8b863177de073f38488473596a039970623ed8cc5a548dcf2680c48"} Dec 10 12:34:12 crc kubenswrapper[4689]: I1210 12:34:12.297120 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-jkpzl" event={"ID":"8fe98dc6-3d5b-47b7-a728-2dca96d4e4ed","Type":"ContainerStarted","Data":"5081b54462f653f8ac71e126d0e3ea12e58840adfe0b6ad2782e7bef5226162d"} Dec 10 12:34:12 crc kubenswrapper[4689]: I1210 12:34:12.299390 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/placement-78a1-account-create-update-5l2cc" event={"ID":"6369f699-9491-404b-beab-9bb964b73037","Type":"ContainerStarted","Data":"7066a13b2bd8f18a60b91cb716c51d10f236e4d4f171341e012ff7beacce10d0"} Dec 10 12:34:12 crc kubenswrapper[4689]: I1210 12:34:12.299430 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-78a1-account-create-update-5l2cc" event={"ID":"6369f699-9491-404b-beab-9bb964b73037","Type":"ContainerStarted","Data":"b52cce32ff43f4b1e95646de9647a79f353eed1caf3cbb4c212d161ecaa86fd7"} Dec 10 12:34:12 crc kubenswrapper[4689]: I1210 12:34:12.307735 4689 generic.go:334] "Generic (PLEG): container finished" podID="7934f7f2-2d11-43a1-8a79-002d383f8c34" containerID="c3994f58b65480670016581afb3d2142a0ab48047fb7a63af7eb2ad7c643d74c" exitCode=0 Dec 10 12:34:12 crc kubenswrapper[4689]: I1210 12:34:12.307836 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-69fpd" event={"ID":"7934f7f2-2d11-43a1-8a79-002d383f8c34","Type":"ContainerDied","Data":"c3994f58b65480670016581afb3d2142a0ab48047fb7a63af7eb2ad7c643d74c"} Dec 10 12:34:12 crc kubenswrapper[4689]: I1210 12:34:12.311099 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-ring-rebalance-2mlkq" podStartSLOduration=2.591390146 podStartE2EDuration="6.311079558s" podCreationTimestamp="2025-12-10 12:34:06 +0000 UTC" firstStartedPulling="2025-12-10 12:34:07.49910399 +0000 UTC m=+1115.287185128" lastFinishedPulling="2025-12-10 12:34:11.218793402 +0000 UTC m=+1119.006874540" observedRunningTime="2025-12-10 12:34:12.301741825 +0000 UTC m=+1120.089822963" watchObservedRunningTime="2025-12-10 12:34:12.311079558 +0000 UTC m=+1120.099160696" Dec 10 12:34:12 crc kubenswrapper[4689]: I1210 12:34:12.312915 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-44zkc" event={"ID":"18511d72-4b0d-401d-aa20-6cbf2b26abc6","Type":"ContainerStarted","Data":"ad3101834194c8315d67eff09834c9cbf91b80fbd38b4672febc3e23d079fdfb"} Dec 10 12:34:12 crc kubenswrapper[4689]: I1210 12:34:12.318452 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-78a1-account-create-update-5l2cc" podStartSLOduration=3.31843227 podStartE2EDuration="3.31843227s" podCreationTimestamp="2025-12-10 12:34:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 12:34:12.315452676 +0000 UTC m=+1120.103533814" watchObservedRunningTime="2025-12-10 12:34:12.31843227 +0000 UTC m=+1120.106513408" Dec 10 12:34:12 crc kubenswrapper[4689]: I1210 12:34:12.324359 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-5bcc-account-create-update-c6df9" event={"ID":"7d4e9756-5327-4bfd-8ac6-b4bb2b08e97c","Type":"ContainerStarted","Data":"d015a13979549958c203081f1fbd0743fdca07f838d191a9650aa288e63bf049"} Dec 10 12:34:12 crc kubenswrapper[4689]: I1210 12:34:12.324408 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-5bcc-account-create-update-c6df9" event={"ID":"7d4e9756-5327-4bfd-8ac6-b4bb2b08e97c","Type":"ContainerStarted","Data":"8bf524dcbe7fa1d6af330c4aec4c30042d7ad62cb12d4748f927488785e70faa"} Dec 10 12:34:12 crc kubenswrapper[4689]: I1210 12:34:12.332534 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-create-jkpzl" podStartSLOduration=3.332515179 podStartE2EDuration="3.332515179s" podCreationTimestamp="2025-12-10 
12:34:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 12:34:12.331872054 +0000 UTC m=+1120.119953192" watchObservedRunningTime="2025-12-10 12:34:12.332515179 +0000 UTC m=+1120.120596317" Dec 10 12:34:12 crc kubenswrapper[4689]: I1210 12:34:12.334114 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-v4mw7" event={"ID":"6dcd6604-d6d4-4147-ab88-eefb780a33b4","Type":"ContainerStarted","Data":"634bc2a679f5fc4e6cb4c88d24e3c29578e9708775c4804f6b82f1d2676f2e78"} Dec 10 12:34:12 crc kubenswrapper[4689]: I1210 12:34:12.334164 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-v4mw7" event={"ID":"6dcd6604-d6d4-4147-ab88-eefb780a33b4","Type":"ContainerStarted","Data":"41916ad6c6c534005ee1c766cc567e87dd5c6188d5c8e2c5f29ff56b37b52c88"} Dec 10 12:34:12 crc kubenswrapper[4689]: I1210 12:34:12.357362 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-5bcc-account-create-update-c6df9" podStartSLOduration=3.357344466 podStartE2EDuration="3.357344466s" podCreationTimestamp="2025-12-10 12:34:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 12:34:12.348844435 +0000 UTC m=+1120.136925573" watchObservedRunningTime="2025-12-10 12:34:12.357344466 +0000 UTC m=+1120.145425594" Dec 10 12:34:12 crc kubenswrapper[4689]: I1210 12:34:12.370931 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-create-v4mw7" podStartSLOduration=3.370912023 podStartE2EDuration="3.370912023s" podCreationTimestamp="2025-12-10 12:34:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 12:34:12.364952465 +0000 UTC m=+1120.153033603" watchObservedRunningTime="2025-12-10 12:34:12.370912023 +0000 UTC m=+1120.158993161" Dec 10 12:34:12 crc kubenswrapper[4689]: I1210 12:34:12.683923 4689 util.go:48] "No ready sandbox for pod can be found. 
Dec 10 12:34:12 crc kubenswrapper[4689]: I1210 12:34:12.755268 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7934f7f2-2d11-43a1-8a79-002d383f8c34-dns-svc\") pod \"7934f7f2-2d11-43a1-8a79-002d383f8c34\" (UID: \"7934f7f2-2d11-43a1-8a79-002d383f8c34\") "
Dec 10 12:34:12 crc kubenswrapper[4689]: I1210 12:34:12.755371 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gw972\" (UniqueName: \"kubernetes.io/projected/7934f7f2-2d11-43a1-8a79-002d383f8c34-kube-api-access-gw972\") pod \"7934f7f2-2d11-43a1-8a79-002d383f8c34\" (UID: \"7934f7f2-2d11-43a1-8a79-002d383f8c34\") "
Dec 10 12:34:12 crc kubenswrapper[4689]: I1210 12:34:12.755438 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7934f7f2-2d11-43a1-8a79-002d383f8c34-config\") pod \"7934f7f2-2d11-43a1-8a79-002d383f8c34\" (UID: \"7934f7f2-2d11-43a1-8a79-002d383f8c34\") "
Dec 10 12:34:12 crc kubenswrapper[4689]: I1210 12:34:12.763508 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7934f7f2-2d11-43a1-8a79-002d383f8c34-kube-api-access-gw972" (OuterVolumeSpecName: "kube-api-access-gw972") pod "7934f7f2-2d11-43a1-8a79-002d383f8c34" (UID: "7934f7f2-2d11-43a1-8a79-002d383f8c34"). InnerVolumeSpecName "kube-api-access-gw972". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 10 12:34:12 crc kubenswrapper[4689]: I1210 12:34:12.809324 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7934f7f2-2d11-43a1-8a79-002d383f8c34-config" (OuterVolumeSpecName: "config") pod "7934f7f2-2d11-43a1-8a79-002d383f8c34" (UID: "7934f7f2-2d11-43a1-8a79-002d383f8c34"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 10 12:34:12 crc kubenswrapper[4689]: I1210 12:34:12.823922 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7934f7f2-2d11-43a1-8a79-002d383f8c34-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "7934f7f2-2d11-43a1-8a79-002d383f8c34" (UID: "7934f7f2-2d11-43a1-8a79-002d383f8c34"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 10 12:34:12 crc kubenswrapper[4689]: I1210 12:34:12.861244 4689 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7934f7f2-2d11-43a1-8a79-002d383f8c34-dns-svc\") on node \"crc\" DevicePath \"\""
Dec 10 12:34:12 crc kubenswrapper[4689]: I1210 12:34:12.861281 4689 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gw972\" (UniqueName: \"kubernetes.io/projected/7934f7f2-2d11-43a1-8a79-002d383f8c34-kube-api-access-gw972\") on node \"crc\" DevicePath \"\""
Dec 10 12:34:12 crc kubenswrapper[4689]: I1210 12:34:12.861292 4689 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7934f7f2-2d11-43a1-8a79-002d383f8c34-config\") on node \"crc\" DevicePath \"\""
Dec 10 12:34:13 crc kubenswrapper[4689]: I1210 12:34:13.343719 4689 generic.go:334] "Generic (PLEG): container finished" podID="8fe98dc6-3d5b-47b7-a728-2dca96d4e4ed" containerID="24701f6ac8b863177de073f38488473596a039970623ed8cc5a548dcf2680c48" exitCode=0
Dec 10 12:34:13 crc kubenswrapper[4689]: I1210 12:34:13.343793 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-jkpzl" event={"ID":"8fe98dc6-3d5b-47b7-a728-2dca96d4e4ed","Type":"ContainerDied","Data":"24701f6ac8b863177de073f38488473596a039970623ed8cc5a548dcf2680c48"}
Dec 10 12:34:13 crc kubenswrapper[4689]: I1210 12:34:13.345780 4689 generic.go:334] "Generic (PLEG): container finished" podID="6369f699-9491-404b-beab-9bb964b73037" containerID="7066a13b2bd8f18a60b91cb716c51d10f236e4d4f171341e012ff7beacce10d0" exitCode=0
Dec 10 12:34:13 crc kubenswrapper[4689]: I1210 12:34:13.345850 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-78a1-account-create-update-5l2cc" event={"ID":"6369f699-9491-404b-beab-9bb964b73037","Type":"ContainerDied","Data":"7066a13b2bd8f18a60b91cb716c51d10f236e4d4f171341e012ff7beacce10d0"}
Dec 10 12:34:13 crc kubenswrapper[4689]: I1210 12:34:13.347585 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-69fpd" event={"ID":"7934f7f2-2d11-43a1-8a79-002d383f8c34","Type":"ContainerDied","Data":"e9db4cfd3079d04c68cd62a76fee585cac44f5d23f04b3688374f9cb471f0ef6"}
Dec 10 12:34:13 crc kubenswrapper[4689]: I1210 12:34:13.347615 4689 scope.go:117] "RemoveContainer" containerID="c3994f58b65480670016581afb3d2142a0ab48047fb7a63af7eb2ad7c643d74c"
Dec 10 12:34:13 crc kubenswrapper[4689]: I1210 12:34:13.347692 4689 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-69fpd"
Dec 10 12:34:13 crc kubenswrapper[4689]: I1210 12:34:13.353390 4689 generic.go:334] "Generic (PLEG): container finished" podID="7d4e9756-5327-4bfd-8ac6-b4bb2b08e97c" containerID="d015a13979549958c203081f1fbd0743fdca07f838d191a9650aa288e63bf049" exitCode=0
Dec 10 12:34:13 crc kubenswrapper[4689]: I1210 12:34:13.353417 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-5bcc-account-create-update-c6df9" event={"ID":"7d4e9756-5327-4bfd-8ac6-b4bb2b08e97c","Type":"ContainerDied","Data":"d015a13979549958c203081f1fbd0743fdca07f838d191a9650aa288e63bf049"}
Dec 10 12:34:13 crc kubenswrapper[4689]: I1210 12:34:13.355633 4689 generic.go:334] "Generic (PLEG): container finished" podID="6dcd6604-d6d4-4147-ab88-eefb780a33b4" containerID="634bc2a679f5fc4e6cb4c88d24e3c29578e9708775c4804f6b82f1d2676f2e78" exitCode=0
Dec 10 12:34:13 crc kubenswrapper[4689]: I1210 12:34:13.356611 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-v4mw7" event={"ID":"6dcd6604-d6d4-4147-ab88-eefb780a33b4","Type":"ContainerDied","Data":"634bc2a679f5fc4e6cb4c88d24e3c29578e9708775c4804f6b82f1d2676f2e78"}
Dec 10 12:34:13 crc kubenswrapper[4689]: I1210 12:34:13.374113 4689 scope.go:117] "RemoveContainer" containerID="47ffb9dca172c946792a6b459bc7d8d1499c53fd7b8e537a77c3b24fa8e7adbd"
Dec 10 12:34:13 crc kubenswrapper[4689]: I1210 12:34:13.436432 4689 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-69fpd"]
Dec 10 12:34:13 crc kubenswrapper[4689]: I1210 12:34:13.445661 4689 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-69fpd"]
Dec 10 12:34:14 crc kubenswrapper[4689]: I1210 12:34:14.530417 4689 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7934f7f2-2d11-43a1-8a79-002d383f8c34" path="/var/lib/kubelet/pods/7934f7f2-2d11-43a1-8a79-002d383f8c34/volumes"
Dec 10 12:34:14 crc kubenswrapper[4689]: I1210 12:34:14.765915 4689 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-v4mw7"
Dec 10 12:34:14 crc kubenswrapper[4689]: I1210 12:34:14.933559 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v2ksp\" (UniqueName: \"kubernetes.io/projected/6dcd6604-d6d4-4147-ab88-eefb780a33b4-kube-api-access-v2ksp\") pod \"6dcd6604-d6d4-4147-ab88-eefb780a33b4\" (UID: \"6dcd6604-d6d4-4147-ab88-eefb780a33b4\") "
Dec 10 12:34:14 crc kubenswrapper[4689]: I1210 12:34:14.933610 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6dcd6604-d6d4-4147-ab88-eefb780a33b4-operator-scripts\") pod \"6dcd6604-d6d4-4147-ab88-eefb780a33b4\" (UID: \"6dcd6604-d6d4-4147-ab88-eefb780a33b4\") "
Dec 10 12:34:14 crc kubenswrapper[4689]: I1210 12:34:14.938147 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6dcd6604-d6d4-4147-ab88-eefb780a33b4-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "6dcd6604-d6d4-4147-ab88-eefb780a33b4" (UID: "6dcd6604-d6d4-4147-ab88-eefb780a33b4"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 10 12:34:14 crc kubenswrapper[4689]: I1210 12:34:14.942361 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6dcd6604-d6d4-4147-ab88-eefb780a33b4-kube-api-access-v2ksp" (OuterVolumeSpecName: "kube-api-access-v2ksp") pod "6dcd6604-d6d4-4147-ab88-eefb780a33b4" (UID: "6dcd6604-d6d4-4147-ab88-eefb780a33b4"). InnerVolumeSpecName "kube-api-access-v2ksp". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 10 12:34:14 crc kubenswrapper[4689]: I1210 12:34:14.948527 4689 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-5bcc-account-create-update-c6df9"
Dec 10 12:34:14 crc kubenswrapper[4689]: I1210 12:34:14.979138 4689 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-78a1-account-create-update-5l2cc"
Dec 10 12:34:15 crc kubenswrapper[4689]: I1210 12:34:15.035263 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t6mg7\" (UniqueName: \"kubernetes.io/projected/7d4e9756-5327-4bfd-8ac6-b4bb2b08e97c-kube-api-access-t6mg7\") pod \"7d4e9756-5327-4bfd-8ac6-b4bb2b08e97c\" (UID: \"7d4e9756-5327-4bfd-8ac6-b4bb2b08e97c\") "
Dec 10 12:34:15 crc kubenswrapper[4689]: I1210 12:34:15.036215 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7d4e9756-5327-4bfd-8ac6-b4bb2b08e97c-operator-scripts\") pod \"7d4e9756-5327-4bfd-8ac6-b4bb2b08e97c\" (UID: \"7d4e9756-5327-4bfd-8ac6-b4bb2b08e97c\") "
Dec 10 12:34:15 crc kubenswrapper[4689]: I1210 12:34:15.036574 4689 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v2ksp\" (UniqueName: \"kubernetes.io/projected/6dcd6604-d6d4-4147-ab88-eefb780a33b4-kube-api-access-v2ksp\") on node \"crc\" DevicePath \"\""
Dec 10 12:34:15 crc kubenswrapper[4689]: I1210 12:34:15.036592 4689 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6dcd6604-d6d4-4147-ab88-eefb780a33b4-operator-scripts\") on node \"crc\" DevicePath \"\""
Dec 10 12:34:15 crc kubenswrapper[4689]: I1210 12:34:15.036917 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7d4e9756-5327-4bfd-8ac6-b4bb2b08e97c-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "7d4e9756-5327-4bfd-8ac6-b4bb2b08e97c" (UID: "7d4e9756-5327-4bfd-8ac6-b4bb2b08e97c"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 10 12:34:15 crc kubenswrapper[4689]: I1210 12:34:15.039824 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7d4e9756-5327-4bfd-8ac6-b4bb2b08e97c-kube-api-access-t6mg7" (OuterVolumeSpecName: "kube-api-access-t6mg7") pod "7d4e9756-5327-4bfd-8ac6-b4bb2b08e97c" (UID: "7d4e9756-5327-4bfd-8ac6-b4bb2b08e97c"). InnerVolumeSpecName "kube-api-access-t6mg7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 10 12:34:15 crc kubenswrapper[4689]: I1210 12:34:15.053955 4689 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-jkpzl"
Dec 10 12:34:15 crc kubenswrapper[4689]: I1210 12:34:15.137697 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6369f699-9491-404b-beab-9bb964b73037-operator-scripts\") pod \"6369f699-9491-404b-beab-9bb964b73037\" (UID: \"6369f699-9491-404b-beab-9bb964b73037\") "
Dec 10 12:34:15 crc kubenswrapper[4689]: I1210 12:34:15.137841 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4qkmb\" (UniqueName: \"kubernetes.io/projected/8fe98dc6-3d5b-47b7-a728-2dca96d4e4ed-kube-api-access-4qkmb\") pod \"8fe98dc6-3d5b-47b7-a728-2dca96d4e4ed\" (UID: \"8fe98dc6-3d5b-47b7-a728-2dca96d4e4ed\") "
Dec 10 12:34:15 crc kubenswrapper[4689]: I1210 12:34:15.137887 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8522t\" (UniqueName: \"kubernetes.io/projected/6369f699-9491-404b-beab-9bb964b73037-kube-api-access-8522t\") pod \"6369f699-9491-404b-beab-9bb964b73037\" (UID: \"6369f699-9491-404b-beab-9bb964b73037\") "
Dec 10 12:34:15 crc kubenswrapper[4689]: I1210 12:34:15.138024 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8fe98dc6-3d5b-47b7-a728-2dca96d4e4ed-operator-scripts\") pod \"8fe98dc6-3d5b-47b7-a728-2dca96d4e4ed\" (UID: \"8fe98dc6-3d5b-47b7-a728-2dca96d4e4ed\") "
Dec 10 12:34:15 crc kubenswrapper[4689]: I1210 12:34:15.138152 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6369f699-9491-404b-beab-9bb964b73037-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "6369f699-9491-404b-beab-9bb964b73037" (UID: "6369f699-9491-404b-beab-9bb964b73037"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 10 12:34:15 crc kubenswrapper[4689]: I1210 12:34:15.138419 4689 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t6mg7\" (UniqueName: \"kubernetes.io/projected/7d4e9756-5327-4bfd-8ac6-b4bb2b08e97c-kube-api-access-t6mg7\") on node \"crc\" DevicePath \"\""
Dec 10 12:34:15 crc kubenswrapper[4689]: I1210 12:34:15.138429 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8fe98dc6-3d5b-47b7-a728-2dca96d4e4ed-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "8fe98dc6-3d5b-47b7-a728-2dca96d4e4ed" (UID: "8fe98dc6-3d5b-47b7-a728-2dca96d4e4ed"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 10 12:34:15 crc kubenswrapper[4689]: I1210 12:34:15.138446 4689 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7d4e9756-5327-4bfd-8ac6-b4bb2b08e97c-operator-scripts\") on node \"crc\" DevicePath \"\""
Dec 10 12:34:15 crc kubenswrapper[4689]: I1210 12:34:15.138461 4689 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6369f699-9491-404b-beab-9bb964b73037-operator-scripts\") on node \"crc\" DevicePath \"\""
Dec 10 12:34:15 crc kubenswrapper[4689]: I1210 12:34:15.140610 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8fe98dc6-3d5b-47b7-a728-2dca96d4e4ed-kube-api-access-4qkmb" (OuterVolumeSpecName: "kube-api-access-4qkmb") pod "8fe98dc6-3d5b-47b7-a728-2dca96d4e4ed" (UID: "8fe98dc6-3d5b-47b7-a728-2dca96d4e4ed"). InnerVolumeSpecName "kube-api-access-4qkmb". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 10 12:34:15 crc kubenswrapper[4689]: I1210 12:34:15.141081 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6369f699-9491-404b-beab-9bb964b73037-kube-api-access-8522t" (OuterVolumeSpecName: "kube-api-access-8522t") pod "6369f699-9491-404b-beab-9bb964b73037" (UID: "6369f699-9491-404b-beab-9bb964b73037"). InnerVolumeSpecName "kube-api-access-8522t". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 10 12:34:15 crc kubenswrapper[4689]: I1210 12:34:15.239549 4689 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8fe98dc6-3d5b-47b7-a728-2dca96d4e4ed-operator-scripts\") on node \"crc\" DevicePath \"\""
Dec 10 12:34:15 crc kubenswrapper[4689]: I1210 12:34:15.239599 4689 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4qkmb\" (UniqueName: \"kubernetes.io/projected/8fe98dc6-3d5b-47b7-a728-2dca96d4e4ed-kube-api-access-4qkmb\") on node \"crc\" DevicePath \"\""
Dec 10 12:34:15 crc kubenswrapper[4689]: I1210 12:34:15.239614 4689 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8522t\" (UniqueName: \"kubernetes.io/projected/6369f699-9491-404b-beab-9bb964b73037-kube-api-access-8522t\") on node \"crc\" DevicePath \"\""
Dec 10 12:34:15 crc kubenswrapper[4689]: I1210 12:34:15.372671 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-5bcc-account-create-update-c6df9" event={"ID":"7d4e9756-5327-4bfd-8ac6-b4bb2b08e97c","Type":"ContainerDied","Data":"8bf524dcbe7fa1d6af330c4aec4c30042d7ad62cb12d4748f927488785e70faa"}
Dec 10 12:34:15 crc kubenswrapper[4689]: I1210 12:34:15.372724 4689 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8bf524dcbe7fa1d6af330c4aec4c30042d7ad62cb12d4748f927488785e70faa"
Dec 10 12:34:15 crc kubenswrapper[4689]: I1210 12:34:15.372691 4689 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-5bcc-account-create-update-c6df9"
Dec 10 12:34:15 crc kubenswrapper[4689]: I1210 12:34:15.374509 4689 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-v4mw7"
Dec 10 12:34:15 crc kubenswrapper[4689]: I1210 12:34:15.374532 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-v4mw7" event={"ID":"6dcd6604-d6d4-4147-ab88-eefb780a33b4","Type":"ContainerDied","Data":"41916ad6c6c534005ee1c766cc567e87dd5c6188d5c8e2c5f29ff56b37b52c88"}
Dec 10 12:34:15 crc kubenswrapper[4689]: I1210 12:34:15.374572 4689 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="41916ad6c6c534005ee1c766cc567e87dd5c6188d5c8e2c5f29ff56b37b52c88"
Dec 10 12:34:15 crc kubenswrapper[4689]: I1210 12:34:15.376468 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-jkpzl" event={"ID":"8fe98dc6-3d5b-47b7-a728-2dca96d4e4ed","Type":"ContainerDied","Data":"5081b54462f653f8ac71e126d0e3ea12e58840adfe0b6ad2782e7bef5226162d"}
Dec 10 12:34:15 crc kubenswrapper[4689]: I1210 12:34:15.376494 4689 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-jkpzl"
Dec 10 12:34:15 crc kubenswrapper[4689]: I1210 12:34:15.376501 4689 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5081b54462f653f8ac71e126d0e3ea12e58840adfe0b6ad2782e7bef5226162d"
Dec 10 12:34:15 crc kubenswrapper[4689]: I1210 12:34:15.379249 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-78a1-account-create-update-5l2cc" event={"ID":"6369f699-9491-404b-beab-9bb964b73037","Type":"ContainerDied","Data":"b52cce32ff43f4b1e95646de9647a79f353eed1caf3cbb4c212d161ecaa86fd7"}
Dec 10 12:34:15 crc kubenswrapper[4689]: I1210 12:34:15.379277 4689 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b52cce32ff43f4b1e95646de9647a79f353eed1caf3cbb4c212d161ecaa86fd7"
Dec 10 12:34:15 crc kubenswrapper[4689]: I1210 12:34:15.379317 4689 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-78a1-account-create-update-5l2cc"
Dec 10 12:34:18 crc kubenswrapper[4689]: I1210 12:34:18.697119 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/c4b54476-e438-46d8-b234-c8f661f5c26f-etc-swift\") pod \"swift-storage-0\" (UID: \"c4b54476-e438-46d8-b234-c8f661f5c26f\") " pod="openstack/swift-storage-0"
Dec 10 12:34:18 crc kubenswrapper[4689]: E1210 12:34:18.697655 4689 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found
Dec 10 12:34:18 crc kubenswrapper[4689]: E1210 12:34:18.697672 4689 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found
Dec 10 12:34:18 crc kubenswrapper[4689]: E1210 12:34:18.697722 4689 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c4b54476-e438-46d8-b234-c8f661f5c26f-etc-swift podName:c4b54476-e438-46d8-b234-c8f661f5c26f nodeName:}" failed. No retries permitted until 2025-12-10 12:34:34.697705384 +0000 UTC m=+1142.485786522 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/c4b54476-e438-46d8-b234-c8f661f5c26f-etc-swift") pod "swift-storage-0" (UID: "c4b54476-e438-46d8-b234-c8f661f5c26f") : configmap "swift-ring-files" not found
Dec 10 12:34:19 crc kubenswrapper[4689]: I1210 12:34:19.408293 4689 generic.go:334] "Generic (PLEG): container finished" podID="a4b24dc3-c04f-46a0-a0c0-0240d3d767cd" containerID="48777b0738c13c4c24711f6972bd750fc3be1d1640f8c54bf98bd7507acca621" exitCode=0
Dec 10 12:34:19 crc kubenswrapper[4689]: I1210 12:34:19.408534 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-2mlkq" event={"ID":"a4b24dc3-c04f-46a0-a0c0-0240d3d767cd","Type":"ContainerDied","Data":"48777b0738c13c4c24711f6972bd750fc3be1d1640f8c54bf98bd7507acca621"}
Dec 10 12:34:20 crc kubenswrapper[4689]: I1210 12:34:20.419880 4689 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-jpztp" podUID="58cb894b-f745-4d93-8925-193c6ff871a6" containerName="ovn-controller" probeResult="failure" output=<
Dec 10 12:34:20 crc kubenswrapper[4689]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status
Dec 10 12:34:20 crc kubenswrapper[4689]: >
Dec 10 12:34:22 crc kubenswrapper[4689]: I1210 12:34:22.449891 4689 generic.go:334] "Generic (PLEG): container finished" podID="33bee83d-eb0f-4e5e-9617-f8102008436a" containerID="40884be95475317286239419fb7c4d5aacd4a03014eda7b158d3dc2103efb907" exitCode=0
Dec 10 12:34:22 crc kubenswrapper[4689]: I1210 12:34:22.450011 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"33bee83d-eb0f-4e5e-9617-f8102008436a","Type":"ContainerDied","Data":"40884be95475317286239419fb7c4d5aacd4a03014eda7b158d3dc2103efb907"}
Dec 10 12:34:22 crc kubenswrapper[4689]: I1210 12:34:22.452788 4689 generic.go:334] "Generic (PLEG): container finished" podID="0b5ff1d1-6330-4077-9b2d-dbec926d9e8c" containerID="1a805d013077dbeecc8200f54ac11b45f3ce841f3bdb61918011111fe65869ee" exitCode=0
Dec 10 12:34:22 crc kubenswrapper[4689]: I1210 12:34:22.452820 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"0b5ff1d1-6330-4077-9b2d-dbec926d9e8c","Type":"ContainerDied","Data":"1a805d013077dbeecc8200f54ac11b45f3ce841f3bdb61918011111fe65869ee"}
Dec 10 12:34:24 crc kubenswrapper[4689]: I1210 12:34:24.440722 4689 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-2mlkq"
Dec 10 12:34:24 crc kubenswrapper[4689]: I1210 12:34:24.493543 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/a4b24dc3-c04f-46a0-a0c0-0240d3d767cd-ring-data-devices\") pod \"a4b24dc3-c04f-46a0-a0c0-0240d3d767cd\" (UID: \"a4b24dc3-c04f-46a0-a0c0-0240d3d767cd\") "
Dec 10 12:34:24 crc kubenswrapper[4689]: I1210 12:34:24.493670 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a4b24dc3-c04f-46a0-a0c0-0240d3d767cd-combined-ca-bundle\") pod \"a4b24dc3-c04f-46a0-a0c0-0240d3d767cd\" (UID: \"a4b24dc3-c04f-46a0-a0c0-0240d3d767cd\") "
Dec 10 12:34:24 crc kubenswrapper[4689]: I1210 12:34:24.493715 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rg8nj\" (UniqueName: \"kubernetes.io/projected/a4b24dc3-c04f-46a0-a0c0-0240d3d767cd-kube-api-access-rg8nj\") pod \"a4b24dc3-c04f-46a0-a0c0-0240d3d767cd\" (UID: \"a4b24dc3-c04f-46a0-a0c0-0240d3d767cd\") "
Dec 10 12:34:24 crc kubenswrapper[4689]: I1210 12:34:24.493763 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a4b24dc3-c04f-46a0-a0c0-0240d3d767cd-scripts\") pod \"a4b24dc3-c04f-46a0-a0c0-0240d3d767cd\" (UID: \"a4b24dc3-c04f-46a0-a0c0-0240d3d767cd\") "
Dec 10 12:34:24 crc kubenswrapper[4689]: I1210 12:34:24.493790 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/a4b24dc3-c04f-46a0-a0c0-0240d3d767cd-swiftconf\") pod \"a4b24dc3-c04f-46a0-a0c0-0240d3d767cd\" (UID: \"a4b24dc3-c04f-46a0-a0c0-0240d3d767cd\") "
Dec 10 12:34:24 crc kubenswrapper[4689]: I1210 12:34:24.493812 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/a4b24dc3-c04f-46a0-a0c0-0240d3d767cd-dispersionconf\") pod \"a4b24dc3-c04f-46a0-a0c0-0240d3d767cd\" (UID: \"a4b24dc3-c04f-46a0-a0c0-0240d3d767cd\") "
Dec 10 12:34:24 crc kubenswrapper[4689]: I1210 12:34:24.493893 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/a4b24dc3-c04f-46a0-a0c0-0240d3d767cd-etc-swift\") pod \"a4b24dc3-c04f-46a0-a0c0-0240d3d767cd\" (UID: \"a4b24dc3-c04f-46a0-a0c0-0240d3d767cd\") "
Dec 10 12:34:24 crc kubenswrapper[4689]: I1210 12:34:24.495402 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a4b24dc3-c04f-46a0-a0c0-0240d3d767cd-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "a4b24dc3-c04f-46a0-a0c0-0240d3d767cd" (UID: "a4b24dc3-c04f-46a0-a0c0-0240d3d767cd"). InnerVolumeSpecName "ring-data-devices". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 10 12:34:24 crc kubenswrapper[4689]: I1210 12:34:24.495801 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a4b24dc3-c04f-46a0-a0c0-0240d3d767cd-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "a4b24dc3-c04f-46a0-a0c0-0240d3d767cd" (UID: "a4b24dc3-c04f-46a0-a0c0-0240d3d767cd"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 10 12:34:24 crc kubenswrapper[4689]: I1210 12:34:24.498431 4689 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-2mlkq"
Dec 10 12:34:24 crc kubenswrapper[4689]: I1210 12:34:24.500396 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a4b24dc3-c04f-46a0-a0c0-0240d3d767cd-kube-api-access-rg8nj" (OuterVolumeSpecName: "kube-api-access-rg8nj") pod "a4b24dc3-c04f-46a0-a0c0-0240d3d767cd" (UID: "a4b24dc3-c04f-46a0-a0c0-0240d3d767cd"). InnerVolumeSpecName "kube-api-access-rg8nj". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 10 12:34:24 crc kubenswrapper[4689]: I1210 12:34:24.515177 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a4b24dc3-c04f-46a0-a0c0-0240d3d767cd-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "a4b24dc3-c04f-46a0-a0c0-0240d3d767cd" (UID: "a4b24dc3-c04f-46a0-a0c0-0240d3d767cd"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 10 12:34:24 crc kubenswrapper[4689]: I1210 12:34:24.527539 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a4b24dc3-c04f-46a0-a0c0-0240d3d767cd-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a4b24dc3-c04f-46a0-a0c0-0240d3d767cd" (UID: "a4b24dc3-c04f-46a0-a0c0-0240d3d767cd"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 10 12:34:24 crc kubenswrapper[4689]: I1210 12:34:24.530565 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-2mlkq" event={"ID":"a4b24dc3-c04f-46a0-a0c0-0240d3d767cd","Type":"ContainerDied","Data":"63ed84cd8ecd7f9e54d924cf6e482e82335864f6248cb4d0ddc796c09fd2dbd4"}
Dec 10 12:34:24 crc kubenswrapper[4689]: I1210 12:34:24.530608 4689 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="63ed84cd8ecd7f9e54d924cf6e482e82335864f6248cb4d0ddc796c09fd2dbd4"
Dec 10 12:34:24 crc kubenswrapper[4689]: I1210 12:34:24.538108 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a4b24dc3-c04f-46a0-a0c0-0240d3d767cd-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "a4b24dc3-c04f-46a0-a0c0-0240d3d767cd" (UID: "a4b24dc3-c04f-46a0-a0c0-0240d3d767cd"). InnerVolumeSpecName "swiftconf". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 10 12:34:24 crc kubenswrapper[4689]: I1210 12:34:24.539283 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a4b24dc3-c04f-46a0-a0c0-0240d3d767cd-scripts" (OuterVolumeSpecName: "scripts") pod "a4b24dc3-c04f-46a0-a0c0-0240d3d767cd" (UID: "a4b24dc3-c04f-46a0-a0c0-0240d3d767cd"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 12:34:24 crc kubenswrapper[4689]: I1210 12:34:24.595848 4689 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/a4b24dc3-c04f-46a0-a0c0-0240d3d767cd-ring-data-devices\") on node \"crc\" DevicePath \"\"" Dec 10 12:34:24 crc kubenswrapper[4689]: I1210 12:34:24.595899 4689 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a4b24dc3-c04f-46a0-a0c0-0240d3d767cd-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 10 12:34:24 crc kubenswrapper[4689]: I1210 12:34:24.595909 4689 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rg8nj\" (UniqueName: \"kubernetes.io/projected/a4b24dc3-c04f-46a0-a0c0-0240d3d767cd-kube-api-access-rg8nj\") on node \"crc\" DevicePath \"\"" Dec 10 12:34:24 crc kubenswrapper[4689]: I1210 12:34:24.595918 4689 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a4b24dc3-c04f-46a0-a0c0-0240d3d767cd-scripts\") on node \"crc\" DevicePath \"\"" Dec 10 12:34:24 crc kubenswrapper[4689]: I1210 12:34:24.595926 4689 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/a4b24dc3-c04f-46a0-a0c0-0240d3d767cd-swiftconf\") on node \"crc\" DevicePath \"\"" Dec 10 12:34:24 crc kubenswrapper[4689]: I1210 12:34:24.595934 4689 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/a4b24dc3-c04f-46a0-a0c0-0240d3d767cd-dispersionconf\") on node \"crc\" DevicePath \"\"" Dec 10 12:34:24 crc kubenswrapper[4689]: I1210 12:34:24.595966 4689 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/a4b24dc3-c04f-46a0-a0c0-0240d3d767cd-etc-swift\") on node \"crc\" DevicePath \"\"" Dec 10 12:34:25 crc kubenswrapper[4689]: I1210 12:34:25.400603 4689 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-jpztp" podUID="58cb894b-f745-4d93-8925-193c6ff871a6" containerName="ovn-controller" probeResult="failure" output=< Dec 10 12:34:25 crc kubenswrapper[4689]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Dec 10 12:34:25 crc kubenswrapper[4689]: > Dec 10 12:34:25 crc kubenswrapper[4689]: I1210 12:34:25.425684 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-29j5j" Dec 10 12:34:25 crc kubenswrapper[4689]: I1210 12:34:25.427086 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-29j5j" Dec 10 12:34:25 crc kubenswrapper[4689]: I1210 12:34:25.508684 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"0b5ff1d1-6330-4077-9b2d-dbec926d9e8c","Type":"ContainerStarted","Data":"8507c9ed5d3258efd18afab788094f5fef6b1366795b00ac02cb6368d575d696"} Dec 10 12:34:25 crc kubenswrapper[4689]: I1210 12:34:25.508922 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Dec 10 12:34:25 crc kubenswrapper[4689]: I1210 12:34:25.518471 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-44zkc" event={"ID":"18511d72-4b0d-401d-aa20-6cbf2b26abc6","Type":"ContainerStarted","Data":"9c2c50920fbc2c8656874b4baa8d02921db449f822ca5835a71bee20b9de126b"} Dec 10 12:34:25 crc kubenswrapper[4689]: I1210 
12:34:25.531060 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"33bee83d-eb0f-4e5e-9617-f8102008436a","Type":"ContainerStarted","Data":"e6d8bcf4b421eb9c4687520f6c4d62bb997ea71c12e38954d55b5bad9c38c831"} Dec 10 12:34:25 crc kubenswrapper[4689]: I1210 12:34:25.531982 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Dec 10 12:34:25 crc kubenswrapper[4689]: I1210 12:34:25.553077 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=52.759994193 podStartE2EDuration="1m0.553059068s" podCreationTimestamp="2025-12-10 12:33:25 +0000 UTC" firstStartedPulling="2025-12-10 12:33:40.566311035 +0000 UTC m=+1088.354392173" lastFinishedPulling="2025-12-10 12:33:48.35937591 +0000 UTC m=+1096.147457048" observedRunningTime="2025-12-10 12:34:25.544057384 +0000 UTC m=+1133.332138542" watchObservedRunningTime="2025-12-10 12:34:25.553059068 +0000 UTC m=+1133.341140206" Dec 10 12:34:25 crc kubenswrapper[4689]: I1210 12:34:25.594292 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=53.878337396 podStartE2EDuration="1m1.594271281s" podCreationTimestamp="2025-12-10 12:33:24 +0000 UTC" firstStartedPulling="2025-12-10 12:33:40.57774422 +0000 UTC m=+1088.365825358" lastFinishedPulling="2025-12-10 12:33:48.293678095 +0000 UTC m=+1096.081759243" observedRunningTime="2025-12-10 12:34:25.587441901 +0000 UTC m=+1133.375523039" watchObservedRunningTime="2025-12-10 12:34:25.594271281 +0000 UTC m=+1133.382352419" Dec 10 12:34:25 crc kubenswrapper[4689]: I1210 12:34:25.615574 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-sync-44zkc" podStartSLOduration=3.3557389 podStartE2EDuration="15.615554519s" podCreationTimestamp="2025-12-10 12:34:10 +0000 UTC" firstStartedPulling="2025-12-10 12:34:12.100221083 +0000 UTC m=+1119.888302221" lastFinishedPulling="2025-12-10 12:34:24.360036682 +0000 UTC m=+1132.148117840" observedRunningTime="2025-12-10 12:34:25.609994241 +0000 UTC m=+1133.398075399" watchObservedRunningTime="2025-12-10 12:34:25.615554519 +0000 UTC m=+1133.403635657" Dec 10 12:34:25 crc kubenswrapper[4689]: I1210 12:34:25.701659 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-jpztp-config-x7kbm"] Dec 10 12:34:25 crc kubenswrapper[4689]: E1210 12:34:25.702305 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7934f7f2-2d11-43a1-8a79-002d383f8c34" containerName="dnsmasq-dns" Dec 10 12:34:25 crc kubenswrapper[4689]: I1210 12:34:25.702329 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="7934f7f2-2d11-43a1-8a79-002d383f8c34" containerName="dnsmasq-dns" Dec 10 12:34:25 crc kubenswrapper[4689]: E1210 12:34:25.702345 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7934f7f2-2d11-43a1-8a79-002d383f8c34" containerName="init" Dec 10 12:34:25 crc kubenswrapper[4689]: I1210 12:34:25.702355 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="7934f7f2-2d11-43a1-8a79-002d383f8c34" containerName="init" Dec 10 12:34:25 crc kubenswrapper[4689]: E1210 12:34:25.702368 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8fe98dc6-3d5b-47b7-a728-2dca96d4e4ed" containerName="mariadb-database-create" Dec 10 12:34:25 crc kubenswrapper[4689]: I1210 12:34:25.702376 4689 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="8fe98dc6-3d5b-47b7-a728-2dca96d4e4ed" containerName="mariadb-database-create" Dec 10 12:34:25 crc kubenswrapper[4689]: E1210 12:34:25.702392 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6dcd6604-d6d4-4147-ab88-eefb780a33b4" containerName="mariadb-database-create" Dec 10 12:34:25 crc kubenswrapper[4689]: I1210 12:34:25.702399 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="6dcd6604-d6d4-4147-ab88-eefb780a33b4" containerName="mariadb-database-create" Dec 10 12:34:25 crc kubenswrapper[4689]: E1210 12:34:25.702414 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6369f699-9491-404b-beab-9bb964b73037" containerName="mariadb-account-create-update" Dec 10 12:34:25 crc kubenswrapper[4689]: I1210 12:34:25.702421 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="6369f699-9491-404b-beab-9bb964b73037" containerName="mariadb-account-create-update" Dec 10 12:34:25 crc kubenswrapper[4689]: E1210 12:34:25.702435 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a4b24dc3-c04f-46a0-a0c0-0240d3d767cd" containerName="swift-ring-rebalance" Dec 10 12:34:25 crc kubenswrapper[4689]: I1210 12:34:25.702443 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="a4b24dc3-c04f-46a0-a0c0-0240d3d767cd" containerName="swift-ring-rebalance" Dec 10 12:34:25 crc kubenswrapper[4689]: E1210 12:34:25.702467 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7d4e9756-5327-4bfd-8ac6-b4bb2b08e97c" containerName="mariadb-account-create-update" Dec 10 12:34:25 crc kubenswrapper[4689]: I1210 12:34:25.702476 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="7d4e9756-5327-4bfd-8ac6-b4bb2b08e97c" containerName="mariadb-account-create-update" Dec 10 12:34:25 crc kubenswrapper[4689]: I1210 12:34:25.702684 4689 memory_manager.go:354] "RemoveStaleState removing state" podUID="6dcd6604-d6d4-4147-ab88-eefb780a33b4" containerName="mariadb-database-create" Dec 10 12:34:25 crc kubenswrapper[4689]: I1210 12:34:25.702701 4689 memory_manager.go:354] "RemoveStaleState removing state" podUID="7934f7f2-2d11-43a1-8a79-002d383f8c34" containerName="dnsmasq-dns" Dec 10 12:34:25 crc kubenswrapper[4689]: I1210 12:34:25.702710 4689 memory_manager.go:354] "RemoveStaleState removing state" podUID="a4b24dc3-c04f-46a0-a0c0-0240d3d767cd" containerName="swift-ring-rebalance" Dec 10 12:34:25 crc kubenswrapper[4689]: I1210 12:34:25.702719 4689 memory_manager.go:354] "RemoveStaleState removing state" podUID="8fe98dc6-3d5b-47b7-a728-2dca96d4e4ed" containerName="mariadb-database-create" Dec 10 12:34:25 crc kubenswrapper[4689]: I1210 12:34:25.702736 4689 memory_manager.go:354] "RemoveStaleState removing state" podUID="7d4e9756-5327-4bfd-8ac6-b4bb2b08e97c" containerName="mariadb-account-create-update" Dec 10 12:34:25 crc kubenswrapper[4689]: I1210 12:34:25.702749 4689 memory_manager.go:354] "RemoveStaleState removing state" podUID="6369f699-9491-404b-beab-9bb964b73037" containerName="mariadb-account-create-update" Dec 10 12:34:25 crc kubenswrapper[4689]: I1210 12:34:25.703425 4689 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-jpztp-config-x7kbm" Dec 10 12:34:25 crc kubenswrapper[4689]: I1210 12:34:25.705872 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Dec 10 12:34:25 crc kubenswrapper[4689]: I1210 12:34:25.711878 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/4e9bf2dc-2779-4ed0-91c1-cea3ec845559-var-run-ovn\") pod \"ovn-controller-jpztp-config-x7kbm\" (UID: \"4e9bf2dc-2779-4ed0-91c1-cea3ec845559\") " pod="openstack/ovn-controller-jpztp-config-x7kbm" Dec 10 12:34:25 crc kubenswrapper[4689]: I1210 12:34:25.712148 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4e9bf2dc-2779-4ed0-91c1-cea3ec845559-scripts\") pod \"ovn-controller-jpztp-config-x7kbm\" (UID: \"4e9bf2dc-2779-4ed0-91c1-cea3ec845559\") " pod="openstack/ovn-controller-jpztp-config-x7kbm" Dec 10 12:34:25 crc kubenswrapper[4689]: I1210 12:34:25.712374 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/4e9bf2dc-2779-4ed0-91c1-cea3ec845559-additional-scripts\") pod \"ovn-controller-jpztp-config-x7kbm\" (UID: \"4e9bf2dc-2779-4ed0-91c1-cea3ec845559\") " pod="openstack/ovn-controller-jpztp-config-x7kbm" Dec 10 12:34:25 crc kubenswrapper[4689]: I1210 12:34:25.712494 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/4e9bf2dc-2779-4ed0-91c1-cea3ec845559-var-run\") pod \"ovn-controller-jpztp-config-x7kbm\" (UID: \"4e9bf2dc-2779-4ed0-91c1-cea3ec845559\") " pod="openstack/ovn-controller-jpztp-config-x7kbm" Dec 10 12:34:25 crc kubenswrapper[4689]: I1210 12:34:25.712608 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/4e9bf2dc-2779-4ed0-91c1-cea3ec845559-var-log-ovn\") pod \"ovn-controller-jpztp-config-x7kbm\" (UID: \"4e9bf2dc-2779-4ed0-91c1-cea3ec845559\") " pod="openstack/ovn-controller-jpztp-config-x7kbm" Dec 10 12:34:25 crc kubenswrapper[4689]: I1210 12:34:25.712778 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2xdlk\" (UniqueName: \"kubernetes.io/projected/4e9bf2dc-2779-4ed0-91c1-cea3ec845559-kube-api-access-2xdlk\") pod \"ovn-controller-jpztp-config-x7kbm\" (UID: \"4e9bf2dc-2779-4ed0-91c1-cea3ec845559\") " pod="openstack/ovn-controller-jpztp-config-x7kbm" Dec 10 12:34:25 crc kubenswrapper[4689]: I1210 12:34:25.713788 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-jpztp-config-x7kbm"] Dec 10 12:34:25 crc kubenswrapper[4689]: I1210 12:34:25.813946 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/4e9bf2dc-2779-4ed0-91c1-cea3ec845559-additional-scripts\") pod \"ovn-controller-jpztp-config-x7kbm\" (UID: \"4e9bf2dc-2779-4ed0-91c1-cea3ec845559\") " pod="openstack/ovn-controller-jpztp-config-x7kbm" Dec 10 12:34:25 crc kubenswrapper[4689]: I1210 12:34:25.814434 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: 
\"kubernetes.io/host-path/4e9bf2dc-2779-4ed0-91c1-cea3ec845559-var-run\") pod \"ovn-controller-jpztp-config-x7kbm\" (UID: \"4e9bf2dc-2779-4ed0-91c1-cea3ec845559\") " pod="openstack/ovn-controller-jpztp-config-x7kbm" Dec 10 12:34:25 crc kubenswrapper[4689]: I1210 12:34:25.814464 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/4e9bf2dc-2779-4ed0-91c1-cea3ec845559-var-log-ovn\") pod \"ovn-controller-jpztp-config-x7kbm\" (UID: \"4e9bf2dc-2779-4ed0-91c1-cea3ec845559\") " pod="openstack/ovn-controller-jpztp-config-x7kbm" Dec 10 12:34:25 crc kubenswrapper[4689]: I1210 12:34:25.814497 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2xdlk\" (UniqueName: \"kubernetes.io/projected/4e9bf2dc-2779-4ed0-91c1-cea3ec845559-kube-api-access-2xdlk\") pod \"ovn-controller-jpztp-config-x7kbm\" (UID: \"4e9bf2dc-2779-4ed0-91c1-cea3ec845559\") " pod="openstack/ovn-controller-jpztp-config-x7kbm" Dec 10 12:34:25 crc kubenswrapper[4689]: I1210 12:34:25.814536 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/4e9bf2dc-2779-4ed0-91c1-cea3ec845559-var-run-ovn\") pod \"ovn-controller-jpztp-config-x7kbm\" (UID: \"4e9bf2dc-2779-4ed0-91c1-cea3ec845559\") " pod="openstack/ovn-controller-jpztp-config-x7kbm" Dec 10 12:34:25 crc kubenswrapper[4689]: I1210 12:34:25.814573 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4e9bf2dc-2779-4ed0-91c1-cea3ec845559-scripts\") pod \"ovn-controller-jpztp-config-x7kbm\" (UID: \"4e9bf2dc-2779-4ed0-91c1-cea3ec845559\") " pod="openstack/ovn-controller-jpztp-config-x7kbm" Dec 10 12:34:25 crc kubenswrapper[4689]: I1210 12:34:25.816662 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4e9bf2dc-2779-4ed0-91c1-cea3ec845559-scripts\") pod \"ovn-controller-jpztp-config-x7kbm\" (UID: \"4e9bf2dc-2779-4ed0-91c1-cea3ec845559\") " pod="openstack/ovn-controller-jpztp-config-x7kbm" Dec 10 12:34:25 crc kubenswrapper[4689]: I1210 12:34:25.816934 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/4e9bf2dc-2779-4ed0-91c1-cea3ec845559-var-run\") pod \"ovn-controller-jpztp-config-x7kbm\" (UID: \"4e9bf2dc-2779-4ed0-91c1-cea3ec845559\") " pod="openstack/ovn-controller-jpztp-config-x7kbm" Dec 10 12:34:25 crc kubenswrapper[4689]: I1210 12:34:25.817010 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/4e9bf2dc-2779-4ed0-91c1-cea3ec845559-var-log-ovn\") pod \"ovn-controller-jpztp-config-x7kbm\" (UID: \"4e9bf2dc-2779-4ed0-91c1-cea3ec845559\") " pod="openstack/ovn-controller-jpztp-config-x7kbm" Dec 10 12:34:25 crc kubenswrapper[4689]: I1210 12:34:25.817336 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/4e9bf2dc-2779-4ed0-91c1-cea3ec845559-var-run-ovn\") pod \"ovn-controller-jpztp-config-x7kbm\" (UID: \"4e9bf2dc-2779-4ed0-91c1-cea3ec845559\") " pod="openstack/ovn-controller-jpztp-config-x7kbm" Dec 10 12:34:25 crc kubenswrapper[4689]: I1210 12:34:25.817696 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: 
\"kubernetes.io/configmap/4e9bf2dc-2779-4ed0-91c1-cea3ec845559-additional-scripts\") pod \"ovn-controller-jpztp-config-x7kbm\" (UID: \"4e9bf2dc-2779-4ed0-91c1-cea3ec845559\") " pod="openstack/ovn-controller-jpztp-config-x7kbm" Dec 10 12:34:25 crc kubenswrapper[4689]: I1210 12:34:25.834126 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2xdlk\" (UniqueName: \"kubernetes.io/projected/4e9bf2dc-2779-4ed0-91c1-cea3ec845559-kube-api-access-2xdlk\") pod \"ovn-controller-jpztp-config-x7kbm\" (UID: \"4e9bf2dc-2779-4ed0-91c1-cea3ec845559\") " pod="openstack/ovn-controller-jpztp-config-x7kbm" Dec 10 12:34:26 crc kubenswrapper[4689]: I1210 12:34:26.023280 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-jpztp-config-x7kbm" Dec 10 12:34:26 crc kubenswrapper[4689]: I1210 12:34:26.517372 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-jpztp-config-x7kbm"] Dec 10 12:34:26 crc kubenswrapper[4689]: I1210 12:34:26.539118 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-jpztp-config-x7kbm" event={"ID":"4e9bf2dc-2779-4ed0-91c1-cea3ec845559","Type":"ContainerStarted","Data":"4cc878ee82782845268928e606d5b07a63d6ab2d6ba43f19201672d4db89e2b8"} Dec 10 12:34:27 crc kubenswrapper[4689]: I1210 12:34:27.547172 4689 generic.go:334] "Generic (PLEG): container finished" podID="4e9bf2dc-2779-4ed0-91c1-cea3ec845559" containerID="a7a94454aca86d87588c91f3bc9dad5ca16c502eedd24ea6900927523612e925" exitCode=0 Dec 10 12:34:27 crc kubenswrapper[4689]: I1210 12:34:27.547352 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-jpztp-config-x7kbm" event={"ID":"4e9bf2dc-2779-4ed0-91c1-cea3ec845559","Type":"ContainerDied","Data":"a7a94454aca86d87588c91f3bc9dad5ca16c502eedd24ea6900927523612e925"} Dec 10 12:34:28 crc kubenswrapper[4689]: I1210 12:34:28.875858 4689 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-jpztp-config-x7kbm" Dec 10 12:34:28 crc kubenswrapper[4689]: I1210 12:34:28.964214 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/4e9bf2dc-2779-4ed0-91c1-cea3ec845559-var-log-ovn\") pod \"4e9bf2dc-2779-4ed0-91c1-cea3ec845559\" (UID: \"4e9bf2dc-2779-4ed0-91c1-cea3ec845559\") " Dec 10 12:34:28 crc kubenswrapper[4689]: I1210 12:34:28.964312 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/4e9bf2dc-2779-4ed0-91c1-cea3ec845559-additional-scripts\") pod \"4e9bf2dc-2779-4ed0-91c1-cea3ec845559\" (UID: \"4e9bf2dc-2779-4ed0-91c1-cea3ec845559\") " Dec 10 12:34:28 crc kubenswrapper[4689]: I1210 12:34:28.964320 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4e9bf2dc-2779-4ed0-91c1-cea3ec845559-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "4e9bf2dc-2779-4ed0-91c1-cea3ec845559" (UID: "4e9bf2dc-2779-4ed0-91c1-cea3ec845559"). InnerVolumeSpecName "var-log-ovn". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 10 12:34:28 crc kubenswrapper[4689]: I1210 12:34:28.964400 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/4e9bf2dc-2779-4ed0-91c1-cea3ec845559-var-run-ovn\") pod \"4e9bf2dc-2779-4ed0-91c1-cea3ec845559\" (UID: \"4e9bf2dc-2779-4ed0-91c1-cea3ec845559\") " Dec 10 12:34:28 crc kubenswrapper[4689]: I1210 12:34:28.964444 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4e9bf2dc-2779-4ed0-91c1-cea3ec845559-scripts\") pod \"4e9bf2dc-2779-4ed0-91c1-cea3ec845559\" (UID: \"4e9bf2dc-2779-4ed0-91c1-cea3ec845559\") " Dec 10 12:34:28 crc kubenswrapper[4689]: I1210 12:34:28.964468 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2xdlk\" (UniqueName: \"kubernetes.io/projected/4e9bf2dc-2779-4ed0-91c1-cea3ec845559-kube-api-access-2xdlk\") pod \"4e9bf2dc-2779-4ed0-91c1-cea3ec845559\" (UID: \"4e9bf2dc-2779-4ed0-91c1-cea3ec845559\") " Dec 10 12:34:28 crc kubenswrapper[4689]: I1210 12:34:28.964510 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/4e9bf2dc-2779-4ed0-91c1-cea3ec845559-var-run\") pod \"4e9bf2dc-2779-4ed0-91c1-cea3ec845559\" (UID: \"4e9bf2dc-2779-4ed0-91c1-cea3ec845559\") " Dec 10 12:34:28 crc kubenswrapper[4689]: I1210 12:34:28.964899 4689 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/4e9bf2dc-2779-4ed0-91c1-cea3ec845559-var-log-ovn\") on node \"crc\" DevicePath \"\"" Dec 10 12:34:28 crc kubenswrapper[4689]: I1210 12:34:28.964935 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4e9bf2dc-2779-4ed0-91c1-cea3ec845559-var-run" (OuterVolumeSpecName: "var-run") pod "4e9bf2dc-2779-4ed0-91c1-cea3ec845559" (UID: "4e9bf2dc-2779-4ed0-91c1-cea3ec845559"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 10 12:34:28 crc kubenswrapper[4689]: I1210 12:34:28.964956 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4e9bf2dc-2779-4ed0-91c1-cea3ec845559-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "4e9bf2dc-2779-4ed0-91c1-cea3ec845559" (UID: "4e9bf2dc-2779-4ed0-91c1-cea3ec845559"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 10 12:34:28 crc kubenswrapper[4689]: I1210 12:34:28.965065 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4e9bf2dc-2779-4ed0-91c1-cea3ec845559-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "4e9bf2dc-2779-4ed0-91c1-cea3ec845559" (UID: "4e9bf2dc-2779-4ed0-91c1-cea3ec845559"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 12:34:28 crc kubenswrapper[4689]: I1210 12:34:28.965733 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4e9bf2dc-2779-4ed0-91c1-cea3ec845559-scripts" (OuterVolumeSpecName: "scripts") pod "4e9bf2dc-2779-4ed0-91c1-cea3ec845559" (UID: "4e9bf2dc-2779-4ed0-91c1-cea3ec845559"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 12:34:28 crc kubenswrapper[4689]: I1210 12:34:28.988199 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4e9bf2dc-2779-4ed0-91c1-cea3ec845559-kube-api-access-2xdlk" (OuterVolumeSpecName: "kube-api-access-2xdlk") pod "4e9bf2dc-2779-4ed0-91c1-cea3ec845559" (UID: "4e9bf2dc-2779-4ed0-91c1-cea3ec845559"). InnerVolumeSpecName "kube-api-access-2xdlk". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 12:34:29 crc kubenswrapper[4689]: I1210 12:34:29.066634 4689 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/4e9bf2dc-2779-4ed0-91c1-cea3ec845559-additional-scripts\") on node \"crc\" DevicePath \"\"" Dec 10 12:34:29 crc kubenswrapper[4689]: I1210 12:34:29.066664 4689 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/4e9bf2dc-2779-4ed0-91c1-cea3ec845559-var-run-ovn\") on node \"crc\" DevicePath \"\"" Dec 10 12:34:29 crc kubenswrapper[4689]: I1210 12:34:29.066673 4689 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4e9bf2dc-2779-4ed0-91c1-cea3ec845559-scripts\") on node \"crc\" DevicePath \"\"" Dec 10 12:34:29 crc kubenswrapper[4689]: I1210 12:34:29.066681 4689 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2xdlk\" (UniqueName: \"kubernetes.io/projected/4e9bf2dc-2779-4ed0-91c1-cea3ec845559-kube-api-access-2xdlk\") on node \"crc\" DevicePath \"\"" Dec 10 12:34:29 crc kubenswrapper[4689]: I1210 12:34:29.066691 4689 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/4e9bf2dc-2779-4ed0-91c1-cea3ec845559-var-run\") on node \"crc\" DevicePath \"\"" Dec 10 12:34:29 crc kubenswrapper[4689]: I1210 12:34:29.565225 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-jpztp-config-x7kbm" event={"ID":"4e9bf2dc-2779-4ed0-91c1-cea3ec845559","Type":"ContainerDied","Data":"4cc878ee82782845268928e606d5b07a63d6ab2d6ba43f19201672d4db89e2b8"} Dec 10 12:34:29 crc kubenswrapper[4689]: I1210 12:34:29.565272 4689 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4cc878ee82782845268928e606d5b07a63d6ab2d6ba43f19201672d4db89e2b8" Dec 10 12:34:29 crc kubenswrapper[4689]: I1210 12:34:29.565335 4689 util.go:48] "No ready sandbox for pod can be found. 
Dec 10 12:34:29 crc kubenswrapper[4689]: I1210 12:34:29.991548 4689 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-jpztp-config-x7kbm"]
Dec 10 12:34:30 crc kubenswrapper[4689]: I1210 12:34:30.012136 4689 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-jpztp-config-x7kbm"]
Dec 10 12:34:30 crc kubenswrapper[4689]: I1210 12:34:30.396583 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-jpztp"
Dec 10 12:34:30 crc kubenswrapper[4689]: I1210 12:34:30.508308 4689 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4e9bf2dc-2779-4ed0-91c1-cea3ec845559" path="/var/lib/kubelet/pods/4e9bf2dc-2779-4ed0-91c1-cea3ec845559/volumes"
Dec 10 12:34:31 crc kubenswrapper[4689]: I1210 12:34:31.579602 4689 generic.go:334] "Generic (PLEG): container finished" podID="18511d72-4b0d-401d-aa20-6cbf2b26abc6" containerID="9c2c50920fbc2c8656874b4baa8d02921db449f822ca5835a71bee20b9de126b" exitCode=0
Dec 10 12:34:31 crc kubenswrapper[4689]: I1210 12:34:31.579871 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-44zkc" event={"ID":"18511d72-4b0d-401d-aa20-6cbf2b26abc6","Type":"ContainerDied","Data":"9c2c50920fbc2c8656874b4baa8d02921db449f822ca5835a71bee20b9de126b"}
Dec 10 12:34:32 crc kubenswrapper[4689]: I1210 12:34:32.949832 4689 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-44zkc"
Dec 10 12:34:33 crc kubenswrapper[4689]: I1210 12:34:33.030066 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fwl6s\" (UniqueName: \"kubernetes.io/projected/18511d72-4b0d-401d-aa20-6cbf2b26abc6-kube-api-access-fwl6s\") pod \"18511d72-4b0d-401d-aa20-6cbf2b26abc6\" (UID: \"18511d72-4b0d-401d-aa20-6cbf2b26abc6\") "
Dec 10 12:34:33 crc kubenswrapper[4689]: I1210 12:34:33.030187 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/18511d72-4b0d-401d-aa20-6cbf2b26abc6-db-sync-config-data\") pod \"18511d72-4b0d-401d-aa20-6cbf2b26abc6\" (UID: \"18511d72-4b0d-401d-aa20-6cbf2b26abc6\") "
Dec 10 12:34:33 crc kubenswrapper[4689]: I1210 12:34:33.030233 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/18511d72-4b0d-401d-aa20-6cbf2b26abc6-combined-ca-bundle\") pod \"18511d72-4b0d-401d-aa20-6cbf2b26abc6\" (UID: \"18511d72-4b0d-401d-aa20-6cbf2b26abc6\") "
Dec 10 12:34:33 crc kubenswrapper[4689]: I1210 12:34:33.030252 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/18511d72-4b0d-401d-aa20-6cbf2b26abc6-config-data\") pod \"18511d72-4b0d-401d-aa20-6cbf2b26abc6\" (UID: \"18511d72-4b0d-401d-aa20-6cbf2b26abc6\") "
Dec 10 12:34:33 crc kubenswrapper[4689]: I1210 12:34:33.035127 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/18511d72-4b0d-401d-aa20-6cbf2b26abc6-kube-api-access-fwl6s" (OuterVolumeSpecName: "kube-api-access-fwl6s") pod "18511d72-4b0d-401d-aa20-6cbf2b26abc6" (UID: "18511d72-4b0d-401d-aa20-6cbf2b26abc6"). InnerVolumeSpecName "kube-api-access-fwl6s". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 10 12:34:33 crc kubenswrapper[4689]: I1210 12:34:33.035179 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/18511d72-4b0d-401d-aa20-6cbf2b26abc6-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "18511d72-4b0d-401d-aa20-6cbf2b26abc6" (UID: "18511d72-4b0d-401d-aa20-6cbf2b26abc6"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 10 12:34:33 crc kubenswrapper[4689]: I1210 12:34:33.052506 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/18511d72-4b0d-401d-aa20-6cbf2b26abc6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "18511d72-4b0d-401d-aa20-6cbf2b26abc6" (UID: "18511d72-4b0d-401d-aa20-6cbf2b26abc6"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 10 12:34:33 crc kubenswrapper[4689]: I1210 12:34:33.065496 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/18511d72-4b0d-401d-aa20-6cbf2b26abc6-config-data" (OuterVolumeSpecName: "config-data") pod "18511d72-4b0d-401d-aa20-6cbf2b26abc6" (UID: "18511d72-4b0d-401d-aa20-6cbf2b26abc6"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 10 12:34:33 crc kubenswrapper[4689]: I1210 12:34:33.131641 4689 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/18511d72-4b0d-401d-aa20-6cbf2b26abc6-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Dec 10 12:34:33 crc kubenswrapper[4689]: I1210 12:34:33.131671 4689 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/18511d72-4b0d-401d-aa20-6cbf2b26abc6-config-data\") on node \"crc\" DevicePath \"\""
Dec 10 12:34:33 crc kubenswrapper[4689]: I1210 12:34:33.131680 4689 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fwl6s\" (UniqueName: \"kubernetes.io/projected/18511d72-4b0d-401d-aa20-6cbf2b26abc6-kube-api-access-fwl6s\") on node \"crc\" DevicePath \"\""
Dec 10 12:34:33 crc kubenswrapper[4689]: I1210 12:34:33.131690 4689 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/18511d72-4b0d-401d-aa20-6cbf2b26abc6-db-sync-config-data\") on node \"crc\" DevicePath \"\""
Dec 10 12:34:33 crc kubenswrapper[4689]: I1210 12:34:33.621209 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-44zkc" event={"ID":"18511d72-4b0d-401d-aa20-6cbf2b26abc6","Type":"ContainerDied","Data":"ad3101834194c8315d67eff09834c9cbf91b80fbd38b4672febc3e23d079fdfb"}
Dec 10 12:34:33 crc kubenswrapper[4689]: I1210 12:34:33.621262 4689 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ad3101834194c8315d67eff09834c9cbf91b80fbd38b4672febc3e23d079fdfb"
Dec 10 12:34:33 crc kubenswrapper[4689]: I1210 12:34:33.621284 4689 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-44zkc"
Need to start a new one" pod="openstack/glance-db-sync-44zkc" Dec 10 12:34:33 crc kubenswrapper[4689]: I1210 12:34:33.869296 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5b946c75cc-47knk"] Dec 10 12:34:33 crc kubenswrapper[4689]: E1210 12:34:33.869592 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4e9bf2dc-2779-4ed0-91c1-cea3ec845559" containerName="ovn-config" Dec 10 12:34:33 crc kubenswrapper[4689]: I1210 12:34:33.869612 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="4e9bf2dc-2779-4ed0-91c1-cea3ec845559" containerName="ovn-config" Dec 10 12:34:33 crc kubenswrapper[4689]: E1210 12:34:33.869647 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="18511d72-4b0d-401d-aa20-6cbf2b26abc6" containerName="glance-db-sync" Dec 10 12:34:33 crc kubenswrapper[4689]: I1210 12:34:33.869655 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="18511d72-4b0d-401d-aa20-6cbf2b26abc6" containerName="glance-db-sync" Dec 10 12:34:33 crc kubenswrapper[4689]: I1210 12:34:33.869822 4689 memory_manager.go:354] "RemoveStaleState removing state" podUID="4e9bf2dc-2779-4ed0-91c1-cea3ec845559" containerName="ovn-config" Dec 10 12:34:33 crc kubenswrapper[4689]: I1210 12:34:33.869849 4689 memory_manager.go:354] "RemoveStaleState removing state" podUID="18511d72-4b0d-401d-aa20-6cbf2b26abc6" containerName="glance-db-sync" Dec 10 12:34:33 crc kubenswrapper[4689]: I1210 12:34:33.870698 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5b946c75cc-47knk" Dec 10 12:34:33 crc kubenswrapper[4689]: I1210 12:34:33.889624 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5b946c75cc-47knk"] Dec 10 12:34:33 crc kubenswrapper[4689]: I1210 12:34:33.943571 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f21b10f6-e1dc-44d8-a88f-5b5acb7c4e35-config\") pod \"dnsmasq-dns-5b946c75cc-47knk\" (UID: \"f21b10f6-e1dc-44d8-a88f-5b5acb7c4e35\") " pod="openstack/dnsmasq-dns-5b946c75cc-47knk" Dec 10 12:34:33 crc kubenswrapper[4689]: I1210 12:34:33.943628 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f21b10f6-e1dc-44d8-a88f-5b5acb7c4e35-ovsdbserver-nb\") pod \"dnsmasq-dns-5b946c75cc-47knk\" (UID: \"f21b10f6-e1dc-44d8-a88f-5b5acb7c4e35\") " pod="openstack/dnsmasq-dns-5b946c75cc-47knk" Dec 10 12:34:33 crc kubenswrapper[4689]: I1210 12:34:33.943662 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f21b10f6-e1dc-44d8-a88f-5b5acb7c4e35-ovsdbserver-sb\") pod \"dnsmasq-dns-5b946c75cc-47knk\" (UID: \"f21b10f6-e1dc-44d8-a88f-5b5acb7c4e35\") " pod="openstack/dnsmasq-dns-5b946c75cc-47knk" Dec 10 12:34:33 crc kubenswrapper[4689]: I1210 12:34:33.943690 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f21b10f6-e1dc-44d8-a88f-5b5acb7c4e35-dns-svc\") pod \"dnsmasq-dns-5b946c75cc-47knk\" (UID: \"f21b10f6-e1dc-44d8-a88f-5b5acb7c4e35\") " pod="openstack/dnsmasq-dns-5b946c75cc-47knk" Dec 10 12:34:33 crc kubenswrapper[4689]: I1210 12:34:33.943792 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7vbmk\" 
(UniqueName: \"kubernetes.io/projected/f21b10f6-e1dc-44d8-a88f-5b5acb7c4e35-kube-api-access-7vbmk\") pod \"dnsmasq-dns-5b946c75cc-47knk\" (UID: \"f21b10f6-e1dc-44d8-a88f-5b5acb7c4e35\") " pod="openstack/dnsmasq-dns-5b946c75cc-47knk" Dec 10 12:34:34 crc kubenswrapper[4689]: I1210 12:34:34.045317 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f21b10f6-e1dc-44d8-a88f-5b5acb7c4e35-ovsdbserver-sb\") pod \"dnsmasq-dns-5b946c75cc-47knk\" (UID: \"f21b10f6-e1dc-44d8-a88f-5b5acb7c4e35\") " pod="openstack/dnsmasq-dns-5b946c75cc-47knk" Dec 10 12:34:34 crc kubenswrapper[4689]: I1210 12:34:34.045375 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f21b10f6-e1dc-44d8-a88f-5b5acb7c4e35-dns-svc\") pod \"dnsmasq-dns-5b946c75cc-47knk\" (UID: \"f21b10f6-e1dc-44d8-a88f-5b5acb7c4e35\") " pod="openstack/dnsmasq-dns-5b946c75cc-47knk" Dec 10 12:34:34 crc kubenswrapper[4689]: I1210 12:34:34.045488 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7vbmk\" (UniqueName: \"kubernetes.io/projected/f21b10f6-e1dc-44d8-a88f-5b5acb7c4e35-kube-api-access-7vbmk\") pod \"dnsmasq-dns-5b946c75cc-47knk\" (UID: \"f21b10f6-e1dc-44d8-a88f-5b5acb7c4e35\") " pod="openstack/dnsmasq-dns-5b946c75cc-47knk" Dec 10 12:34:34 crc kubenswrapper[4689]: I1210 12:34:34.045561 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f21b10f6-e1dc-44d8-a88f-5b5acb7c4e35-config\") pod \"dnsmasq-dns-5b946c75cc-47knk\" (UID: \"f21b10f6-e1dc-44d8-a88f-5b5acb7c4e35\") " pod="openstack/dnsmasq-dns-5b946c75cc-47knk" Dec 10 12:34:34 crc kubenswrapper[4689]: I1210 12:34:34.045584 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f21b10f6-e1dc-44d8-a88f-5b5acb7c4e35-ovsdbserver-nb\") pod \"dnsmasq-dns-5b946c75cc-47knk\" (UID: \"f21b10f6-e1dc-44d8-a88f-5b5acb7c4e35\") " pod="openstack/dnsmasq-dns-5b946c75cc-47knk" Dec 10 12:34:34 crc kubenswrapper[4689]: I1210 12:34:34.046524 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f21b10f6-e1dc-44d8-a88f-5b5acb7c4e35-ovsdbserver-nb\") pod \"dnsmasq-dns-5b946c75cc-47knk\" (UID: \"f21b10f6-e1dc-44d8-a88f-5b5acb7c4e35\") " pod="openstack/dnsmasq-dns-5b946c75cc-47knk" Dec 10 12:34:34 crc kubenswrapper[4689]: I1210 12:34:34.047119 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f21b10f6-e1dc-44d8-a88f-5b5acb7c4e35-dns-svc\") pod \"dnsmasq-dns-5b946c75cc-47knk\" (UID: \"f21b10f6-e1dc-44d8-a88f-5b5acb7c4e35\") " pod="openstack/dnsmasq-dns-5b946c75cc-47knk" Dec 10 12:34:34 crc kubenswrapper[4689]: I1210 12:34:34.047224 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f21b10f6-e1dc-44d8-a88f-5b5acb7c4e35-config\") pod \"dnsmasq-dns-5b946c75cc-47knk\" (UID: \"f21b10f6-e1dc-44d8-a88f-5b5acb7c4e35\") " pod="openstack/dnsmasq-dns-5b946c75cc-47knk" Dec 10 12:34:34 crc kubenswrapper[4689]: I1210 12:34:34.047343 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f21b10f6-e1dc-44d8-a88f-5b5acb7c4e35-ovsdbserver-sb\") pod 
\"dnsmasq-dns-5b946c75cc-47knk\" (UID: \"f21b10f6-e1dc-44d8-a88f-5b5acb7c4e35\") " pod="openstack/dnsmasq-dns-5b946c75cc-47knk" Dec 10 12:34:34 crc kubenswrapper[4689]: I1210 12:34:34.066430 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7vbmk\" (UniqueName: \"kubernetes.io/projected/f21b10f6-e1dc-44d8-a88f-5b5acb7c4e35-kube-api-access-7vbmk\") pod \"dnsmasq-dns-5b946c75cc-47knk\" (UID: \"f21b10f6-e1dc-44d8-a88f-5b5acb7c4e35\") " pod="openstack/dnsmasq-dns-5b946c75cc-47knk" Dec 10 12:34:34 crc kubenswrapper[4689]: I1210 12:34:34.188335 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5b946c75cc-47knk" Dec 10 12:34:34 crc kubenswrapper[4689]: I1210 12:34:34.719538 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5b946c75cc-47knk"] Dec 10 12:34:34 crc kubenswrapper[4689]: I1210 12:34:34.756483 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/c4b54476-e438-46d8-b234-c8f661f5c26f-etc-swift\") pod \"swift-storage-0\" (UID: \"c4b54476-e438-46d8-b234-c8f661f5c26f\") " pod="openstack/swift-storage-0" Dec 10 12:34:34 crc kubenswrapper[4689]: I1210 12:34:34.765816 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/c4b54476-e438-46d8-b234-c8f661f5c26f-etc-swift\") pod \"swift-storage-0\" (UID: \"c4b54476-e438-46d8-b234-c8f661f5c26f\") " pod="openstack/swift-storage-0" Dec 10 12:34:34 crc kubenswrapper[4689]: I1210 12:34:34.844877 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-storage-0" Dec 10 12:34:35 crc kubenswrapper[4689]: I1210 12:34:35.388190 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Dec 10 12:34:35 crc kubenswrapper[4689]: I1210 12:34:35.651943 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"c4b54476-e438-46d8-b234-c8f661f5c26f","Type":"ContainerStarted","Data":"33bd71104406cf17686b008d85811d47b3527f17b8a0490ec2b1ab0fee4d3995"} Dec 10 12:34:35 crc kubenswrapper[4689]: I1210 12:34:35.653854 4689 generic.go:334] "Generic (PLEG): container finished" podID="f21b10f6-e1dc-44d8-a88f-5b5acb7c4e35" containerID="d77c04c19a19fc030efd89aa6cfa9fc23646f0d174be21e755febe67b1fba4dd" exitCode=0 Dec 10 12:34:35 crc kubenswrapper[4689]: I1210 12:34:35.654008 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b946c75cc-47knk" event={"ID":"f21b10f6-e1dc-44d8-a88f-5b5acb7c4e35","Type":"ContainerDied","Data":"d77c04c19a19fc030efd89aa6cfa9fc23646f0d174be21e755febe67b1fba4dd"} Dec 10 12:34:35 crc kubenswrapper[4689]: I1210 12:34:35.654580 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b946c75cc-47knk" event={"ID":"f21b10f6-e1dc-44d8-a88f-5b5acb7c4e35","Type":"ContainerStarted","Data":"b53052e62a7867cebefeca23c079760d45bb251566109920c3afa33de84bf7ec"} Dec 10 12:34:36 crc kubenswrapper[4689]: I1210 12:34:36.568316 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Dec 10 12:34:36 crc kubenswrapper[4689]: I1210 12:34:36.635283 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Dec 10 12:34:36 crc kubenswrapper[4689]: I1210 12:34:36.671054 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/dnsmasq-dns-5b946c75cc-47knk" event={"ID":"f21b10f6-e1dc-44d8-a88f-5b5acb7c4e35","Type":"ContainerStarted","Data":"71316d508dd23a3da218b19a4de4ff395eecf266f93f0bb071a559700c73d351"} Dec 10 12:34:36 crc kubenswrapper[4689]: I1210 12:34:36.671352 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5b946c75cc-47knk" Dec 10 12:34:36 crc kubenswrapper[4689]: I1210 12:34:36.718356 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5b946c75cc-47knk" podStartSLOduration=3.718335894 podStartE2EDuration="3.718335894s" podCreationTimestamp="2025-12-10 12:34:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 12:34:36.710541161 +0000 UTC m=+1144.498622309" watchObservedRunningTime="2025-12-10 12:34:36.718335894 +0000 UTC m=+1144.506417042" Dec 10 12:34:36 crc kubenswrapper[4689]: I1210 12:34:36.928019 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-create-kr898"] Dec 10 12:34:36 crc kubenswrapper[4689]: I1210 12:34:36.928985 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-kr898" Dec 10 12:34:36 crc kubenswrapper[4689]: I1210 12:34:36.932257 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-kr898"] Dec 10 12:34:37 crc kubenswrapper[4689]: I1210 12:34:37.057058 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-create-9t4ht"] Dec 10 12:34:37 crc kubenswrapper[4689]: I1210 12:34:37.058105 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-9t4ht" Dec 10 12:34:37 crc kubenswrapper[4689]: I1210 12:34:37.065689 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-9t4ht"] Dec 10 12:34:37 crc kubenswrapper[4689]: I1210 12:34:37.105334 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jwdv4\" (UniqueName: \"kubernetes.io/projected/36da9d71-f25c-4a4b-96b7-439c4f96bda8-kube-api-access-jwdv4\") pod \"cinder-db-create-kr898\" (UID: \"36da9d71-f25c-4a4b-96b7-439c4f96bda8\") " pod="openstack/cinder-db-create-kr898" Dec 10 12:34:37 crc kubenswrapper[4689]: I1210 12:34:37.105661 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/36da9d71-f25c-4a4b-96b7-439c4f96bda8-operator-scripts\") pod \"cinder-db-create-kr898\" (UID: \"36da9d71-f25c-4a4b-96b7-439c4f96bda8\") " pod="openstack/cinder-db-create-kr898" Dec 10 12:34:37 crc kubenswrapper[4689]: I1210 12:34:37.126929 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-3216-account-create-update-m6phc"] Dec 10 12:34:37 crc kubenswrapper[4689]: I1210 12:34:37.136708 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-3216-account-create-update-m6phc"] Dec 10 12:34:37 crc kubenswrapper[4689]: I1210 12:34:37.136855 4689 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-3216-account-create-update-m6phc" Dec 10 12:34:37 crc kubenswrapper[4689]: I1210 12:34:37.139454 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-db-secret" Dec 10 12:34:37 crc kubenswrapper[4689]: I1210 12:34:37.166274 4689 patch_prober.go:28] interesting pod/machine-config-daemon-db6zk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 10 12:34:37 crc kubenswrapper[4689]: I1210 12:34:37.166336 4689 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-db6zk" podUID="a41ebdcd-910f-4669-992d-296e1a92162b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 10 12:34:37 crc kubenswrapper[4689]: I1210 12:34:37.166371 4689 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-db6zk" Dec 10 12:34:37 crc kubenswrapper[4689]: I1210 12:34:37.167085 4689 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"bf7bbf9875a5b9cc37e5a62ace29b6dd6e4de888067fb82c65a8956ea2149bad"} pod="openshift-machine-config-operator/machine-config-daemon-db6zk" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 10 12:34:37 crc kubenswrapper[4689]: I1210 12:34:37.167135 4689 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-db6zk" podUID="a41ebdcd-910f-4669-992d-296e1a92162b" containerName="machine-config-daemon" containerID="cri-o://bf7bbf9875a5b9cc37e5a62ace29b6dd6e4de888067fb82c65a8956ea2149bad" gracePeriod=600 Dec 10 12:34:37 crc kubenswrapper[4689]: I1210 12:34:37.207208 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4g9vs\" (UniqueName: \"kubernetes.io/projected/10d3bc49-4f1e-4e47-b49c-ebda98a43aa5-kube-api-access-4g9vs\") pod \"barbican-db-create-9t4ht\" (UID: \"10d3bc49-4f1e-4e47-b49c-ebda98a43aa5\") " pod="openstack/barbican-db-create-9t4ht" Dec 10 12:34:37 crc kubenswrapper[4689]: I1210 12:34:37.207309 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/36da9d71-f25c-4a4b-96b7-439c4f96bda8-operator-scripts\") pod \"cinder-db-create-kr898\" (UID: \"36da9d71-f25c-4a4b-96b7-439c4f96bda8\") " pod="openstack/cinder-db-create-kr898" Dec 10 12:34:37 crc kubenswrapper[4689]: I1210 12:34:37.207352 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/10d3bc49-4f1e-4e47-b49c-ebda98a43aa5-operator-scripts\") pod \"barbican-db-create-9t4ht\" (UID: \"10d3bc49-4f1e-4e47-b49c-ebda98a43aa5\") " pod="openstack/barbican-db-create-9t4ht" Dec 10 12:34:37 crc kubenswrapper[4689]: I1210 12:34:37.207425 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jwdv4\" (UniqueName: \"kubernetes.io/projected/36da9d71-f25c-4a4b-96b7-439c4f96bda8-kube-api-access-jwdv4\") pod \"cinder-db-create-kr898\" (UID: 
\"36da9d71-f25c-4a4b-96b7-439c4f96bda8\") " pod="openstack/cinder-db-create-kr898" Dec 10 12:34:37 crc kubenswrapper[4689]: I1210 12:34:37.208493 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/36da9d71-f25c-4a4b-96b7-439c4f96bda8-operator-scripts\") pod \"cinder-db-create-kr898\" (UID: \"36da9d71-f25c-4a4b-96b7-439c4f96bda8\") " pod="openstack/cinder-db-create-kr898" Dec 10 12:34:37 crc kubenswrapper[4689]: I1210 12:34:37.233686 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jwdv4\" (UniqueName: \"kubernetes.io/projected/36da9d71-f25c-4a4b-96b7-439c4f96bda8-kube-api-access-jwdv4\") pod \"cinder-db-create-kr898\" (UID: \"36da9d71-f25c-4a4b-96b7-439c4f96bda8\") " pod="openstack/cinder-db-create-kr898" Dec 10 12:34:37 crc kubenswrapper[4689]: I1210 12:34:37.242755 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-b5f6-account-create-update-scpqx"] Dec 10 12:34:37 crc kubenswrapper[4689]: I1210 12:34:37.243788 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-b5f6-account-create-update-scpqx" Dec 10 12:34:37 crc kubenswrapper[4689]: I1210 12:34:37.248685 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-db-secret" Dec 10 12:34:37 crc kubenswrapper[4689]: I1210 12:34:37.268000 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-b5f6-account-create-update-scpqx"] Dec 10 12:34:37 crc kubenswrapper[4689]: I1210 12:34:37.309414 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4g9vs\" (UniqueName: \"kubernetes.io/projected/10d3bc49-4f1e-4e47-b49c-ebda98a43aa5-kube-api-access-4g9vs\") pod \"barbican-db-create-9t4ht\" (UID: \"10d3bc49-4f1e-4e47-b49c-ebda98a43aa5\") " pod="openstack/barbican-db-create-9t4ht" Dec 10 12:34:37 crc kubenswrapper[4689]: I1210 12:34:37.309491 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d1fa6986-f4e1-4b99-8def-9f9017c41cb7-operator-scripts\") pod \"cinder-3216-account-create-update-m6phc\" (UID: \"d1fa6986-f4e1-4b99-8def-9f9017c41cb7\") " pod="openstack/cinder-3216-account-create-update-m6phc" Dec 10 12:34:37 crc kubenswrapper[4689]: I1210 12:34:37.309557 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/10d3bc49-4f1e-4e47-b49c-ebda98a43aa5-operator-scripts\") pod \"barbican-db-create-9t4ht\" (UID: \"10d3bc49-4f1e-4e47-b49c-ebda98a43aa5\") " pod="openstack/barbican-db-create-9t4ht" Dec 10 12:34:37 crc kubenswrapper[4689]: I1210 12:34:37.309622 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tvcrm\" (UniqueName: \"kubernetes.io/projected/d1fa6986-f4e1-4b99-8def-9f9017c41cb7-kube-api-access-tvcrm\") pod \"cinder-3216-account-create-update-m6phc\" (UID: \"d1fa6986-f4e1-4b99-8def-9f9017c41cb7\") " pod="openstack/cinder-3216-account-create-update-m6phc" Dec 10 12:34:37 crc kubenswrapper[4689]: I1210 12:34:37.310582 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/10d3bc49-4f1e-4e47-b49c-ebda98a43aa5-operator-scripts\") pod \"barbican-db-create-9t4ht\" (UID: \"10d3bc49-4f1e-4e47-b49c-ebda98a43aa5\") " 
pod="openstack/barbican-db-create-9t4ht" Dec 10 12:34:37 crc kubenswrapper[4689]: I1210 12:34:37.314202 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-kr898" Dec 10 12:34:37 crc kubenswrapper[4689]: I1210 12:34:37.325602 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-create-6h5md"] Dec 10 12:34:37 crc kubenswrapper[4689]: I1210 12:34:37.326859 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-6h5md" Dec 10 12:34:37 crc kubenswrapper[4689]: I1210 12:34:37.357849 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-6h5md"] Dec 10 12:34:37 crc kubenswrapper[4689]: I1210 12:34:37.365407 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4g9vs\" (UniqueName: \"kubernetes.io/projected/10d3bc49-4f1e-4e47-b49c-ebda98a43aa5-kube-api-access-4g9vs\") pod \"barbican-db-create-9t4ht\" (UID: \"10d3bc49-4f1e-4e47-b49c-ebda98a43aa5\") " pod="openstack/barbican-db-create-9t4ht" Dec 10 12:34:37 crc kubenswrapper[4689]: I1210 12:34:37.378017 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-9t4ht" Dec 10 12:34:37 crc kubenswrapper[4689]: I1210 12:34:37.410523 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mtq57\" (UniqueName: \"kubernetes.io/projected/98a0a6c5-b397-41af-8c50-6b1c662515e0-kube-api-access-mtq57\") pod \"barbican-b5f6-account-create-update-scpqx\" (UID: \"98a0a6c5-b397-41af-8c50-6b1c662515e0\") " pod="openstack/barbican-b5f6-account-create-update-scpqx" Dec 10 12:34:37 crc kubenswrapper[4689]: I1210 12:34:37.410711 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d1fa6986-f4e1-4b99-8def-9f9017c41cb7-operator-scripts\") pod \"cinder-3216-account-create-update-m6phc\" (UID: \"d1fa6986-f4e1-4b99-8def-9f9017c41cb7\") " pod="openstack/cinder-3216-account-create-update-m6phc" Dec 10 12:34:37 crc kubenswrapper[4689]: I1210 12:34:37.410747 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/98a0a6c5-b397-41af-8c50-6b1c662515e0-operator-scripts\") pod \"barbican-b5f6-account-create-update-scpqx\" (UID: \"98a0a6c5-b397-41af-8c50-6b1c662515e0\") " pod="openstack/barbican-b5f6-account-create-update-scpqx" Dec 10 12:34:37 crc kubenswrapper[4689]: I1210 12:34:37.410821 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tvcrm\" (UniqueName: \"kubernetes.io/projected/d1fa6986-f4e1-4b99-8def-9f9017c41cb7-kube-api-access-tvcrm\") pod \"cinder-3216-account-create-update-m6phc\" (UID: \"d1fa6986-f4e1-4b99-8def-9f9017c41cb7\") " pod="openstack/cinder-3216-account-create-update-m6phc" Dec 10 12:34:37 crc kubenswrapper[4689]: I1210 12:34:37.411704 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d1fa6986-f4e1-4b99-8def-9f9017c41cb7-operator-scripts\") pod \"cinder-3216-account-create-update-m6phc\" (UID: \"d1fa6986-f4e1-4b99-8def-9f9017c41cb7\") " pod="openstack/cinder-3216-account-create-update-m6phc" Dec 10 12:34:37 crc kubenswrapper[4689]: I1210 12:34:37.431533 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-tvcrm\" (UniqueName: \"kubernetes.io/projected/d1fa6986-f4e1-4b99-8def-9f9017c41cb7-kube-api-access-tvcrm\") pod \"cinder-3216-account-create-update-m6phc\" (UID: \"d1fa6986-f4e1-4b99-8def-9f9017c41cb7\") " pod="openstack/cinder-3216-account-create-update-m6phc" Dec 10 12:34:37 crc kubenswrapper[4689]: I1210 12:34:37.460957 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-3216-account-create-update-m6phc" Dec 10 12:34:37 crc kubenswrapper[4689]: I1210 12:34:37.511889 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/387ed17c-05e1-4311-853d-cae57b4bdbec-operator-scripts\") pod \"neutron-db-create-6h5md\" (UID: \"387ed17c-05e1-4311-853d-cae57b4bdbec\") " pod="openstack/neutron-db-create-6h5md" Dec 10 12:34:37 crc kubenswrapper[4689]: I1210 12:34:37.511948 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dns6n\" (UniqueName: \"kubernetes.io/projected/387ed17c-05e1-4311-853d-cae57b4bdbec-kube-api-access-dns6n\") pod \"neutron-db-create-6h5md\" (UID: \"387ed17c-05e1-4311-853d-cae57b4bdbec\") " pod="openstack/neutron-db-create-6h5md" Dec 10 12:34:37 crc kubenswrapper[4689]: I1210 12:34:37.512012 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mtq57\" (UniqueName: \"kubernetes.io/projected/98a0a6c5-b397-41af-8c50-6b1c662515e0-kube-api-access-mtq57\") pod \"barbican-b5f6-account-create-update-scpqx\" (UID: \"98a0a6c5-b397-41af-8c50-6b1c662515e0\") " pod="openstack/barbican-b5f6-account-create-update-scpqx" Dec 10 12:34:37 crc kubenswrapper[4689]: I1210 12:34:37.512074 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/98a0a6c5-b397-41af-8c50-6b1c662515e0-operator-scripts\") pod \"barbican-b5f6-account-create-update-scpqx\" (UID: \"98a0a6c5-b397-41af-8c50-6b1c662515e0\") " pod="openstack/barbican-b5f6-account-create-update-scpqx" Dec 10 12:34:37 crc kubenswrapper[4689]: I1210 12:34:37.512675 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-sync-brgtz"] Dec 10 12:34:37 crc kubenswrapper[4689]: I1210 12:34:37.512721 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/98a0a6c5-b397-41af-8c50-6b1c662515e0-operator-scripts\") pod \"barbican-b5f6-account-create-update-scpqx\" (UID: \"98a0a6c5-b397-41af-8c50-6b1c662515e0\") " pod="openstack/barbican-b5f6-account-create-update-scpqx" Dec 10 12:34:37 crc kubenswrapper[4689]: I1210 12:34:37.518161 4689 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-brgtz" Dec 10 12:34:37 crc kubenswrapper[4689]: I1210 12:34:37.520830 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-5qbhw" Dec 10 12:34:37 crc kubenswrapper[4689]: I1210 12:34:37.521220 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Dec 10 12:34:37 crc kubenswrapper[4689]: I1210 12:34:37.521245 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Dec 10 12:34:37 crc kubenswrapper[4689]: I1210 12:34:37.523319 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Dec 10 12:34:37 crc kubenswrapper[4689]: I1210 12:34:37.553101 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mtq57\" (UniqueName: \"kubernetes.io/projected/98a0a6c5-b397-41af-8c50-6b1c662515e0-kube-api-access-mtq57\") pod \"barbican-b5f6-account-create-update-scpqx\" (UID: \"98a0a6c5-b397-41af-8c50-6b1c662515e0\") " pod="openstack/barbican-b5f6-account-create-update-scpqx" Dec 10 12:34:37 crc kubenswrapper[4689]: I1210 12:34:37.595201 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-brgtz"] Dec 10 12:34:37 crc kubenswrapper[4689]: I1210 12:34:37.624207 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b8336932-b518-4fd5-8a96-895454de5855-combined-ca-bundle\") pod \"keystone-db-sync-brgtz\" (UID: \"b8336932-b518-4fd5-8a96-895454de5855\") " pod="openstack/keystone-db-sync-brgtz" Dec 10 12:34:37 crc kubenswrapper[4689]: I1210 12:34:37.624503 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/387ed17c-05e1-4311-853d-cae57b4bdbec-operator-scripts\") pod \"neutron-db-create-6h5md\" (UID: \"387ed17c-05e1-4311-853d-cae57b4bdbec\") " pod="openstack/neutron-db-create-6h5md" Dec 10 12:34:37 crc kubenswrapper[4689]: I1210 12:34:37.624528 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b8336932-b518-4fd5-8a96-895454de5855-config-data\") pod \"keystone-db-sync-brgtz\" (UID: \"b8336932-b518-4fd5-8a96-895454de5855\") " pod="openstack/keystone-db-sync-brgtz" Dec 10 12:34:37 crc kubenswrapper[4689]: I1210 12:34:37.624572 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dns6n\" (UniqueName: \"kubernetes.io/projected/387ed17c-05e1-4311-853d-cae57b4bdbec-kube-api-access-dns6n\") pod \"neutron-db-create-6h5md\" (UID: \"387ed17c-05e1-4311-853d-cae57b4bdbec\") " pod="openstack/neutron-db-create-6h5md" Dec 10 12:34:37 crc kubenswrapper[4689]: I1210 12:34:37.624615 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-59tmz\" (UniqueName: \"kubernetes.io/projected/b8336932-b518-4fd5-8a96-895454de5855-kube-api-access-59tmz\") pod \"keystone-db-sync-brgtz\" (UID: \"b8336932-b518-4fd5-8a96-895454de5855\") " pod="openstack/keystone-db-sync-brgtz" Dec 10 12:34:37 crc kubenswrapper[4689]: I1210 12:34:37.627268 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/387ed17c-05e1-4311-853d-cae57b4bdbec-operator-scripts\") pod 
\"neutron-db-create-6h5md\" (UID: \"387ed17c-05e1-4311-853d-cae57b4bdbec\") " pod="openstack/neutron-db-create-6h5md" Dec 10 12:34:37 crc kubenswrapper[4689]: I1210 12:34:37.651733 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dns6n\" (UniqueName: \"kubernetes.io/projected/387ed17c-05e1-4311-853d-cae57b4bdbec-kube-api-access-dns6n\") pod \"neutron-db-create-6h5md\" (UID: \"387ed17c-05e1-4311-853d-cae57b4bdbec\") " pod="openstack/neutron-db-create-6h5md" Dec 10 12:34:37 crc kubenswrapper[4689]: I1210 12:34:37.666197 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-b5f6-account-create-update-scpqx" Dec 10 12:34:37 crc kubenswrapper[4689]: I1210 12:34:37.675052 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-9e51-account-create-update-pdw2b"] Dec 10 12:34:37 crc kubenswrapper[4689]: I1210 12:34:37.676141 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-9e51-account-create-update-pdw2b" Dec 10 12:34:37 crc kubenswrapper[4689]: I1210 12:34:37.677071 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-6h5md" Dec 10 12:34:37 crc kubenswrapper[4689]: I1210 12:34:37.681150 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret" Dec 10 12:34:37 crc kubenswrapper[4689]: I1210 12:34:37.712635 4689 generic.go:334] "Generic (PLEG): container finished" podID="a41ebdcd-910f-4669-992d-296e1a92162b" containerID="bf7bbf9875a5b9cc37e5a62ace29b6dd6e4de888067fb82c65a8956ea2149bad" exitCode=0 Dec 10 12:34:37 crc kubenswrapper[4689]: I1210 12:34:37.712727 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-db6zk" event={"ID":"a41ebdcd-910f-4669-992d-296e1a92162b","Type":"ContainerDied","Data":"bf7bbf9875a5b9cc37e5a62ace29b6dd6e4de888067fb82c65a8956ea2149bad"} Dec 10 12:34:37 crc kubenswrapper[4689]: I1210 12:34:37.712762 4689 scope.go:117] "RemoveContainer" containerID="6e5c15a4c10b86079bc45e52f2bd74ade92056d772116ec22ae9d0a1a5a11fd9" Dec 10 12:34:37 crc kubenswrapper[4689]: I1210 12:34:37.727060 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b8336932-b518-4fd5-8a96-895454de5855-config-data\") pod \"keystone-db-sync-brgtz\" (UID: \"b8336932-b518-4fd5-8a96-895454de5855\") " pod="openstack/keystone-db-sync-brgtz" Dec 10 12:34:37 crc kubenswrapper[4689]: I1210 12:34:37.727139 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-59tmz\" (UniqueName: \"kubernetes.io/projected/b8336932-b518-4fd5-8a96-895454de5855-kube-api-access-59tmz\") pod \"keystone-db-sync-brgtz\" (UID: \"b8336932-b518-4fd5-8a96-895454de5855\") " pod="openstack/keystone-db-sync-brgtz" Dec 10 12:34:37 crc kubenswrapper[4689]: I1210 12:34:37.727202 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b8336932-b518-4fd5-8a96-895454de5855-combined-ca-bundle\") pod \"keystone-db-sync-brgtz\" (UID: \"b8336932-b518-4fd5-8a96-895454de5855\") " pod="openstack/keystone-db-sync-brgtz" Dec 10 12:34:37 crc kubenswrapper[4689]: I1210 12:34:37.729762 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-9e51-account-create-update-pdw2b"] Dec 10 12:34:37 crc kubenswrapper[4689]: I1210 
12:34:37.733628 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b8336932-b518-4fd5-8a96-895454de5855-config-data\") pod \"keystone-db-sync-brgtz\" (UID: \"b8336932-b518-4fd5-8a96-895454de5855\") " pod="openstack/keystone-db-sync-brgtz" Dec 10 12:34:37 crc kubenswrapper[4689]: I1210 12:34:37.742839 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"c4b54476-e438-46d8-b234-c8f661f5c26f","Type":"ContainerStarted","Data":"c91d85b7404e3f390cbf43559690eb2c094c1148c46442dfc6801d8fb8bf7d6d"} Dec 10 12:34:37 crc kubenswrapper[4689]: I1210 12:34:37.742891 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"c4b54476-e438-46d8-b234-c8f661f5c26f","Type":"ContainerStarted","Data":"ddcbd1c4bf0981d9d7d8cd401c7763e79015730f0a45eec8b2e826c5dff38022"} Dec 10 12:34:37 crc kubenswrapper[4689]: I1210 12:34:37.742901 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"c4b54476-e438-46d8-b234-c8f661f5c26f","Type":"ContainerStarted","Data":"dbdbb5cc7412a148240a22b503c93488c23e1e4e5ea4d9cf80271887a8893c40"} Dec 10 12:34:37 crc kubenswrapper[4689]: I1210 12:34:37.743578 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b8336932-b518-4fd5-8a96-895454de5855-combined-ca-bundle\") pod \"keystone-db-sync-brgtz\" (UID: \"b8336932-b518-4fd5-8a96-895454de5855\") " pod="openstack/keystone-db-sync-brgtz" Dec 10 12:34:37 crc kubenswrapper[4689]: I1210 12:34:37.748667 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-59tmz\" (UniqueName: \"kubernetes.io/projected/b8336932-b518-4fd5-8a96-895454de5855-kube-api-access-59tmz\") pod \"keystone-db-sync-brgtz\" (UID: \"b8336932-b518-4fd5-8a96-895454de5855\") " pod="openstack/keystone-db-sync-brgtz" Dec 10 12:34:37 crc kubenswrapper[4689]: I1210 12:34:37.828987 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cjdcm\" (UniqueName: \"kubernetes.io/projected/006043c5-508c-44c9-9b45-73756a05c173-kube-api-access-cjdcm\") pod \"neutron-9e51-account-create-update-pdw2b\" (UID: \"006043c5-508c-44c9-9b45-73756a05c173\") " pod="openstack/neutron-9e51-account-create-update-pdw2b" Dec 10 12:34:37 crc kubenswrapper[4689]: I1210 12:34:37.829036 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/006043c5-508c-44c9-9b45-73756a05c173-operator-scripts\") pod \"neutron-9e51-account-create-update-pdw2b\" (UID: \"006043c5-508c-44c9-9b45-73756a05c173\") " pod="openstack/neutron-9e51-account-create-update-pdw2b" Dec 10 12:34:37 crc kubenswrapper[4689]: I1210 12:34:37.880455 4689 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-brgtz" Dec 10 12:34:37 crc kubenswrapper[4689]: I1210 12:34:37.884299 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-kr898"] Dec 10 12:34:37 crc kubenswrapper[4689]: W1210 12:34:37.894877 4689 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod36da9d71_f25c_4a4b_96b7_439c4f96bda8.slice/crio-71a96aef4896988de326008e649f7bc6b7264e94066efabc58efa87dd9e2ea27 WatchSource:0}: Error finding container 71a96aef4896988de326008e649f7bc6b7264e94066efabc58efa87dd9e2ea27: Status 404 returned error can't find the container with id 71a96aef4896988de326008e649f7bc6b7264e94066efabc58efa87dd9e2ea27 Dec 10 12:34:37 crc kubenswrapper[4689]: I1210 12:34:37.930154 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cjdcm\" (UniqueName: \"kubernetes.io/projected/006043c5-508c-44c9-9b45-73756a05c173-kube-api-access-cjdcm\") pod \"neutron-9e51-account-create-update-pdw2b\" (UID: \"006043c5-508c-44c9-9b45-73756a05c173\") " pod="openstack/neutron-9e51-account-create-update-pdw2b" Dec 10 12:34:37 crc kubenswrapper[4689]: I1210 12:34:37.930222 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/006043c5-508c-44c9-9b45-73756a05c173-operator-scripts\") pod \"neutron-9e51-account-create-update-pdw2b\" (UID: \"006043c5-508c-44c9-9b45-73756a05c173\") " pod="openstack/neutron-9e51-account-create-update-pdw2b" Dec 10 12:34:37 crc kubenswrapper[4689]: I1210 12:34:37.930772 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/006043c5-508c-44c9-9b45-73756a05c173-operator-scripts\") pod \"neutron-9e51-account-create-update-pdw2b\" (UID: \"006043c5-508c-44c9-9b45-73756a05c173\") " pod="openstack/neutron-9e51-account-create-update-pdw2b" Dec 10 12:34:37 crc kubenswrapper[4689]: I1210 12:34:37.948110 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cjdcm\" (UniqueName: \"kubernetes.io/projected/006043c5-508c-44c9-9b45-73756a05c173-kube-api-access-cjdcm\") pod \"neutron-9e51-account-create-update-pdw2b\" (UID: \"006043c5-508c-44c9-9b45-73756a05c173\") " pod="openstack/neutron-9e51-account-create-update-pdw2b" Dec 10 12:34:38 crc kubenswrapper[4689]: I1210 12:34:38.045598 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-9t4ht"] Dec 10 12:34:38 crc kubenswrapper[4689]: W1210 12:34:38.060305 4689 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod10d3bc49_4f1e_4e47_b49c_ebda98a43aa5.slice/crio-17b0f1be2640282bf3a19c7df422df104279cb0494e73132f821cee94d9891d5 WatchSource:0}: Error finding container 17b0f1be2640282bf3a19c7df422df104279cb0494e73132f821cee94d9891d5: Status 404 returned error can't find the container with id 17b0f1be2640282bf3a19c7df422df104279cb0494e73132f821cee94d9891d5 Dec 10 12:34:38 crc kubenswrapper[4689]: I1210 12:34:38.123572 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-3216-account-create-update-m6phc"] Dec 10 12:34:38 crc kubenswrapper[4689]: I1210 12:34:38.134289 4689 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-9e51-account-create-update-pdw2b" Dec 10 12:34:38 crc kubenswrapper[4689]: I1210 12:34:38.279935 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-b5f6-account-create-update-scpqx"] Dec 10 12:34:38 crc kubenswrapper[4689]: I1210 12:34:38.288413 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-6h5md"] Dec 10 12:34:38 crc kubenswrapper[4689]: I1210 12:34:38.415436 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-brgtz"] Dec 10 12:34:38 crc kubenswrapper[4689]: I1210 12:34:38.617091 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-9e51-account-create-update-pdw2b"] Dec 10 12:34:38 crc kubenswrapper[4689]: W1210 12:34:38.629199 4689 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod006043c5_508c_44c9_9b45_73756a05c173.slice/crio-8ad3216e5067ec2285f42727cf33ffb3cb8e339fd09ce619a1e92cfdfff693fd WatchSource:0}: Error finding container 8ad3216e5067ec2285f42727cf33ffb3cb8e339fd09ce619a1e92cfdfff693fd: Status 404 returned error can't find the container with id 8ad3216e5067ec2285f42727cf33ffb3cb8e339fd09ce619a1e92cfdfff693fd Dec 10 12:34:38 crc kubenswrapper[4689]: I1210 12:34:38.755944 4689 generic.go:334] "Generic (PLEG): container finished" podID="36da9d71-f25c-4a4b-96b7-439c4f96bda8" containerID="8c1f6cc7183b4deb370578c6dcd58cf3d165147b49fbbff79fa67f6462275a50" exitCode=0 Dec 10 12:34:38 crc kubenswrapper[4689]: I1210 12:34:38.756029 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-kr898" event={"ID":"36da9d71-f25c-4a4b-96b7-439c4f96bda8","Type":"ContainerDied","Data":"8c1f6cc7183b4deb370578c6dcd58cf3d165147b49fbbff79fa67f6462275a50"} Dec 10 12:34:38 crc kubenswrapper[4689]: I1210 12:34:38.756060 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-kr898" event={"ID":"36da9d71-f25c-4a4b-96b7-439c4f96bda8","Type":"ContainerStarted","Data":"71a96aef4896988de326008e649f7bc6b7264e94066efabc58efa87dd9e2ea27"} Dec 10 12:34:38 crc kubenswrapper[4689]: I1210 12:34:38.758390 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-db6zk" event={"ID":"a41ebdcd-910f-4669-992d-296e1a92162b","Type":"ContainerStarted","Data":"c7f561a90578da7003727a817e3d47870ce7681703176f0fc2d8b1240ef4ff23"} Dec 10 12:34:38 crc kubenswrapper[4689]: I1210 12:34:38.760353 4689 generic.go:334] "Generic (PLEG): container finished" podID="d1fa6986-f4e1-4b99-8def-9f9017c41cb7" containerID="8a94fd552ebe3de425adc247c9e27347dee701869c5a3835d40466b288871234" exitCode=0 Dec 10 12:34:38 crc kubenswrapper[4689]: I1210 12:34:38.760400 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-3216-account-create-update-m6phc" event={"ID":"d1fa6986-f4e1-4b99-8def-9f9017c41cb7","Type":"ContainerDied","Data":"8a94fd552ebe3de425adc247c9e27347dee701869c5a3835d40466b288871234"} Dec 10 12:34:38 crc kubenswrapper[4689]: I1210 12:34:38.760418 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-3216-account-create-update-m6phc" event={"ID":"d1fa6986-f4e1-4b99-8def-9f9017c41cb7","Type":"ContainerStarted","Data":"a31ff7943b2018a962945c9f31815048ff240e6cb6c6ca2a631c7839c2e05644"} Dec 10 12:34:38 crc kubenswrapper[4689]: I1210 12:34:38.762314 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/keystone-db-sync-brgtz" event={"ID":"b8336932-b518-4fd5-8a96-895454de5855","Type":"ContainerStarted","Data":"ca82addec8fc7d9d18de12d64bf0986bea3a21278f23c6408318d0d454ce2faa"} Dec 10 12:34:38 crc kubenswrapper[4689]: I1210 12:34:38.763238 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-9e51-account-create-update-pdw2b" event={"ID":"006043c5-508c-44c9-9b45-73756a05c173","Type":"ContainerStarted","Data":"8ad3216e5067ec2285f42727cf33ffb3cb8e339fd09ce619a1e92cfdfff693fd"} Dec 10 12:34:38 crc kubenswrapper[4689]: I1210 12:34:38.764630 4689 generic.go:334] "Generic (PLEG): container finished" podID="10d3bc49-4f1e-4e47-b49c-ebda98a43aa5" containerID="e2654ed5d1c4887a4f741ead5fa8bf6b724ff716a69eee105059c04b4c509305" exitCode=0 Dec 10 12:34:38 crc kubenswrapper[4689]: I1210 12:34:38.764681 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-9t4ht" event={"ID":"10d3bc49-4f1e-4e47-b49c-ebda98a43aa5","Type":"ContainerDied","Data":"e2654ed5d1c4887a4f741ead5fa8bf6b724ff716a69eee105059c04b4c509305"} Dec 10 12:34:38 crc kubenswrapper[4689]: I1210 12:34:38.764699 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-9t4ht" event={"ID":"10d3bc49-4f1e-4e47-b49c-ebda98a43aa5","Type":"ContainerStarted","Data":"17b0f1be2640282bf3a19c7df422df104279cb0494e73132f821cee94d9891d5"} Dec 10 12:34:38 crc kubenswrapper[4689]: I1210 12:34:38.766270 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-6h5md" event={"ID":"387ed17c-05e1-4311-853d-cae57b4bdbec","Type":"ContainerStarted","Data":"183716ddac517cdb6ec9ef6c651e5b365b096e474296f7200711cbe660f28b56"} Dec 10 12:34:38 crc kubenswrapper[4689]: I1210 12:34:38.766318 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-6h5md" event={"ID":"387ed17c-05e1-4311-853d-cae57b4bdbec","Type":"ContainerStarted","Data":"dd0cf3567e401679fc627b068e775ce062a5115ce475ef5e72302da682b524b0"} Dec 10 12:34:38 crc kubenswrapper[4689]: I1210 12:34:38.767786 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-b5f6-account-create-update-scpqx" event={"ID":"98a0a6c5-b397-41af-8c50-6b1c662515e0","Type":"ContainerStarted","Data":"0548c773f6e6ae05b1579f3976f9c2253e028d6eee0257a53211b444bc340ae0"} Dec 10 12:34:38 crc kubenswrapper[4689]: I1210 12:34:38.767838 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-b5f6-account-create-update-scpqx" event={"ID":"98a0a6c5-b397-41af-8c50-6b1c662515e0","Type":"ContainerStarted","Data":"829ce2c2c3cebaf22cc746ceb886bb14ed51b3dc8ed23c766984d0ab01dfd040"} Dec 10 12:34:38 crc kubenswrapper[4689]: I1210 12:34:38.772249 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"c4b54476-e438-46d8-b234-c8f661f5c26f","Type":"ContainerStarted","Data":"5daaa629ec84560c61f97686ef4953122d09e6527243ac34e172ec46c3700b61"} Dec 10 12:34:38 crc kubenswrapper[4689]: I1210 12:34:38.804345 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-b5f6-account-create-update-scpqx" podStartSLOduration=1.804329939 podStartE2EDuration="1.804329939s" podCreationTimestamp="2025-12-10 12:34:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 12:34:38.797062138 +0000 UTC m=+1146.585143276" watchObservedRunningTime="2025-12-10 12:34:38.804329939 +0000 UTC m=+1146.592411077" 
Dec 10 12:34:39 crc kubenswrapper[4689]: I1210 12:34:39.786697 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"c4b54476-e438-46d8-b234-c8f661f5c26f","Type":"ContainerStarted","Data":"e80398d4b8976204ff20c4e5f99f5cbf79405440b33949be8e48d2ebd3eaefb7"}
Dec 10 12:34:39 crc kubenswrapper[4689]: I1210 12:34:39.789710 4689 generic.go:334] "Generic (PLEG): container finished" podID="006043c5-508c-44c9-9b45-73756a05c173" containerID="b77fe9cd6bbda252985911173368b42469ec7d4e39ef06da3ae67b3bb7072e5d" exitCode=0
Dec 10 12:34:39 crc kubenswrapper[4689]: I1210 12:34:39.789786 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-9e51-account-create-update-pdw2b" event={"ID":"006043c5-508c-44c9-9b45-73756a05c173","Type":"ContainerDied","Data":"b77fe9cd6bbda252985911173368b42469ec7d4e39ef06da3ae67b3bb7072e5d"}
Dec 10 12:34:39 crc kubenswrapper[4689]: I1210 12:34:39.791758 4689 generic.go:334] "Generic (PLEG): container finished" podID="387ed17c-05e1-4311-853d-cae57b4bdbec" containerID="183716ddac517cdb6ec9ef6c651e5b365b096e474296f7200711cbe660f28b56" exitCode=0
Dec 10 12:34:39 crc kubenswrapper[4689]: I1210 12:34:39.791809 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-6h5md" event={"ID":"387ed17c-05e1-4311-853d-cae57b4bdbec","Type":"ContainerDied","Data":"183716ddac517cdb6ec9ef6c651e5b365b096e474296f7200711cbe660f28b56"}
Dec 10 12:34:39 crc kubenswrapper[4689]: I1210 12:34:39.801327 4689 generic.go:334] "Generic (PLEG): container finished" podID="98a0a6c5-b397-41af-8c50-6b1c662515e0" containerID="0548c773f6e6ae05b1579f3976f9c2253e028d6eee0257a53211b444bc340ae0" exitCode=0
Dec 10 12:34:39 crc kubenswrapper[4689]: I1210 12:34:39.801672 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-b5f6-account-create-update-scpqx" event={"ID":"98a0a6c5-b397-41af-8c50-6b1c662515e0","Type":"ContainerDied","Data":"0548c773f6e6ae05b1579f3976f9c2253e028d6eee0257a53211b444bc340ae0"}
Dec 10 12:34:40 crc kubenswrapper[4689]: I1210 12:34:40.182823 4689 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-9t4ht"
Dec 10 12:34:40 crc kubenswrapper[4689]: I1210 12:34:40.276148 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/10d3bc49-4f1e-4e47-b49c-ebda98a43aa5-operator-scripts\") pod \"10d3bc49-4f1e-4e47-b49c-ebda98a43aa5\" (UID: \"10d3bc49-4f1e-4e47-b49c-ebda98a43aa5\") "
Dec 10 12:34:40 crc kubenswrapper[4689]: I1210 12:34:40.276243 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4g9vs\" (UniqueName: \"kubernetes.io/projected/10d3bc49-4f1e-4e47-b49c-ebda98a43aa5-kube-api-access-4g9vs\") pod \"10d3bc49-4f1e-4e47-b49c-ebda98a43aa5\" (UID: \"10d3bc49-4f1e-4e47-b49c-ebda98a43aa5\") "
Dec 10 12:34:40 crc kubenswrapper[4689]: I1210 12:34:40.277601 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/10d3bc49-4f1e-4e47-b49c-ebda98a43aa5-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "10d3bc49-4f1e-4e47-b49c-ebda98a43aa5" (UID: "10d3bc49-4f1e-4e47-b49c-ebda98a43aa5"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 10 12:34:40 crc kubenswrapper[4689]: I1210 12:34:40.286344 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/10d3bc49-4f1e-4e47-b49c-ebda98a43aa5-kube-api-access-4g9vs" (OuterVolumeSpecName: "kube-api-access-4g9vs") pod "10d3bc49-4f1e-4e47-b49c-ebda98a43aa5" (UID: "10d3bc49-4f1e-4e47-b49c-ebda98a43aa5"). InnerVolumeSpecName "kube-api-access-4g9vs". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 10 12:34:40 crc kubenswrapper[4689]: I1210 12:34:40.313216 4689 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-3216-account-create-update-m6phc"
Dec 10 12:34:40 crc kubenswrapper[4689]: I1210 12:34:40.376765 4689 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-6h5md"
Dec 10 12:34:40 crc kubenswrapper[4689]: I1210 12:34:40.377116 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tvcrm\" (UniqueName: \"kubernetes.io/projected/d1fa6986-f4e1-4b99-8def-9f9017c41cb7-kube-api-access-tvcrm\") pod \"d1fa6986-f4e1-4b99-8def-9f9017c41cb7\" (UID: \"d1fa6986-f4e1-4b99-8def-9f9017c41cb7\") "
Dec 10 12:34:40 crc kubenswrapper[4689]: I1210 12:34:40.377256 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d1fa6986-f4e1-4b99-8def-9f9017c41cb7-operator-scripts\") pod \"d1fa6986-f4e1-4b99-8def-9f9017c41cb7\" (UID: \"d1fa6986-f4e1-4b99-8def-9f9017c41cb7\") "
Dec 10 12:34:40 crc kubenswrapper[4689]: I1210 12:34:40.377514 4689 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/10d3bc49-4f1e-4e47-b49c-ebda98a43aa5-operator-scripts\") on node \"crc\" DevicePath \"\""
Dec 10 12:34:40 crc kubenswrapper[4689]: I1210 12:34:40.377527 4689 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4g9vs\" (UniqueName: \"kubernetes.io/projected/10d3bc49-4f1e-4e47-b49c-ebda98a43aa5-kube-api-access-4g9vs\") on node \"crc\" DevicePath \"\""
Dec 10 12:34:40 crc kubenswrapper[4689]: I1210 12:34:40.378026 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d1fa6986-f4e1-4b99-8def-9f9017c41cb7-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "d1fa6986-f4e1-4b99-8def-9f9017c41cb7" (UID: "d1fa6986-f4e1-4b99-8def-9f9017c41cb7"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 10 12:34:40 crc kubenswrapper[4689]: I1210 12:34:40.383606 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d1fa6986-f4e1-4b99-8def-9f9017c41cb7-kube-api-access-tvcrm" (OuterVolumeSpecName: "kube-api-access-tvcrm") pod "d1fa6986-f4e1-4b99-8def-9f9017c41cb7" (UID: "d1fa6986-f4e1-4b99-8def-9f9017c41cb7"). InnerVolumeSpecName "kube-api-access-tvcrm". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 10 12:34:40 crc kubenswrapper[4689]: I1210 12:34:40.418224 4689 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-kr898"
Dec 10 12:34:40 crc kubenswrapper[4689]: I1210 12:34:40.478780 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dns6n\" (UniqueName: \"kubernetes.io/projected/387ed17c-05e1-4311-853d-cae57b4bdbec-kube-api-access-dns6n\") pod \"387ed17c-05e1-4311-853d-cae57b4bdbec\" (UID: \"387ed17c-05e1-4311-853d-cae57b4bdbec\") "
Dec 10 12:34:40 crc kubenswrapper[4689]: I1210 12:34:40.478881 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/387ed17c-05e1-4311-853d-cae57b4bdbec-operator-scripts\") pod \"387ed17c-05e1-4311-853d-cae57b4bdbec\" (UID: \"387ed17c-05e1-4311-853d-cae57b4bdbec\") "
Dec 10 12:34:40 crc kubenswrapper[4689]: I1210 12:34:40.479019 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/36da9d71-f25c-4a4b-96b7-439c4f96bda8-operator-scripts\") pod \"36da9d71-f25c-4a4b-96b7-439c4f96bda8\" (UID: \"36da9d71-f25c-4a4b-96b7-439c4f96bda8\") "
Dec 10 12:34:40 crc kubenswrapper[4689]: I1210 12:34:40.479084 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jwdv4\" (UniqueName: \"kubernetes.io/projected/36da9d71-f25c-4a4b-96b7-439c4f96bda8-kube-api-access-jwdv4\") pod \"36da9d71-f25c-4a4b-96b7-439c4f96bda8\" (UID: \"36da9d71-f25c-4a4b-96b7-439c4f96bda8\") "
Dec 10 12:34:40 crc kubenswrapper[4689]: I1210 12:34:40.479852 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/36da9d71-f25c-4a4b-96b7-439c4f96bda8-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "36da9d71-f25c-4a4b-96b7-439c4f96bda8" (UID: "36da9d71-f25c-4a4b-96b7-439c4f96bda8"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 10 12:34:40 crc kubenswrapper[4689]: I1210 12:34:40.480221 4689 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/36da9d71-f25c-4a4b-96b7-439c4f96bda8-operator-scripts\") on node \"crc\" DevicePath \"\""
Dec 10 12:34:40 crc kubenswrapper[4689]: I1210 12:34:40.480250 4689 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tvcrm\" (UniqueName: \"kubernetes.io/projected/d1fa6986-f4e1-4b99-8def-9f9017c41cb7-kube-api-access-tvcrm\") on node \"crc\" DevicePath \"\""
Dec 10 12:34:40 crc kubenswrapper[4689]: I1210 12:34:40.480264 4689 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d1fa6986-f4e1-4b99-8def-9f9017c41cb7-operator-scripts\") on node \"crc\" DevicePath \"\""
Dec 10 12:34:40 crc kubenswrapper[4689]: I1210 12:34:40.482281 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/36da9d71-f25c-4a4b-96b7-439c4f96bda8-kube-api-access-jwdv4" (OuterVolumeSpecName: "kube-api-access-jwdv4") pod "36da9d71-f25c-4a4b-96b7-439c4f96bda8" (UID: "36da9d71-f25c-4a4b-96b7-439c4f96bda8"). InnerVolumeSpecName "kube-api-access-jwdv4". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 10 12:34:40 crc kubenswrapper[4689]: I1210 12:34:40.483000 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/387ed17c-05e1-4311-853d-cae57b4bdbec-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "387ed17c-05e1-4311-853d-cae57b4bdbec" (UID: "387ed17c-05e1-4311-853d-cae57b4bdbec"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 10 12:34:40 crc kubenswrapper[4689]: I1210 12:34:40.483280 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/387ed17c-05e1-4311-853d-cae57b4bdbec-kube-api-access-dns6n" (OuterVolumeSpecName: "kube-api-access-dns6n") pod "387ed17c-05e1-4311-853d-cae57b4bdbec" (UID: "387ed17c-05e1-4311-853d-cae57b4bdbec"). InnerVolumeSpecName "kube-api-access-dns6n". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 10 12:34:40 crc kubenswrapper[4689]: I1210 12:34:40.581316 4689 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dns6n\" (UniqueName: \"kubernetes.io/projected/387ed17c-05e1-4311-853d-cae57b4bdbec-kube-api-access-dns6n\") on node \"crc\" DevicePath \"\""
Dec 10 12:34:40 crc kubenswrapper[4689]: I1210 12:34:40.581350 4689 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/387ed17c-05e1-4311-853d-cae57b4bdbec-operator-scripts\") on node \"crc\" DevicePath \"\""
Dec 10 12:34:40 crc kubenswrapper[4689]: I1210 12:34:40.581359 4689 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jwdv4\" (UniqueName: \"kubernetes.io/projected/36da9d71-f25c-4a4b-96b7-439c4f96bda8-kube-api-access-jwdv4\") on node \"crc\" DevicePath \"\""
Dec 10 12:34:40 crc kubenswrapper[4689]: I1210 12:34:40.825495 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-9t4ht" event={"ID":"10d3bc49-4f1e-4e47-b49c-ebda98a43aa5","Type":"ContainerDied","Data":"17b0f1be2640282bf3a19c7df422df104279cb0494e73132f821cee94d9891d5"}
Dec 10 12:34:40 crc kubenswrapper[4689]: I1210 12:34:40.825519 4689 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-9t4ht"
Dec 10 12:34:40 crc kubenswrapper[4689]: I1210 12:34:40.825540 4689 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="17b0f1be2640282bf3a19c7df422df104279cb0494e73132f821cee94d9891d5"
Dec 10 12:34:40 crc kubenswrapper[4689]: I1210 12:34:40.827273 4689 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-6h5md"
Dec 10 12:34:40 crc kubenswrapper[4689]: I1210 12:34:40.827267 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-6h5md" event={"ID":"387ed17c-05e1-4311-853d-cae57b4bdbec","Type":"ContainerDied","Data":"dd0cf3567e401679fc627b068e775ce062a5115ce475ef5e72302da682b524b0"}
Dec 10 12:34:40 crc kubenswrapper[4689]: I1210 12:34:40.827311 4689 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dd0cf3567e401679fc627b068e775ce062a5115ce475ef5e72302da682b524b0"
Dec 10 12:34:40 crc kubenswrapper[4689]: I1210 12:34:40.829596 4689 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-3216-account-create-update-m6phc"
Dec 10 12:34:40 crc kubenswrapper[4689]: I1210 12:34:40.829626 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-3216-account-create-update-m6phc" event={"ID":"d1fa6986-f4e1-4b99-8def-9f9017c41cb7","Type":"ContainerDied","Data":"a31ff7943b2018a962945c9f31815048ff240e6cb6c6ca2a631c7839c2e05644"}
Dec 10 12:34:40 crc kubenswrapper[4689]: I1210 12:34:40.829653 4689 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a31ff7943b2018a962945c9f31815048ff240e6cb6c6ca2a631c7839c2e05644"
Dec 10 12:34:40 crc kubenswrapper[4689]: I1210 12:34:40.833892 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"c4b54476-e438-46d8-b234-c8f661f5c26f","Type":"ContainerStarted","Data":"3ab8a878ac8f5fd21c12f5f70465933682a8b6055b793f2ea58de68a857bd46d"}
Dec 10 12:34:40 crc kubenswrapper[4689]: I1210 12:34:40.833935 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"c4b54476-e438-46d8-b234-c8f661f5c26f","Type":"ContainerStarted","Data":"faad8456ee69457a0353e1abcb436f5c6bbcc792842299ee20cb2143cd8734f5"}
Dec 10 12:34:40 crc kubenswrapper[4689]: I1210 12:34:40.833947 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"c4b54476-e438-46d8-b234-c8f661f5c26f","Type":"ContainerStarted","Data":"bc2da2f9288967d01aa79bf64a7ac7f94c475f86753b84e2854fc6f7fbe56c3e"}
Dec 10 12:34:40 crc kubenswrapper[4689]: I1210 12:34:40.835282 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-kr898" event={"ID":"36da9d71-f25c-4a4b-96b7-439c4f96bda8","Type":"ContainerDied","Data":"71a96aef4896988de326008e649f7bc6b7264e94066efabc58efa87dd9e2ea27"}
Dec 10 12:34:40 crc kubenswrapper[4689]: I1210 12:34:40.835309 4689 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="71a96aef4896988de326008e649f7bc6b7264e94066efabc58efa87dd9e2ea27"
Dec 10 12:34:40 crc kubenswrapper[4689]: I1210 12:34:40.835353 4689 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-kr898"
Dec 10 12:34:44 crc kubenswrapper[4689]: I1210 12:34:44.191114 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5b946c75cc-47knk"
Dec 10 12:34:44 crc kubenswrapper[4689]: I1210 12:34:44.246595 4689 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-698758b865-xrcb2"]
Dec 10 12:34:44 crc kubenswrapper[4689]: I1210 12:34:44.246856 4689 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-698758b865-xrcb2" podUID="e86fae08-d350-4939-a3fb-1131e3f5e5a2" containerName="dnsmasq-dns" containerID="cri-o://c297b8623e09a9567b18e84d35e592c9e81d4fb6f3de4116f69e39a981d64543" gracePeriod=10
Dec 10 12:34:44 crc kubenswrapper[4689]: I1210 12:34:44.557353 4689 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-9e51-account-create-update-pdw2b"
Dec 10 12:34:44 crc kubenswrapper[4689]: I1210 12:34:44.580749 4689 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-b5f6-account-create-update-scpqx"
Dec 10 12:34:44 crc kubenswrapper[4689]: I1210 12:34:44.647159 4689 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-xrcb2"
Dec 10 12:34:44 crc kubenswrapper[4689]: I1210 12:34:44.745208 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e86fae08-d350-4939-a3fb-1131e3f5e5a2-config\") pod \"e86fae08-d350-4939-a3fb-1131e3f5e5a2\" (UID: \"e86fae08-d350-4939-a3fb-1131e3f5e5a2\") "
Dec 10 12:34:44 crc kubenswrapper[4689]: I1210 12:34:44.745387 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cjdcm\" (UniqueName: \"kubernetes.io/projected/006043c5-508c-44c9-9b45-73756a05c173-kube-api-access-cjdcm\") pod \"006043c5-508c-44c9-9b45-73756a05c173\" (UID: \"006043c5-508c-44c9-9b45-73756a05c173\") "
Dec 10 12:34:44 crc kubenswrapper[4689]: I1210 12:34:44.745489 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e86fae08-d350-4939-a3fb-1131e3f5e5a2-ovsdbserver-nb\") pod \"e86fae08-d350-4939-a3fb-1131e3f5e5a2\" (UID: \"e86fae08-d350-4939-a3fb-1131e3f5e5a2\") "
Dec 10 12:34:44 crc kubenswrapper[4689]: I1210 12:34:44.745527 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e86fae08-d350-4939-a3fb-1131e3f5e5a2-ovsdbserver-sb\") pod \"e86fae08-d350-4939-a3fb-1131e3f5e5a2\" (UID: \"e86fae08-d350-4939-a3fb-1131e3f5e5a2\") "
Dec 10 12:34:44 crc kubenswrapper[4689]: I1210 12:34:44.745573 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/006043c5-508c-44c9-9b45-73756a05c173-operator-scripts\") pod \"006043c5-508c-44c9-9b45-73756a05c173\" (UID: \"006043c5-508c-44c9-9b45-73756a05c173\") "
Dec 10 12:34:44 crc kubenswrapper[4689]: I1210 12:34:44.745657 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-phlw9\" (UniqueName: \"kubernetes.io/projected/e86fae08-d350-4939-a3fb-1131e3f5e5a2-kube-api-access-phlw9\") pod \"e86fae08-d350-4939-a3fb-1131e3f5e5a2\" (UID: \"e86fae08-d350-4939-a3fb-1131e3f5e5a2\") "
Dec 10 12:34:44 crc kubenswrapper[4689]: I1210 12:34:44.745686 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/98a0a6c5-b397-41af-8c50-6b1c662515e0-operator-scripts\") pod \"98a0a6c5-b397-41af-8c50-6b1c662515e0\" (UID: \"98a0a6c5-b397-41af-8c50-6b1c662515e0\") "
Dec 10 12:34:44 crc kubenswrapper[4689]: I1210 12:34:44.745718 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mtq57\" (UniqueName: \"kubernetes.io/projected/98a0a6c5-b397-41af-8c50-6b1c662515e0-kube-api-access-mtq57\") pod \"98a0a6c5-b397-41af-8c50-6b1c662515e0\" (UID: \"98a0a6c5-b397-41af-8c50-6b1c662515e0\") "
Dec 10 12:34:44 crc kubenswrapper[4689]: I1210 12:34:44.745744 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e86fae08-d350-4939-a3fb-1131e3f5e5a2-dns-svc\") pod \"e86fae08-d350-4939-a3fb-1131e3f5e5a2\" (UID: \"e86fae08-d350-4939-a3fb-1131e3f5e5a2\") "
Dec 10 12:34:44 crc kubenswrapper[4689]: I1210 12:34:44.746201 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/98a0a6c5-b397-41af-8c50-6b1c662515e0-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "98a0a6c5-b397-41af-8c50-6b1c662515e0" (UID: "98a0a6c5-b397-41af-8c50-6b1c662515e0"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 10 12:34:44 crc kubenswrapper[4689]: I1210 12:34:44.746556 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/006043c5-508c-44c9-9b45-73756a05c173-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "006043c5-508c-44c9-9b45-73756a05c173" (UID: "006043c5-508c-44c9-9b45-73756a05c173"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 10 12:34:44 crc kubenswrapper[4689]: I1210 12:34:44.749325 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/98a0a6c5-b397-41af-8c50-6b1c662515e0-kube-api-access-mtq57" (OuterVolumeSpecName: "kube-api-access-mtq57") pod "98a0a6c5-b397-41af-8c50-6b1c662515e0" (UID: "98a0a6c5-b397-41af-8c50-6b1c662515e0"). InnerVolumeSpecName "kube-api-access-mtq57". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 10 12:34:44 crc kubenswrapper[4689]: I1210 12:34:44.749672 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/006043c5-508c-44c9-9b45-73756a05c173-kube-api-access-cjdcm" (OuterVolumeSpecName: "kube-api-access-cjdcm") pod "006043c5-508c-44c9-9b45-73756a05c173" (UID: "006043c5-508c-44c9-9b45-73756a05c173"). InnerVolumeSpecName "kube-api-access-cjdcm". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 10 12:34:44 crc kubenswrapper[4689]: I1210 12:34:44.750058 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e86fae08-d350-4939-a3fb-1131e3f5e5a2-kube-api-access-phlw9" (OuterVolumeSpecName: "kube-api-access-phlw9") pod "e86fae08-d350-4939-a3fb-1131e3f5e5a2" (UID: "e86fae08-d350-4939-a3fb-1131e3f5e5a2"). InnerVolumeSpecName "kube-api-access-phlw9". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 10 12:34:44 crc kubenswrapper[4689]: I1210 12:34:44.783629 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e86fae08-d350-4939-a3fb-1131e3f5e5a2-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "e86fae08-d350-4939-a3fb-1131e3f5e5a2" (UID: "e86fae08-d350-4939-a3fb-1131e3f5e5a2"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 10 12:34:44 crc kubenswrapper[4689]: I1210 12:34:44.783733 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e86fae08-d350-4939-a3fb-1131e3f5e5a2-config" (OuterVolumeSpecName: "config") pod "e86fae08-d350-4939-a3fb-1131e3f5e5a2" (UID: "e86fae08-d350-4939-a3fb-1131e3f5e5a2"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 10 12:34:44 crc kubenswrapper[4689]: I1210 12:34:44.783623 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e86fae08-d350-4939-a3fb-1131e3f5e5a2-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "e86fae08-d350-4939-a3fb-1131e3f5e5a2" (UID: "e86fae08-d350-4939-a3fb-1131e3f5e5a2"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 10 12:34:44 crc kubenswrapper[4689]: I1210 12:34:44.783901 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e86fae08-d350-4939-a3fb-1131e3f5e5a2-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "e86fae08-d350-4939-a3fb-1131e3f5e5a2" (UID: "e86fae08-d350-4939-a3fb-1131e3f5e5a2"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 10 12:34:44 crc kubenswrapper[4689]: I1210 12:34:44.848041 4689 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-phlw9\" (UniqueName: \"kubernetes.io/projected/e86fae08-d350-4939-a3fb-1131e3f5e5a2-kube-api-access-phlw9\") on node \"crc\" DevicePath \"\""
Dec 10 12:34:44 crc kubenswrapper[4689]: I1210 12:34:44.848087 4689 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/98a0a6c5-b397-41af-8c50-6b1c662515e0-operator-scripts\") on node \"crc\" DevicePath \"\""
Dec 10 12:34:44 crc kubenswrapper[4689]: I1210 12:34:44.848099 4689 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mtq57\" (UniqueName: \"kubernetes.io/projected/98a0a6c5-b397-41af-8c50-6b1c662515e0-kube-api-access-mtq57\") on node \"crc\" DevicePath \"\""
Dec 10 12:34:44 crc kubenswrapper[4689]: I1210 12:34:44.848111 4689 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e86fae08-d350-4939-a3fb-1131e3f5e5a2-dns-svc\") on node \"crc\" DevicePath \"\""
Dec 10 12:34:44 crc kubenswrapper[4689]: I1210 12:34:44.848123 4689 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e86fae08-d350-4939-a3fb-1131e3f5e5a2-config\") on node \"crc\" DevicePath \"\""
Dec 10 12:34:44 crc kubenswrapper[4689]: I1210 12:34:44.848133 4689 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cjdcm\" (UniqueName: \"kubernetes.io/projected/006043c5-508c-44c9-9b45-73756a05c173-kube-api-access-cjdcm\") on node \"crc\" DevicePath \"\""
Dec 10 12:34:44 crc kubenswrapper[4689]: I1210 12:34:44.848143 4689 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e86fae08-d350-4939-a3fb-1131e3f5e5a2-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Dec 10 12:34:44 crc kubenswrapper[4689]: I1210 12:34:44.848154 4689 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e86fae08-d350-4939-a3fb-1131e3f5e5a2-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Dec 10 12:34:44 crc kubenswrapper[4689]: I1210 12:34:44.848166 4689 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/006043c5-508c-44c9-9b45-73756a05c173-operator-scripts\") on node \"crc\" DevicePath \"\""
Dec 10 12:34:44 crc kubenswrapper[4689]: I1210 12:34:44.875114 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-9e51-account-create-update-pdw2b" event={"ID":"006043c5-508c-44c9-9b45-73756a05c173","Type":"ContainerDied","Data":"8ad3216e5067ec2285f42727cf33ffb3cb8e339fd09ce619a1e92cfdfff693fd"}
Dec 10 12:34:44 crc kubenswrapper[4689]: I1210 12:34:44.875152 4689 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8ad3216e5067ec2285f42727cf33ffb3cb8e339fd09ce619a1e92cfdfff693fd"
Dec 10 12:34:44 crc kubenswrapper[4689]: I1210 12:34:44.875163 4689 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-9e51-account-create-update-pdw2b"
Dec 10 12:34:44 crc kubenswrapper[4689]: I1210 12:34:44.876390 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-brgtz" event={"ID":"b8336932-b518-4fd5-8a96-895454de5855","Type":"ContainerStarted","Data":"22dc2abd908f65340ee1be610486603b9c5c5f41097bb4b1aa066b4b65657941"}
Dec 10 12:34:44 crc kubenswrapper[4689]: I1210 12:34:44.879650 4689 generic.go:334] "Generic (PLEG): container finished" podID="e86fae08-d350-4939-a3fb-1131e3f5e5a2" containerID="c297b8623e09a9567b18e84d35e592c9e81d4fb6f3de4116f69e39a981d64543" exitCode=0
Dec 10 12:34:44 crc kubenswrapper[4689]: I1210 12:34:44.879693 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-xrcb2" event={"ID":"e86fae08-d350-4939-a3fb-1131e3f5e5a2","Type":"ContainerDied","Data":"c297b8623e09a9567b18e84d35e592c9e81d4fb6f3de4116f69e39a981d64543"}
Dec 10 12:34:44 crc kubenswrapper[4689]: I1210 12:34:44.879745 4689 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-xrcb2"
Dec 10 12:34:44 crc kubenswrapper[4689]: I1210 12:34:44.880016 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-xrcb2" event={"ID":"e86fae08-d350-4939-a3fb-1131e3f5e5a2","Type":"ContainerDied","Data":"bd5278b86ee9bae9c8d7c2bb6d428658eface77e98e6a0be57e77297f5e12cda"}
Dec 10 12:34:44 crc kubenswrapper[4689]: I1210 12:34:44.880047 4689 scope.go:117] "RemoveContainer" containerID="c297b8623e09a9567b18e84d35e592c9e81d4fb6f3de4116f69e39a981d64543"
Dec 10 12:34:44 crc kubenswrapper[4689]: I1210 12:34:44.889882 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-b5f6-account-create-update-scpqx" event={"ID":"98a0a6c5-b397-41af-8c50-6b1c662515e0","Type":"ContainerDied","Data":"829ce2c2c3cebaf22cc746ceb886bb14ed51b3dc8ed23c766984d0ab01dfd040"}
Dec 10 12:34:44 crc kubenswrapper[4689]: I1210 12:34:44.890060 4689 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="829ce2c2c3cebaf22cc746ceb886bb14ed51b3dc8ed23c766984d0ab01dfd040"
Dec 10 12:34:44 crc kubenswrapper[4689]: I1210 12:34:44.890215 4689 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-b5f6-account-create-update-scpqx"
Dec 10 12:34:44 crc kubenswrapper[4689]: I1210 12:34:44.892801 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-sync-brgtz" podStartSLOduration=1.983541227 podStartE2EDuration="7.892787153s" podCreationTimestamp="2025-12-10 12:34:37 +0000 UTC" firstStartedPulling="2025-12-10 12:34:38.436142648 +0000 UTC m=+1146.224223786" lastFinishedPulling="2025-12-10 12:34:44.345388554 +0000 UTC m=+1152.133469712" observedRunningTime="2025-12-10 12:34:44.891370978 +0000 UTC m=+1152.679452136" watchObservedRunningTime="2025-12-10 12:34:44.892787153 +0000 UTC m=+1152.680868291"
Dec 10 12:34:44 crc kubenswrapper[4689]: I1210 12:34:44.935165 4689 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-698758b865-xrcb2"]
Dec 10 12:34:44 crc kubenswrapper[4689]: I1210 12:34:44.944464 4689 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-698758b865-xrcb2"]
Dec 10 12:34:45 crc kubenswrapper[4689]: I1210 12:34:45.018138 4689 scope.go:117] "RemoveContainer" containerID="d13ced407fdd31f5941494b0fab63411f2b5403ab560a149f2f0252fbbda38ea"
Dec 10 12:34:45 crc kubenswrapper[4689]: I1210 12:34:45.037922 4689 scope.go:117] "RemoveContainer" containerID="c297b8623e09a9567b18e84d35e592c9e81d4fb6f3de4116f69e39a981d64543"
Dec 10 12:34:45 crc kubenswrapper[4689]: E1210 12:34:45.038412 4689 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c297b8623e09a9567b18e84d35e592c9e81d4fb6f3de4116f69e39a981d64543\": container with ID starting with c297b8623e09a9567b18e84d35e592c9e81d4fb6f3de4116f69e39a981d64543 not found: ID does not exist" containerID="c297b8623e09a9567b18e84d35e592c9e81d4fb6f3de4116f69e39a981d64543"
Dec 10 12:34:45 crc kubenswrapper[4689]: I1210 12:34:45.038476 4689 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c297b8623e09a9567b18e84d35e592c9e81d4fb6f3de4116f69e39a981d64543"} err="failed to get container status \"c297b8623e09a9567b18e84d35e592c9e81d4fb6f3de4116f69e39a981d64543\": rpc error: code = NotFound desc = could not find container \"c297b8623e09a9567b18e84d35e592c9e81d4fb6f3de4116f69e39a981d64543\": container with ID starting with c297b8623e09a9567b18e84d35e592c9e81d4fb6f3de4116f69e39a981d64543 not found: ID does not exist"
Dec 10 12:34:45 crc kubenswrapper[4689]: I1210 12:34:45.038512 4689 scope.go:117] "RemoveContainer" containerID="d13ced407fdd31f5941494b0fab63411f2b5403ab560a149f2f0252fbbda38ea"
Dec 10 12:34:45 crc kubenswrapper[4689]: E1210 12:34:45.039013 4689 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d13ced407fdd31f5941494b0fab63411f2b5403ab560a149f2f0252fbbda38ea\": container with ID starting with d13ced407fdd31f5941494b0fab63411f2b5403ab560a149f2f0252fbbda38ea not found: ID does not exist" containerID="d13ced407fdd31f5941494b0fab63411f2b5403ab560a149f2f0252fbbda38ea"
Dec 10 12:34:45 crc kubenswrapper[4689]: I1210 12:34:45.039052 4689 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d13ced407fdd31f5941494b0fab63411f2b5403ab560a149f2f0252fbbda38ea"} err="failed to get container status \"d13ced407fdd31f5941494b0fab63411f2b5403ab560a149f2f0252fbbda38ea\": rpc error: code = NotFound desc = could not find container \"d13ced407fdd31f5941494b0fab63411f2b5403ab560a149f2f0252fbbda38ea\": container with ID starting with d13ced407fdd31f5941494b0fab63411f2b5403ab560a149f2f0252fbbda38ea not found: ID does not exist"
Dec 10 12:34:45 crc kubenswrapper[4689]: I1210 12:34:45.901803 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"c4b54476-e438-46d8-b234-c8f661f5c26f","Type":"ContainerStarted","Data":"7744d822cde1f92a6ec3b0f41d9ba37ce50d6c0583b008234aa7f45c9d80e1d1"}
Dec 10 12:34:45 crc kubenswrapper[4689]: I1210 12:34:45.902168 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"c4b54476-e438-46d8-b234-c8f661f5c26f","Type":"ContainerStarted","Data":"530385c38d0c2d98ae62ed17cb4a40ffd0138d9997a5cd2090c4e238ed55d54d"}
Dec 10 12:34:45 crc kubenswrapper[4689]: I1210 12:34:45.902181 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"c4b54476-e438-46d8-b234-c8f661f5c26f","Type":"ContainerStarted","Data":"97abc4f1026c531747a3157e88f418102d4015faac6d44a42a5a3b5787f3a7f1"}
Dec 10 12:34:45 crc kubenswrapper[4689]: I1210 12:34:45.902192 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"c4b54476-e438-46d8-b234-c8f661f5c26f","Type":"ContainerStarted","Data":"4e46bf1cf28985e145f46d10270e447a54df2f480b0d6a98c4adb22d96432dd1"}
Dec 10 12:34:46 crc kubenswrapper[4689]: I1210 12:34:46.515324 4689 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e86fae08-d350-4939-a3fb-1131e3f5e5a2" path="/var/lib/kubelet/pods/e86fae08-d350-4939-a3fb-1131e3f5e5a2/volumes"
Dec 10 12:34:46 crc kubenswrapper[4689]: I1210 12:34:46.932442 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"c4b54476-e438-46d8-b234-c8f661f5c26f","Type":"ContainerStarted","Data":"07122a17505723949270f7d80177fd11df3bc6167dacb6eb87bcc51174052faf"}
Dec 10 12:34:47 crc kubenswrapper[4689]: I1210 12:34:47.947657 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"c4b54476-e438-46d8-b234-c8f661f5c26f","Type":"ContainerStarted","Data":"5d38280be64523d4ed23cb5b6683424b992f50857f254c4c4f6c6aef610842cf"}
Dec 10 12:34:47 crc kubenswrapper[4689]: I1210 12:34:47.948016 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"c4b54476-e438-46d8-b234-c8f661f5c26f","Type":"ContainerStarted","Data":"36e795a21233259ceb69413c29ea5a59c32fbfe5640dd5c8c860c0a27ae2b63d"}
Dec 10 12:34:47 crc kubenswrapper[4689]: I1210 12:34:47.992525 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-storage-0" podStartSLOduration=37.319333328 podStartE2EDuration="46.992502003s" podCreationTimestamp="2025-12-10 12:34:01 +0000 UTC" firstStartedPulling="2025-12-10 12:34:35.407058812 +0000 UTC m=+1143.195139990" lastFinishedPulling="2025-12-10 12:34:45.080227527 +0000 UTC m=+1152.868308665" observedRunningTime="2025-12-10 12:34:47.984162356 +0000 UTC m=+1155.772243514" watchObservedRunningTime="2025-12-10 12:34:47.992502003 +0000 UTC m=+1155.780583151"
Dec 10 12:34:48 crc kubenswrapper[4689]: I1210 12:34:48.270352 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7ff5475cc9-jtfkf"]
Dec 10 12:34:48 crc kubenswrapper[4689]: E1210 12:34:48.270857 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e86fae08-d350-4939-a3fb-1131e3f5e5a2" containerName="init"
Dec 10 12:34:48 crc kubenswrapper[4689]: I1210 12:34:48.270874 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="e86fae08-d350-4939-a3fb-1131e3f5e5a2" containerName="init"
Dec 10 12:34:48 crc kubenswrapper[4689]: E1210 12:34:48.270887 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="387ed17c-05e1-4311-853d-cae57b4bdbec" containerName="mariadb-database-create"
Dec 10 12:34:48 crc kubenswrapper[4689]: I1210 12:34:48.270894 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="387ed17c-05e1-4311-853d-cae57b4bdbec" containerName="mariadb-database-create"
Dec 10 12:34:48 crc kubenswrapper[4689]: E1210 12:34:48.270905 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d1fa6986-f4e1-4b99-8def-9f9017c41cb7" containerName="mariadb-account-create-update"
Dec 10 12:34:48 crc kubenswrapper[4689]: I1210 12:34:48.270912 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="d1fa6986-f4e1-4b99-8def-9f9017c41cb7" containerName="mariadb-account-create-update"
Dec 10 12:34:48 crc kubenswrapper[4689]: E1210 12:34:48.270922 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="10d3bc49-4f1e-4e47-b49c-ebda98a43aa5" containerName="mariadb-database-create"
Dec 10 12:34:48 crc kubenswrapper[4689]: I1210 12:34:48.270928 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="10d3bc49-4f1e-4e47-b49c-ebda98a43aa5" containerName="mariadb-database-create"
Dec 10 12:34:48 crc kubenswrapper[4689]: E1210 12:34:48.270943 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="98a0a6c5-b397-41af-8c50-6b1c662515e0" containerName="mariadb-account-create-update"
Dec 10 12:34:48 crc kubenswrapper[4689]: I1210 12:34:48.270962 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="98a0a6c5-b397-41af-8c50-6b1c662515e0" containerName="mariadb-account-create-update"
Dec 10 12:34:48 crc kubenswrapper[4689]: E1210 12:34:48.271015 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e86fae08-d350-4939-a3fb-1131e3f5e5a2" containerName="dnsmasq-dns"
Dec 10 12:34:48 crc kubenswrapper[4689]: I1210 12:34:48.271022 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="e86fae08-d350-4939-a3fb-1131e3f5e5a2" containerName="dnsmasq-dns"
Dec 10 12:34:48 crc kubenswrapper[4689]: E1210 12:34:48.271030 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="006043c5-508c-44c9-9b45-73756a05c173" containerName="mariadb-account-create-update"
Dec 10 12:34:48 crc kubenswrapper[4689]: I1210 12:34:48.271036 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="006043c5-508c-44c9-9b45-73756a05c173" containerName="mariadb-account-create-update"
Dec 10 12:34:48 crc kubenswrapper[4689]: E1210 12:34:48.271050 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="36da9d71-f25c-4a4b-96b7-439c4f96bda8" containerName="mariadb-database-create"
Dec 10 12:34:48 crc kubenswrapper[4689]: I1210 12:34:48.271056 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="36da9d71-f25c-4a4b-96b7-439c4f96bda8" containerName="mariadb-database-create"
Dec 10 12:34:48 crc kubenswrapper[4689]: I1210 12:34:48.271216 4689 memory_manager.go:354] "RemoveStaleState removing state" podUID="006043c5-508c-44c9-9b45-73756a05c173" containerName="mariadb-account-create-update"
Dec 10 12:34:48 crc kubenswrapper[4689]: I1210 12:34:48.271238 4689 memory_manager.go:354] "RemoveStaleState removing state" podUID="10d3bc49-4f1e-4e47-b49c-ebda98a43aa5" containerName="mariadb-database-create"
Dec 10 12:34:48 crc kubenswrapper[4689]: I1210 12:34:48.271248 4689 memory_manager.go:354] "RemoveStaleState removing state" podUID="d1fa6986-f4e1-4b99-8def-9f9017c41cb7" containerName="mariadb-account-create-update"
Dec 10 12:34:48 crc kubenswrapper[4689]: I1210 12:34:48.271260 4689 memory_manager.go:354] "RemoveStaleState removing state" podUID="e86fae08-d350-4939-a3fb-1131e3f5e5a2" containerName="dnsmasq-dns"
Dec 10 12:34:48 crc kubenswrapper[4689]: I1210 12:34:48.271274 4689 memory_manager.go:354] "RemoveStaleState removing state" podUID="387ed17c-05e1-4311-853d-cae57b4bdbec" containerName="mariadb-database-create"
Dec 10 12:34:48 crc kubenswrapper[4689]: I1210 12:34:48.271288 4689 memory_manager.go:354] "RemoveStaleState removing state" podUID="98a0a6c5-b397-41af-8c50-6b1c662515e0" containerName="mariadb-account-create-update"
Dec 10 12:34:48 crc kubenswrapper[4689]: I1210 12:34:48.271299 4689 memory_manager.go:354] "RemoveStaleState removing state" podUID="36da9d71-f25c-4a4b-96b7-439c4f96bda8" containerName="mariadb-database-create"
Dec 10 12:34:48 crc kubenswrapper[4689]: I1210 12:34:48.272071 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7ff5475cc9-jtfkf"
Dec 10 12:34:48 crc kubenswrapper[4689]: I1210 12:34:48.273863 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-swift-storage-0"
Dec 10 12:34:48 crc kubenswrapper[4689]: I1210 12:34:48.284932 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7ff5475cc9-jtfkf"]
Dec 10 12:34:48 crc kubenswrapper[4689]: I1210 12:34:48.401388 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-92c6w\" (UniqueName: \"kubernetes.io/projected/0be1a0e4-d74f-4537-b184-eee8c421275a-kube-api-access-92c6w\") pod \"dnsmasq-dns-7ff5475cc9-jtfkf\" (UID: \"0be1a0e4-d74f-4537-b184-eee8c421275a\") " pod="openstack/dnsmasq-dns-7ff5475cc9-jtfkf"
Dec 10 12:34:48 crc kubenswrapper[4689]: I1210 12:34:48.401456 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0be1a0e4-d74f-4537-b184-eee8c421275a-dns-svc\") pod \"dnsmasq-dns-7ff5475cc9-jtfkf\" (UID: \"0be1a0e4-d74f-4537-b184-eee8c421275a\") " pod="openstack/dnsmasq-dns-7ff5475cc9-jtfkf"
Dec 10 12:34:48 crc kubenswrapper[4689]: I1210 12:34:48.401512 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0be1a0e4-d74f-4537-b184-eee8c421275a-config\") pod \"dnsmasq-dns-7ff5475cc9-jtfkf\" (UID: \"0be1a0e4-d74f-4537-b184-eee8c421275a\") " pod="openstack/dnsmasq-dns-7ff5475cc9-jtfkf"
Dec 10 12:34:48 crc kubenswrapper[4689]: I1210 12:34:48.401536 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0be1a0e4-d74f-4537-b184-eee8c421275a-ovsdbserver-nb\") pod \"dnsmasq-dns-7ff5475cc9-jtfkf\" (UID: \"0be1a0e4-d74f-4537-b184-eee8c421275a\") " pod="openstack/dnsmasq-dns-7ff5475cc9-jtfkf"
Dec 10 12:34:48 crc kubenswrapper[4689]: I1210 12:34:48.401613 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0be1a0e4-d74f-4537-b184-eee8c421275a-ovsdbserver-sb\") pod \"dnsmasq-dns-7ff5475cc9-jtfkf\" (UID: \"0be1a0e4-d74f-4537-b184-eee8c421275a\") " pod="openstack/dnsmasq-dns-7ff5475cc9-jtfkf"
Dec 10 12:34:48 crc kubenswrapper[4689]: I1210 12:34:48.401696 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0be1a0e4-d74f-4537-b184-eee8c421275a-dns-swift-storage-0\") pod \"dnsmasq-dns-7ff5475cc9-jtfkf\" (UID: \"0be1a0e4-d74f-4537-b184-eee8c421275a\") " pod="openstack/dnsmasq-dns-7ff5475cc9-jtfkf"
Dec 10 12:34:48 crc kubenswrapper[4689]: I1210 12:34:48.503580 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0be1a0e4-d74f-4537-b184-eee8c421275a-dns-swift-storage-0\") pod \"dnsmasq-dns-7ff5475cc9-jtfkf\" (UID: \"0be1a0e4-d74f-4537-b184-eee8c421275a\") " pod="openstack/dnsmasq-dns-7ff5475cc9-jtfkf"
Dec 10 12:34:48 crc kubenswrapper[4689]: I1210 12:34:48.503676 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-92c6w\" (UniqueName: \"kubernetes.io/projected/0be1a0e4-d74f-4537-b184-eee8c421275a-kube-api-access-92c6w\") pod \"dnsmasq-dns-7ff5475cc9-jtfkf\" (UID: \"0be1a0e4-d74f-4537-b184-eee8c421275a\") " pod="openstack/dnsmasq-dns-7ff5475cc9-jtfkf"
Dec 10 12:34:48 crc kubenswrapper[4689]: I1210 12:34:48.503737 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0be1a0e4-d74f-4537-b184-eee8c421275a-dns-svc\") pod \"dnsmasq-dns-7ff5475cc9-jtfkf\" (UID: \"0be1a0e4-d74f-4537-b184-eee8c421275a\") " pod="openstack/dnsmasq-dns-7ff5475cc9-jtfkf"
Dec 10 12:34:48 crc kubenswrapper[4689]: I1210 12:34:48.503822 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0be1a0e4-d74f-4537-b184-eee8c421275a-config\") pod \"dnsmasq-dns-7ff5475cc9-jtfkf\" (UID: \"0be1a0e4-d74f-4537-b184-eee8c421275a\") " pod="openstack/dnsmasq-dns-7ff5475cc9-jtfkf"
Dec 10 12:34:48 crc kubenswrapper[4689]: I1210 12:34:48.503859 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0be1a0e4-d74f-4537-b184-eee8c421275a-ovsdbserver-nb\") pod \"dnsmasq-dns-7ff5475cc9-jtfkf\" (UID: \"0be1a0e4-d74f-4537-b184-eee8c421275a\") " pod="openstack/dnsmasq-dns-7ff5475cc9-jtfkf"
Dec 10 12:34:48 crc kubenswrapper[4689]: I1210 12:34:48.503954 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0be1a0e4-d74f-4537-b184-eee8c421275a-ovsdbserver-sb\") pod \"dnsmasq-dns-7ff5475cc9-jtfkf\" (UID: \"0be1a0e4-d74f-4537-b184-eee8c421275a\") " pod="openstack/dnsmasq-dns-7ff5475cc9-jtfkf"
Dec 10 12:34:48 crc kubenswrapper[4689]: I1210 12:34:48.504610 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0be1a0e4-d74f-4537-b184-eee8c421275a-dns-swift-storage-0\") pod \"dnsmasq-dns-7ff5475cc9-jtfkf\" (UID: \"0be1a0e4-d74f-4537-b184-eee8c421275a\") " pod="openstack/dnsmasq-dns-7ff5475cc9-jtfkf"
Dec 10 12:34:48 crc kubenswrapper[4689]: I1210 12:34:48.504644 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0be1a0e4-d74f-4537-b184-eee8c421275a-config\") pod \"dnsmasq-dns-7ff5475cc9-jtfkf\" (UID: \"0be1a0e4-d74f-4537-b184-eee8c421275a\") " pod="openstack/dnsmasq-dns-7ff5475cc9-jtfkf"
Dec 10 12:34:48 crc kubenswrapper[4689]: I1210 12:34:48.504789 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0be1a0e4-d74f-4537-b184-eee8c421275a-dns-svc\") pod \"dnsmasq-dns-7ff5475cc9-jtfkf\" (UID: \"0be1a0e4-d74f-4537-b184-eee8c421275a\") " pod="openstack/dnsmasq-dns-7ff5475cc9-jtfkf"
Dec 10 12:34:48 crc kubenswrapper[4689]: I1210 12:34:48.504916 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0be1a0e4-d74f-4537-b184-eee8c421275a-ovsdbserver-nb\") pod \"dnsmasq-dns-7ff5475cc9-jtfkf\" (UID: \"0be1a0e4-d74f-4537-b184-eee8c421275a\") " pod="openstack/dnsmasq-dns-7ff5475cc9-jtfkf"
Dec 10 12:34:48 crc kubenswrapper[4689]: I1210 12:34:48.505849 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0be1a0e4-d74f-4537-b184-eee8c421275a-ovsdbserver-sb\") pod \"dnsmasq-dns-7ff5475cc9-jtfkf\" (UID: \"0be1a0e4-d74f-4537-b184-eee8c421275a\") " pod="openstack/dnsmasq-dns-7ff5475cc9-jtfkf"
Dec 10 12:34:48 crc kubenswrapper[4689]: I1210 12:34:48.542806 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-92c6w\" (UniqueName: \"kubernetes.io/projected/0be1a0e4-d74f-4537-b184-eee8c421275a-kube-api-access-92c6w\") pod \"dnsmasq-dns-7ff5475cc9-jtfkf\" (UID: \"0be1a0e4-d74f-4537-b184-eee8c421275a\") " pod="openstack/dnsmasq-dns-7ff5475cc9-jtfkf"
Dec 10 12:34:48 crc kubenswrapper[4689]: I1210 12:34:48.618418 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7ff5475cc9-jtfkf"
Dec 10 12:34:48 crc kubenswrapper[4689]: I1210 12:34:48.958777 4689 generic.go:334] "Generic (PLEG): container finished" podID="b8336932-b518-4fd5-8a96-895454de5855" containerID="22dc2abd908f65340ee1be610486603b9c5c5f41097bb4b1aa066b4b65657941" exitCode=0
Dec 10 12:34:48 crc kubenswrapper[4689]: I1210 12:34:48.958882 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-brgtz" event={"ID":"b8336932-b518-4fd5-8a96-895454de5855","Type":"ContainerDied","Data":"22dc2abd908f65340ee1be610486603b9c5c5f41097bb4b1aa066b4b65657941"}
Dec 10 12:34:49 crc kubenswrapper[4689]: I1210 12:34:49.056480 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7ff5475cc9-jtfkf"]
Dec 10 12:34:49 crc kubenswrapper[4689]: W1210 12:34:49.061424 4689 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0be1a0e4_d74f_4537_b184_eee8c421275a.slice/crio-0414902e800164d1ebbadd81a059ca6d2bf9e3a052d01817052960f4902bc291 WatchSource:0}: Error finding container 0414902e800164d1ebbadd81a059ca6d2bf9e3a052d01817052960f4902bc291: Status 404 returned error can't find the container with id 0414902e800164d1ebbadd81a059ca6d2bf9e3a052d01817052960f4902bc291
Dec 10 12:34:49 crc kubenswrapper[4689]: I1210 12:34:49.969997 4689 generic.go:334] "Generic (PLEG): container finished" podID="0be1a0e4-d74f-4537-b184-eee8c421275a" containerID="c0443236043802f655ef1e16d76fdbe91c805da50d07ef5dbcc5b08af37d2956" exitCode=0
Dec 10 12:34:49 crc kubenswrapper[4689]: I1210 12:34:49.970084 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7ff5475cc9-jtfkf" event={"ID":"0be1a0e4-d74f-4537-b184-eee8c421275a","Type":"ContainerDied","Data":"c0443236043802f655ef1e16d76fdbe91c805da50d07ef5dbcc5b08af37d2956"}
Dec 10 12:34:49 crc kubenswrapper[4689]: I1210 12:34:49.970902 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7ff5475cc9-jtfkf" event={"ID":"0be1a0e4-d74f-4537-b184-eee8c421275a","Type":"ContainerStarted","Data":"0414902e800164d1ebbadd81a059ca6d2bf9e3a052d01817052960f4902bc291"}
Dec 10 12:34:50 crc kubenswrapper[4689]: I1210 12:34:50.227451 4689 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-brgtz"
Dec 10 12:34:50 crc kubenswrapper[4689]: I1210 12:34:50.337483 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b8336932-b518-4fd5-8a96-895454de5855-config-data\") pod \"b8336932-b518-4fd5-8a96-895454de5855\" (UID: \"b8336932-b518-4fd5-8a96-895454de5855\") "
Dec 10 12:34:50 crc kubenswrapper[4689]: I1210 12:34:50.337522 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-59tmz\" (UniqueName: \"kubernetes.io/projected/b8336932-b518-4fd5-8a96-895454de5855-kube-api-access-59tmz\") pod \"b8336932-b518-4fd5-8a96-895454de5855\" (UID: \"b8336932-b518-4fd5-8a96-895454de5855\") "
Dec 10 12:34:50 crc kubenswrapper[4689]: I1210 12:34:50.337580 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b8336932-b518-4fd5-8a96-895454de5855-combined-ca-bundle\") pod \"b8336932-b518-4fd5-8a96-895454de5855\" (UID: \"b8336932-b518-4fd5-8a96-895454de5855\") "
Dec 10 12:34:50 crc kubenswrapper[4689]: I1210 12:34:50.342355 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b8336932-b518-4fd5-8a96-895454de5855-kube-api-access-59tmz" (OuterVolumeSpecName: "kube-api-access-59tmz") pod "b8336932-b518-4fd5-8a96-895454de5855" (UID: "b8336932-b518-4fd5-8a96-895454de5855"). InnerVolumeSpecName "kube-api-access-59tmz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 10 12:34:50 crc kubenswrapper[4689]: I1210 12:34:50.358272 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b8336932-b518-4fd5-8a96-895454de5855-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b8336932-b518-4fd5-8a96-895454de5855" (UID: "b8336932-b518-4fd5-8a96-895454de5855"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 10 12:34:50 crc kubenswrapper[4689]: I1210 12:34:50.392358 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b8336932-b518-4fd5-8a96-895454de5855-config-data" (OuterVolumeSpecName: "config-data") pod "b8336932-b518-4fd5-8a96-895454de5855" (UID: "b8336932-b518-4fd5-8a96-895454de5855"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 10 12:34:50 crc kubenswrapper[4689]: I1210 12:34:50.439220 4689 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b8336932-b518-4fd5-8a96-895454de5855-config-data\") on node \"crc\" DevicePath \"\""
Dec 10 12:34:50 crc kubenswrapper[4689]: I1210 12:34:50.439260 4689 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-59tmz\" (UniqueName: \"kubernetes.io/projected/b8336932-b518-4fd5-8a96-895454de5855-kube-api-access-59tmz\") on node \"crc\" DevicePath \"\""
Dec 10 12:34:50 crc kubenswrapper[4689]: I1210 12:34:50.439275 4689 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b8336932-b518-4fd5-8a96-895454de5855-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Dec 10 12:34:50 crc kubenswrapper[4689]: I1210 12:34:50.981411 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-brgtz" event={"ID":"b8336932-b518-4fd5-8a96-895454de5855","Type":"ContainerDied","Data":"ca82addec8fc7d9d18de12d64bf0986bea3a21278f23c6408318d0d454ce2faa"}
Dec 10 12:34:50 crc kubenswrapper[4689]: I1210 12:34:50.981795 4689 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ca82addec8fc7d9d18de12d64bf0986bea3a21278f23c6408318d0d454ce2faa"
Dec 10 12:34:50 crc kubenswrapper[4689]: I1210 12:34:50.981450 4689 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-brgtz"
Dec 10 12:34:50 crc kubenswrapper[4689]: I1210 12:34:50.984196 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7ff5475cc9-jtfkf" event={"ID":"0be1a0e4-d74f-4537-b184-eee8c421275a","Type":"ContainerStarted","Data":"4d0f79cba7a73e92e89e8639f599c41ce98f73a5cbd62f148dba2796155b00e2"}
Dec 10 12:34:50 crc kubenswrapper[4689]: I1210 12:34:50.984449 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7ff5475cc9-jtfkf"
Dec 10 12:34:51 crc kubenswrapper[4689]: I1210 12:34:51.023747 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7ff5475cc9-jtfkf" podStartSLOduration=3.023721484 podStartE2EDuration="3.023721484s" podCreationTimestamp="2025-12-10 12:34:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 12:34:51.008301101 +0000 UTC m=+1158.796382239" watchObservedRunningTime="2025-12-10 12:34:51.023721484 +0000 UTC m=+1158.811802642"
Dec 10 12:34:51 crc kubenswrapper[4689]: I1210 12:34:51.158335 4689 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7ff5475cc9-jtfkf"]
Dec 10 12:34:51 crc kubenswrapper[4689]: I1210 12:34:51.182681 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-w7tlj"]
Dec 10 12:34:51 crc kubenswrapper[4689]: E1210 12:34:51.186336 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b8336932-b518-4fd5-8a96-895454de5855" containerName="keystone-db-sync"
Dec 10 12:34:51 crc kubenswrapper[4689]: I1210 12:34:51.186374 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="b8336932-b518-4fd5-8a96-895454de5855" containerName="keystone-db-sync"
Dec 10 12:34:51 crc kubenswrapper[4689]: I1210 12:34:51.186615 4689 memory_manager.go:354] "RemoveStaleState removing state" podUID="b8336932-b518-4fd5-8a96-895454de5855" containerName="keystone-db-sync"
Dec 10 12:34:51 crc kubenswrapper[4689]: I1210 12:34:51.187304 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-w7tlj"
Dec 10 12:34:51 crc kubenswrapper[4689]: I1210 12:34:51.192479 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone"
Dec 10 12:34:51 crc kubenswrapper[4689]: I1210 12:34:51.192724 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts"
Dec 10 12:34:51 crc kubenswrapper[4689]: I1210 12:34:51.195230 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret"
Dec 10 12:34:51 crc kubenswrapper[4689]: I1210 12:34:51.195240 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data"
Dec 10 12:34:51 crc kubenswrapper[4689]: I1210 12:34:51.212766 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-w7tlj"]
Dec 10 12:34:51 crc kubenswrapper[4689]: I1210 12:34:51.247776 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-5qbhw"
Dec 10 12:34:51 crc kubenswrapper[4689]: I1210 12:34:51.261881 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5c5cc7c5ff-7z4m6"]
Dec 10 12:34:51 crc kubenswrapper[4689]: I1210 12:34:51.263654 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c5cc7c5ff-7z4m6"
Dec 10 12:34:51 crc kubenswrapper[4689]: I1210 12:34:51.298965 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5c5cc7c5ff-7z4m6"]
Dec 10 12:34:51 crc kubenswrapper[4689]: I1210 12:34:51.353958 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cdd4ece3-0d47-424e-8408-51f9fce3af5d-dns-svc\") pod \"dnsmasq-dns-5c5cc7c5ff-7z4m6\" (UID: \"cdd4ece3-0d47-424e-8408-51f9fce3af5d\") " pod="openstack/dnsmasq-dns-5c5cc7c5ff-7z4m6"
Dec 10 12:34:51 crc kubenswrapper[4689]: I1210 12:34:51.354223 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/048101b4-0993-42cb-bac6-d9384a242856-fernet-keys\") pod \"keystone-bootstrap-w7tlj\" (UID: \"048101b4-0993-42cb-bac6-d9384a242856\") " pod="openstack/keystone-bootstrap-w7tlj"
Dec 10 12:34:51 crc kubenswrapper[4689]: I1210 12:34:51.354309 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cdd4ece3-0d47-424e-8408-51f9fce3af5d-config\") pod \"dnsmasq-dns-5c5cc7c5ff-7z4m6\" (UID: \"cdd4ece3-0d47-424e-8408-51f9fce3af5d\") " pod="openstack/dnsmasq-dns-5c5cc7c5ff-7z4m6"
Dec 10 12:34:51 crc kubenswrapper[4689]: I1210 12:34:51.354384 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/048101b4-0993-42cb-bac6-d9384a242856-combined-ca-bundle\") pod \"keystone-bootstrap-w7tlj\" (UID: \"048101b4-0993-42cb-bac6-d9384a242856\") " pod="openstack/keystone-bootstrap-w7tlj"
Dec 10 12:34:51 crc kubenswrapper[4689]: I1210 12:34:51.354477 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/048101b4-0993-42cb-bac6-d9384a242856-config-data\") pod \"keystone-bootstrap-w7tlj\" (UID: \"048101b4-0993-42cb-bac6-d9384a242856\") " pod="openstack/keystone-bootstrap-w7tlj"
Dec 10 12:34:51 crc kubenswrapper[4689]: I1210 12:34:51.354553 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/048101b4-0993-42cb-bac6-d9384a242856-credential-keys\") pod \"keystone-bootstrap-w7tlj\" (UID: \"048101b4-0993-42cb-bac6-d9384a242856\") " pod="openstack/keystone-bootstrap-w7tlj"
Dec 10 12:34:51 crc kubenswrapper[4689]: I1210 12:34:51.354629 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cdd4ece3-0d47-424e-8408-51f9fce3af5d-ovsdbserver-nb\") pod \"dnsmasq-dns-5c5cc7c5ff-7z4m6\" (UID: \"cdd4ece3-0d47-424e-8408-51f9fce3af5d\") " pod="openstack/dnsmasq-dns-5c5cc7c5ff-7z4m6"
Dec 10 12:34:51 crc kubenswrapper[4689]: I1210 12:34:51.354713 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/cdd4ece3-0d47-424e-8408-51f9fce3af5d-dns-swift-storage-0\") pod \"dnsmasq-dns-5c5cc7c5ff-7z4m6\" (UID: \"cdd4ece3-0d47-424e-8408-51f9fce3af5d\") " pod="openstack/dnsmasq-dns-5c5cc7c5ff-7z4m6"
Dec 10 12:34:51 crc kubenswrapper[4689]: I1210 12:34:51.354804 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b5csm\" (UniqueName: \"kubernetes.io/projected/048101b4-0993-42cb-bac6-d9384a242856-kube-api-access-b5csm\") pod \"keystone-bootstrap-w7tlj\" (UID: \"048101b4-0993-42cb-bac6-d9384a242856\") " pod="openstack/keystone-bootstrap-w7tlj"
Dec 10 12:34:51 crc kubenswrapper[4689]: I1210 12:34:51.354889 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cdd4ece3-0d47-424e-8408-51f9fce3af5d-ovsdbserver-sb\") pod \"dnsmasq-dns-5c5cc7c5ff-7z4m6\" (UID: \"cdd4ece3-0d47-424e-8408-51f9fce3af5d\") " pod="openstack/dnsmasq-dns-5c5cc7c5ff-7z4m6"
Dec 10 12:34:51 crc kubenswrapper[4689]: I1210 12:34:51.354956 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rcm9h\" (UniqueName: \"kubernetes.io/projected/cdd4ece3-0d47-424e-8408-51f9fce3af5d-kube-api-access-rcm9h\") pod \"dnsmasq-dns-5c5cc7c5ff-7z4m6\" (UID: \"cdd4ece3-0d47-424e-8408-51f9fce3af5d\") " pod="openstack/dnsmasq-dns-5c5cc7c5ff-7z4m6"
Dec 10 12:34:51 crc kubenswrapper[4689]: I1210 12:34:51.355055 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/048101b4-0993-42cb-bac6-d9384a242856-scripts\") pod \"keystone-bootstrap-w7tlj\" (UID: \"048101b4-0993-42cb-bac6-d9384a242856\") " pod="openstack/keystone-bootstrap-w7tlj"
Dec 10 12:34:51 crc kubenswrapper[4689]: I1210 12:34:51.459948 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cdd4ece3-0d47-424e-8408-51f9fce3af5d-dns-svc\") pod \"dnsmasq-dns-5c5cc7c5ff-7z4m6\" (UID: \"cdd4ece3-0d47-424e-8408-51f9fce3af5d\") " pod="openstack/dnsmasq-dns-5c5cc7c5ff-7z4m6"
Dec 10 12:34:51 crc kubenswrapper[4689]: I1210 12:34:51.460026 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/048101b4-0993-42cb-bac6-d9384a242856-fernet-keys\") pod \"keystone-bootstrap-w7tlj\" (UID: \"048101b4-0993-42cb-bac6-d9384a242856\") " pod="openstack/keystone-bootstrap-w7tlj"
Dec 10 12:34:51 crc kubenswrapper[4689]: I1210 12:34:51.460068 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cdd4ece3-0d47-424e-8408-51f9fce3af5d-config\") pod \"dnsmasq-dns-5c5cc7c5ff-7z4m6\" (UID: \"cdd4ece3-0d47-424e-8408-51f9fce3af5d\") " pod="openstack/dnsmasq-dns-5c5cc7c5ff-7z4m6"
Dec 10 12:34:51 crc kubenswrapper[4689]: I1210 12:34:51.460091 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/048101b4-0993-42cb-bac6-d9384a242856-combined-ca-bundle\") pod \"keystone-bootstrap-w7tlj\" (UID: \"048101b4-0993-42cb-bac6-d9384a242856\") " pod="openstack/keystone-bootstrap-w7tlj"
Dec 10 12:34:51 crc kubenswrapper[4689]: I1210 12:34:51.460110 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/048101b4-0993-42cb-bac6-d9384a242856-config-data\") pod \"keystone-bootstrap-w7tlj\" (UID: \"048101b4-0993-42cb-bac6-d9384a242856\") " pod="openstack/keystone-bootstrap-w7tlj"
Dec 10 12:34:51 crc kubenswrapper[4689]: I1210 12:34:51.460126 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/048101b4-0993-42cb-bac6-d9384a242856-credential-keys\") pod \"keystone-bootstrap-w7tlj\" (UID: \"048101b4-0993-42cb-bac6-d9384a242856\") " pod="openstack/keystone-bootstrap-w7tlj"
Dec 10 12:34:51 crc kubenswrapper[4689]: I1210 12:34:51.460145 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cdd4ece3-0d47-424e-8408-51f9fce3af5d-ovsdbserver-nb\") pod \"dnsmasq-dns-5c5cc7c5ff-7z4m6\" (UID: \"cdd4ece3-0d47-424e-8408-51f9fce3af5d\") " pod="openstack/dnsmasq-dns-5c5cc7c5ff-7z4m6"
Dec 10 12:34:51 crc kubenswrapper[4689]: I1210 12:34:51.460175 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/cdd4ece3-0d47-424e-8408-51f9fce3af5d-dns-swift-storage-0\") pod \"dnsmasq-dns-5c5cc7c5ff-7z4m6\" (UID: \"cdd4ece3-0d47-424e-8408-51f9fce3af5d\") " pod="openstack/dnsmasq-dns-5c5cc7c5ff-7z4m6"
Dec 10 12:34:51 crc kubenswrapper[4689]: I1210 12:34:51.460206 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b5csm\" (UniqueName: \"kubernetes.io/projected/048101b4-0993-42cb-bac6-d9384a242856-kube-api-access-b5csm\") pod \"keystone-bootstrap-w7tlj\" (UID: \"048101b4-0993-42cb-bac6-d9384a242856\") " pod="openstack/keystone-bootstrap-w7tlj"
Dec 10 12:34:51 crc kubenswrapper[4689]: I1210 12:34:51.460231 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cdd4ece3-0d47-424e-8408-51f9fce3af5d-ovsdbserver-sb\") pod \"dnsmasq-dns-5c5cc7c5ff-7z4m6\" (UID: \"cdd4ece3-0d47-424e-8408-51f9fce3af5d\") " pod="openstack/dnsmasq-dns-5c5cc7c5ff-7z4m6"
Dec 10 12:34:51 crc kubenswrapper[4689]: I1210 12:34:51.460247 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rcm9h\" (UniqueName: \"kubernetes.io/projected/cdd4ece3-0d47-424e-8408-51f9fce3af5d-kube-api-access-rcm9h\") pod \"dnsmasq-dns-5c5cc7c5ff-7z4m6\" (UID: \"cdd4ece3-0d47-424e-8408-51f9fce3af5d\") " pod="openstack/dnsmasq-dns-5c5cc7c5ff-7z4m6"
Dec 10 12:34:51 crc kubenswrapper[4689]: I1210 12:34:51.460266 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/048101b4-0993-42cb-bac6-d9384a242856-scripts\") pod \"keystone-bootstrap-w7tlj\" (UID: \"048101b4-0993-42cb-bac6-d9384a242856\") " pod="openstack/keystone-bootstrap-w7tlj"
Dec 10 12:34:51 crc kubenswrapper[4689]: I1210 12:34:51.465695 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/cdd4ece3-0d47-424e-8408-51f9fce3af5d-dns-swift-storage-0\") pod \"dnsmasq-dns-5c5cc7c5ff-7z4m6\" (UID: \"cdd4ece3-0d47-424e-8408-51f9fce3af5d\") " pod="openstack/dnsmasq-dns-5c5cc7c5ff-7z4m6"
Dec 10 12:34:51 crc kubenswrapper[4689]: I1210 12:34:51.476872 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cdd4ece3-0d47-424e-8408-51f9fce3af5d-ovsdbserver-nb\") pod \"dnsmasq-dns-5c5cc7c5ff-7z4m6\" (UID: \"cdd4ece3-0d47-424e-8408-51f9fce3af5d\") " pod="openstack/dnsmasq-dns-5c5cc7c5ff-7z4m6"
Dec 10 12:34:51 crc kubenswrapper[4689]: I1210 12:34:51.479599 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cdd4ece3-0d47-424e-8408-51f9fce3af5d-config\") pod \"dnsmasq-dns-5c5cc7c5ff-7z4m6\" (UID: \"cdd4ece3-0d47-424e-8408-51f9fce3af5d\") " pod="openstack/dnsmasq-dns-5c5cc7c5ff-7z4m6"
Dec 10 12:34:51 crc kubenswrapper[4689]: I1210 12:34:51.481837 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cdd4ece3-0d47-424e-8408-51f9fce3af5d-dns-svc\") pod \"dnsmasq-dns-5c5cc7c5ff-7z4m6\" (UID: \"cdd4ece3-0d47-424e-8408-51f9fce3af5d\") " pod="openstack/dnsmasq-dns-5c5cc7c5ff-7z4m6"
Dec 10 12:34:51 crc kubenswrapper[4689]: I1210 12:34:51.482642 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/048101b4-0993-42cb-bac6-d9384a242856-config-data\") pod \"keystone-bootstrap-w7tlj\" (UID: \"048101b4-0993-42cb-bac6-d9384a242856\") " pod="openstack/keystone-bootstrap-w7tlj"
Dec 10 12:34:51 crc kubenswrapper[4689]: I1210 12:34:51.483468 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cdd4ece3-0d47-424e-8408-51f9fce3af5d-ovsdbserver-sb\") pod \"dnsmasq-dns-5c5cc7c5ff-7z4m6\" (UID: \"cdd4ece3-0d47-424e-8408-51f9fce3af5d\") " pod="openstack/dnsmasq-dns-5c5cc7c5ff-7z4m6"
Dec 10 12:34:51 crc kubenswrapper[4689]: I1210 12:34:51.486861 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/048101b4-0993-42cb-bac6-d9384a242856-credential-keys\") pod \"keystone-bootstrap-w7tlj\" (UID: \"048101b4-0993-42cb-bac6-d9384a242856\") " pod="openstack/keystone-bootstrap-w7tlj"
Dec 10 12:34:51 crc kubenswrapper[4689]: I1210 12:34:51.487365 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/048101b4-0993-42cb-bac6-d9384a242856-fernet-keys\") pod \"keystone-bootstrap-w7tlj\" (UID: \"048101b4-0993-42cb-bac6-d9384a242856\") " pod="openstack/keystone-bootstrap-w7tlj"
Dec 10 12:34:51 crc kubenswrapper[4689]: I1210 12:34:51.487544 4689
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/048101b4-0993-42cb-bac6-d9384a242856-scripts\") pod \"keystone-bootstrap-w7tlj\" (UID: \"048101b4-0993-42cb-bac6-d9384a242856\") " pod="openstack/keystone-bootstrap-w7tlj" Dec 10 12:34:51 crc kubenswrapper[4689]: I1210 12:34:51.494635 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/048101b4-0993-42cb-bac6-d9384a242856-combined-ca-bundle\") pod \"keystone-bootstrap-w7tlj\" (UID: \"048101b4-0993-42cb-bac6-d9384a242856\") " pod="openstack/keystone-bootstrap-w7tlj" Dec 10 12:34:51 crc kubenswrapper[4689]: I1210 12:34:51.545623 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b5csm\" (UniqueName: \"kubernetes.io/projected/048101b4-0993-42cb-bac6-d9384a242856-kube-api-access-b5csm\") pod \"keystone-bootstrap-w7tlj\" (UID: \"048101b4-0993-42cb-bac6-d9384a242856\") " pod="openstack/keystone-bootstrap-w7tlj" Dec 10 12:34:51 crc kubenswrapper[4689]: I1210 12:34:51.546110 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ironic-db-create-c5fbw"] Dec 10 12:34:51 crc kubenswrapper[4689]: I1210 12:34:51.556266 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-db-create-c5fbw" Dec 10 12:34:51 crc kubenswrapper[4689]: I1210 12:34:51.557294 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-db-create-c5fbw"] Dec 10 12:34:51 crc kubenswrapper[4689]: I1210 12:34:51.560725 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rcm9h\" (UniqueName: \"kubernetes.io/projected/cdd4ece3-0d47-424e-8408-51f9fce3af5d-kube-api-access-rcm9h\") pod \"dnsmasq-dns-5c5cc7c5ff-7z4m6\" (UID: \"cdd4ece3-0d47-424e-8408-51f9fce3af5d\") " pod="openstack/dnsmasq-dns-5c5cc7c5ff-7z4m6" Dec 10 12:34:51 crc kubenswrapper[4689]: I1210 12:34:51.615491 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c5cc7c5ff-7z4m6" Dec 10 12:34:51 crc kubenswrapper[4689]: I1210 12:34:51.633164 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-sync-57ffb"] Dec 10 12:34:51 crc kubenswrapper[4689]: I1210 12:34:51.634281 4689 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-57ffb" Dec 10 12:34:51 crc kubenswrapper[4689]: I1210 12:34:51.656486 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-jlpdr" Dec 10 12:34:51 crc kubenswrapper[4689]: I1210 12:34:51.656808 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Dec 10 12:34:51 crc kubenswrapper[4689]: I1210 12:34:51.656916 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Dec 10 12:34:51 crc kubenswrapper[4689]: I1210 12:34:51.662813 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lgtxt\" (UniqueName: \"kubernetes.io/projected/9880fdc4-4a6b-4353-9b05-fefd96248c09-kube-api-access-lgtxt\") pod \"ironic-db-create-c5fbw\" (UID: \"9880fdc4-4a6b-4353-9b05-fefd96248c09\") " pod="openstack/ironic-db-create-c5fbw" Dec 10 12:34:51 crc kubenswrapper[4689]: I1210 12:34:51.662870 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9880fdc4-4a6b-4353-9b05-fefd96248c09-operator-scripts\") pod \"ironic-db-create-c5fbw\" (UID: \"9880fdc4-4a6b-4353-9b05-fefd96248c09\") " pod="openstack/ironic-db-create-c5fbw" Dec 10 12:34:51 crc kubenswrapper[4689]: I1210 12:34:51.687361 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ironic-32f3-account-create-update-qwdgk"] Dec 10 12:34:51 crc kubenswrapper[4689]: I1210 12:34:51.688716 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-32f3-account-create-update-qwdgk" Dec 10 12:34:51 crc kubenswrapper[4689]: I1210 12:34:51.692378 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ironic-db-secret" Dec 10 12:34:51 crc kubenswrapper[4689]: I1210 12:34:51.711043 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-57ffb"] Dec 10 12:34:51 crc kubenswrapper[4689]: I1210 12:34:51.726033 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-sync-7xjms"] Dec 10 12:34:51 crc kubenswrapper[4689]: I1210 12:34:51.727093 4689 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-7xjms" Dec 10 12:34:51 crc kubenswrapper[4689]: I1210 12:34:51.733298 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-t4vth" Dec 10 12:34:51 crc kubenswrapper[4689]: I1210 12:34:51.738623 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Dec 10 12:34:51 crc kubenswrapper[4689]: I1210 12:34:51.764920 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eb62506f-d5a5-44b0-8da3-125128211e10-combined-ca-bundle\") pod \"cinder-db-sync-57ffb\" (UID: \"eb62506f-d5a5-44b0-8da3-125128211e10\") " pod="openstack/cinder-db-sync-57ffb" Dec 10 12:34:51 crc kubenswrapper[4689]: I1210 12:34:51.765020 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lgtxt\" (UniqueName: \"kubernetes.io/projected/9880fdc4-4a6b-4353-9b05-fefd96248c09-kube-api-access-lgtxt\") pod \"ironic-db-create-c5fbw\" (UID: \"9880fdc4-4a6b-4353-9b05-fefd96248c09\") " pod="openstack/ironic-db-create-c5fbw" Dec 10 12:34:51 crc kubenswrapper[4689]: I1210 12:34:51.765069 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2m4lp\" (UniqueName: \"kubernetes.io/projected/0a323df3-5a3b-41cb-afc0-cbd2e4933ec0-kube-api-access-2m4lp\") pod \"ironic-32f3-account-create-update-qwdgk\" (UID: \"0a323df3-5a3b-41cb-afc0-cbd2e4933ec0\") " pod="openstack/ironic-32f3-account-create-update-qwdgk" Dec 10 12:34:51 crc kubenswrapper[4689]: I1210 12:34:51.765090 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9880fdc4-4a6b-4353-9b05-fefd96248c09-operator-scripts\") pod \"ironic-db-create-c5fbw\" (UID: \"9880fdc4-4a6b-4353-9b05-fefd96248c09\") " pod="openstack/ironic-db-create-c5fbw" Dec 10 12:34:51 crc kubenswrapper[4689]: I1210 12:34:51.765121 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eb62506f-d5a5-44b0-8da3-125128211e10-config-data\") pod \"cinder-db-sync-57ffb\" (UID: \"eb62506f-d5a5-44b0-8da3-125128211e10\") " pod="openstack/cinder-db-sync-57ffb" Dec 10 12:34:51 crc kubenswrapper[4689]: I1210 12:34:51.765137 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/eb62506f-d5a5-44b0-8da3-125128211e10-etc-machine-id\") pod \"cinder-db-sync-57ffb\" (UID: \"eb62506f-d5a5-44b0-8da3-125128211e10\") " pod="openstack/cinder-db-sync-57ffb" Dec 10 12:34:51 crc kubenswrapper[4689]: I1210 12:34:51.765159 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/eb62506f-d5a5-44b0-8da3-125128211e10-scripts\") pod \"cinder-db-sync-57ffb\" (UID: \"eb62506f-d5a5-44b0-8da3-125128211e10\") " pod="openstack/cinder-db-sync-57ffb" Dec 10 12:34:51 crc kubenswrapper[4689]: I1210 12:34:51.765188 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lt4h9\" (UniqueName: \"kubernetes.io/projected/eb62506f-d5a5-44b0-8da3-125128211e10-kube-api-access-lt4h9\") pod \"cinder-db-sync-57ffb\" (UID: \"eb62506f-d5a5-44b0-8da3-125128211e10\") " 
pod="openstack/cinder-db-sync-57ffb" Dec 10 12:34:51 crc kubenswrapper[4689]: I1210 12:34:51.765217 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/eb62506f-d5a5-44b0-8da3-125128211e10-db-sync-config-data\") pod \"cinder-db-sync-57ffb\" (UID: \"eb62506f-d5a5-44b0-8da3-125128211e10\") " pod="openstack/cinder-db-sync-57ffb" Dec 10 12:34:51 crc kubenswrapper[4689]: I1210 12:34:51.765242 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0a323df3-5a3b-41cb-afc0-cbd2e4933ec0-operator-scripts\") pod \"ironic-32f3-account-create-update-qwdgk\" (UID: \"0a323df3-5a3b-41cb-afc0-cbd2e4933ec0\") " pod="openstack/ironic-32f3-account-create-update-qwdgk" Dec 10 12:34:51 crc kubenswrapper[4689]: I1210 12:34:51.767322 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-7xjms"] Dec 10 12:34:51 crc kubenswrapper[4689]: I1210 12:34:51.776258 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9880fdc4-4a6b-4353-9b05-fefd96248c09-operator-scripts\") pod \"ironic-db-create-c5fbw\" (UID: \"9880fdc4-4a6b-4353-9b05-fefd96248c09\") " pod="openstack/ironic-db-create-c5fbw" Dec 10 12:34:51 crc kubenswrapper[4689]: I1210 12:34:51.782042 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-32f3-account-create-update-qwdgk"] Dec 10 12:34:51 crc kubenswrapper[4689]: I1210 12:34:51.798029 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-sync-kb47t"] Dec 10 12:34:51 crc kubenswrapper[4689]: I1210 12:34:51.799107 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-kb47t" Dec 10 12:34:51 crc kubenswrapper[4689]: I1210 12:34:51.807231 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Dec 10 12:34:51 crc kubenswrapper[4689]: I1210 12:34:51.807400 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-kb47t"] Dec 10 12:34:51 crc kubenswrapper[4689]: I1210 12:34:51.808141 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Dec 10 12:34:51 crc kubenswrapper[4689]: I1210 12:34:51.813599 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lgtxt\" (UniqueName: \"kubernetes.io/projected/9880fdc4-4a6b-4353-9b05-fefd96248c09-kube-api-access-lgtxt\") pod \"ironic-db-create-c5fbw\" (UID: \"9880fdc4-4a6b-4353-9b05-fefd96248c09\") " pod="openstack/ironic-db-create-c5fbw" Dec 10 12:34:51 crc kubenswrapper[4689]: I1210 12:34:51.819314 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-sync-dkrw7"] Dec 10 12:34:51 crc kubenswrapper[4689]: I1210 12:34:51.821555 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-dkrw7" Dec 10 12:34:51 crc kubenswrapper[4689]: I1210 12:34:51.827734 4689 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-w7tlj" Dec 10 12:34:51 crc kubenswrapper[4689]: I1210 12:34:51.851794 4689 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c5cc7c5ff-7z4m6"] Dec 10 12:34:51 crc kubenswrapper[4689]: I1210 12:34:51.857688 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-hlp4d" Dec 10 12:34:51 crc kubenswrapper[4689]: I1210 12:34:51.858329 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Dec 10 12:34:51 crc kubenswrapper[4689]: I1210 12:34:51.858545 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Dec 10 12:34:51 crc kubenswrapper[4689]: I1210 12:34:51.858932 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-ck5kx" Dec 10 12:34:51 crc kubenswrapper[4689]: I1210 12:34:51.868365 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/ae5c2d84-08cd-462f-b8d5-ba416353f365-db-sync-config-data\") pod \"barbican-db-sync-7xjms\" (UID: \"ae5c2d84-08cd-462f-b8d5-ba416353f365\") " pod="openstack/barbican-db-sync-7xjms" Dec 10 12:34:51 crc kubenswrapper[4689]: I1210 12:34:51.868414 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0a323df3-5a3b-41cb-afc0-cbd2e4933ec0-operator-scripts\") pod \"ironic-32f3-account-create-update-qwdgk\" (UID: \"0a323df3-5a3b-41cb-afc0-cbd2e4933ec0\") " pod="openstack/ironic-32f3-account-create-update-qwdgk" Dec 10 12:34:51 crc kubenswrapper[4689]: I1210 12:34:51.868432 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qk2g9\" (UniqueName: \"kubernetes.io/projected/e10d322a-3fb7-451d-9a38-f2659e3d32e5-kube-api-access-qk2g9\") pod \"placement-db-sync-dkrw7\" (UID: \"e10d322a-3fb7-451d-9a38-f2659e3d32e5\") " pod="openstack/placement-db-sync-dkrw7" Dec 10 12:34:51 crc kubenswrapper[4689]: I1210 12:34:51.868457 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e10d322a-3fb7-451d-9a38-f2659e3d32e5-config-data\") pod \"placement-db-sync-dkrw7\" (UID: \"e10d322a-3fb7-451d-9a38-f2659e3d32e5\") " pod="openstack/placement-db-sync-dkrw7" Dec 10 12:34:51 crc kubenswrapper[4689]: I1210 12:34:51.868496 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eb62506f-d5a5-44b0-8da3-125128211e10-combined-ca-bundle\") pod \"cinder-db-sync-57ffb\" (UID: \"eb62506f-d5a5-44b0-8da3-125128211e10\") " pod="openstack/cinder-db-sync-57ffb" Dec 10 12:34:51 crc kubenswrapper[4689]: I1210 12:34:51.868524 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae5c2d84-08cd-462f-b8d5-ba416353f365-combined-ca-bundle\") pod \"barbican-db-sync-7xjms\" (UID: \"ae5c2d84-08cd-462f-b8d5-ba416353f365\") " pod="openstack/barbican-db-sync-7xjms" Dec 10 12:34:51 crc kubenswrapper[4689]: I1210 12:34:51.868553 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2m4lp\" (UniqueName: 
\"kubernetes.io/projected/0a323df3-5a3b-41cb-afc0-cbd2e4933ec0-kube-api-access-2m4lp\") pod \"ironic-32f3-account-create-update-qwdgk\" (UID: \"0a323df3-5a3b-41cb-afc0-cbd2e4933ec0\") " pod="openstack/ironic-32f3-account-create-update-qwdgk" Dec 10 12:34:51 crc kubenswrapper[4689]: I1210 12:34:51.868576 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hrqxh\" (UniqueName: \"kubernetes.io/projected/79d420d1-6ba7-4cf2-9e13-b046a65d378c-kube-api-access-hrqxh\") pod \"neutron-db-sync-kb47t\" (UID: \"79d420d1-6ba7-4cf2-9e13-b046a65d378c\") " pod="openstack/neutron-db-sync-kb47t" Dec 10 12:34:51 crc kubenswrapper[4689]: I1210 12:34:51.868600 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/79d420d1-6ba7-4cf2-9e13-b046a65d378c-config\") pod \"neutron-db-sync-kb47t\" (UID: \"79d420d1-6ba7-4cf2-9e13-b046a65d378c\") " pod="openstack/neutron-db-sync-kb47t" Dec 10 12:34:51 crc kubenswrapper[4689]: I1210 12:34:51.868616 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e10d322a-3fb7-451d-9a38-f2659e3d32e5-scripts\") pod \"placement-db-sync-dkrw7\" (UID: \"e10d322a-3fb7-451d-9a38-f2659e3d32e5\") " pod="openstack/placement-db-sync-dkrw7" Dec 10 12:34:51 crc kubenswrapper[4689]: I1210 12:34:51.868636 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eb62506f-d5a5-44b0-8da3-125128211e10-config-data\") pod \"cinder-db-sync-57ffb\" (UID: \"eb62506f-d5a5-44b0-8da3-125128211e10\") " pod="openstack/cinder-db-sync-57ffb" Dec 10 12:34:51 crc kubenswrapper[4689]: I1210 12:34:51.868653 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/eb62506f-d5a5-44b0-8da3-125128211e10-etc-machine-id\") pod \"cinder-db-sync-57ffb\" (UID: \"eb62506f-d5a5-44b0-8da3-125128211e10\") " pod="openstack/cinder-db-sync-57ffb" Dec 10 12:34:51 crc kubenswrapper[4689]: I1210 12:34:51.868677 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/eb62506f-d5a5-44b0-8da3-125128211e10-scripts\") pod \"cinder-db-sync-57ffb\" (UID: \"eb62506f-d5a5-44b0-8da3-125128211e10\") " pod="openstack/cinder-db-sync-57ffb" Dec 10 12:34:51 crc kubenswrapper[4689]: I1210 12:34:51.868695 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e10d322a-3fb7-451d-9a38-f2659e3d32e5-combined-ca-bundle\") pod \"placement-db-sync-dkrw7\" (UID: \"e10d322a-3fb7-451d-9a38-f2659e3d32e5\") " pod="openstack/placement-db-sync-dkrw7" Dec 10 12:34:51 crc kubenswrapper[4689]: I1210 12:34:51.868722 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lt4h9\" (UniqueName: \"kubernetes.io/projected/eb62506f-d5a5-44b0-8da3-125128211e10-kube-api-access-lt4h9\") pod \"cinder-db-sync-57ffb\" (UID: \"eb62506f-d5a5-44b0-8da3-125128211e10\") " pod="openstack/cinder-db-sync-57ffb" Dec 10 12:34:51 crc kubenswrapper[4689]: I1210 12:34:51.868747 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6t2p7\" (UniqueName: 
\"kubernetes.io/projected/ae5c2d84-08cd-462f-b8d5-ba416353f365-kube-api-access-6t2p7\") pod \"barbican-db-sync-7xjms\" (UID: \"ae5c2d84-08cd-462f-b8d5-ba416353f365\") " pod="openstack/barbican-db-sync-7xjms" Dec 10 12:34:51 crc kubenswrapper[4689]: I1210 12:34:51.868766 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/79d420d1-6ba7-4cf2-9e13-b046a65d378c-combined-ca-bundle\") pod \"neutron-db-sync-kb47t\" (UID: \"79d420d1-6ba7-4cf2-9e13-b046a65d378c\") " pod="openstack/neutron-db-sync-kb47t" Dec 10 12:34:51 crc kubenswrapper[4689]: I1210 12:34:51.868785 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/eb62506f-d5a5-44b0-8da3-125128211e10-db-sync-config-data\") pod \"cinder-db-sync-57ffb\" (UID: \"eb62506f-d5a5-44b0-8da3-125128211e10\") " pod="openstack/cinder-db-sync-57ffb" Dec 10 12:34:51 crc kubenswrapper[4689]: I1210 12:34:51.868803 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e10d322a-3fb7-451d-9a38-f2659e3d32e5-logs\") pod \"placement-db-sync-dkrw7\" (UID: \"e10d322a-3fb7-451d-9a38-f2659e3d32e5\") " pod="openstack/placement-db-sync-dkrw7" Dec 10 12:34:51 crc kubenswrapper[4689]: I1210 12:34:51.869584 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0a323df3-5a3b-41cb-afc0-cbd2e4933ec0-operator-scripts\") pod \"ironic-32f3-account-create-update-qwdgk\" (UID: \"0a323df3-5a3b-41cb-afc0-cbd2e4933ec0\") " pod="openstack/ironic-32f3-account-create-update-qwdgk" Dec 10 12:34:51 crc kubenswrapper[4689]: I1210 12:34:51.874859 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/eb62506f-d5a5-44b0-8da3-125128211e10-etc-machine-id\") pod \"cinder-db-sync-57ffb\" (UID: \"eb62506f-d5a5-44b0-8da3-125128211e10\") " pod="openstack/cinder-db-sync-57ffb" Dec 10 12:34:51 crc kubenswrapper[4689]: I1210 12:34:51.893707 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eb62506f-d5a5-44b0-8da3-125128211e10-config-data\") pod \"cinder-db-sync-57ffb\" (UID: \"eb62506f-d5a5-44b0-8da3-125128211e10\") " pod="openstack/cinder-db-sync-57ffb" Dec 10 12:34:51 crc kubenswrapper[4689]: I1210 12:34:51.902697 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eb62506f-d5a5-44b0-8da3-125128211e10-combined-ca-bundle\") pod \"cinder-db-sync-57ffb\" (UID: \"eb62506f-d5a5-44b0-8da3-125128211e10\") " pod="openstack/cinder-db-sync-57ffb" Dec 10 12:34:51 crc kubenswrapper[4689]: I1210 12:34:51.909449 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/eb62506f-d5a5-44b0-8da3-125128211e10-scripts\") pod \"cinder-db-sync-57ffb\" (UID: \"eb62506f-d5a5-44b0-8da3-125128211e10\") " pod="openstack/cinder-db-sync-57ffb" Dec 10 12:34:51 crc kubenswrapper[4689]: I1210 12:34:51.926732 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2m4lp\" (UniqueName: \"kubernetes.io/projected/0a323df3-5a3b-41cb-afc0-cbd2e4933ec0-kube-api-access-2m4lp\") pod \"ironic-32f3-account-create-update-qwdgk\" (UID: 
\"0a323df3-5a3b-41cb-afc0-cbd2e4933ec0\") " pod="openstack/ironic-32f3-account-create-update-qwdgk" Dec 10 12:34:51 crc kubenswrapper[4689]: I1210 12:34:51.928694 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/eb62506f-d5a5-44b0-8da3-125128211e10-db-sync-config-data\") pod \"cinder-db-sync-57ffb\" (UID: \"eb62506f-d5a5-44b0-8da3-125128211e10\") " pod="openstack/cinder-db-sync-57ffb" Dec 10 12:34:51 crc kubenswrapper[4689]: I1210 12:34:51.931491 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Dec 10 12:34:51 crc kubenswrapper[4689]: I1210 12:34:51.933445 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 10 12:34:51 crc kubenswrapper[4689]: I1210 12:34:51.942427 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Dec 10 12:34:51 crc kubenswrapper[4689]: I1210 12:34:51.942617 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Dec 10 12:34:51 crc kubenswrapper[4689]: I1210 12:34:51.943200 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lt4h9\" (UniqueName: \"kubernetes.io/projected/eb62506f-d5a5-44b0-8da3-125128211e10-kube-api-access-lt4h9\") pod \"cinder-db-sync-57ffb\" (UID: \"eb62506f-d5a5-44b0-8da3-125128211e10\") " pod="openstack/cinder-db-sync-57ffb" Dec 10 12:34:51 crc kubenswrapper[4689]: I1210 12:34:51.964140 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-dkrw7"] Dec 10 12:34:51 crc kubenswrapper[4689]: I1210 12:34:51.970079 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6t2p7\" (UniqueName: \"kubernetes.io/projected/ae5c2d84-08cd-462f-b8d5-ba416353f365-kube-api-access-6t2p7\") pod \"barbican-db-sync-7xjms\" (UID: \"ae5c2d84-08cd-462f-b8d5-ba416353f365\") " pod="openstack/barbican-db-sync-7xjms" Dec 10 12:34:51 crc kubenswrapper[4689]: I1210 12:34:51.970121 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/79d420d1-6ba7-4cf2-9e13-b046a65d378c-combined-ca-bundle\") pod \"neutron-db-sync-kb47t\" (UID: \"79d420d1-6ba7-4cf2-9e13-b046a65d378c\") " pod="openstack/neutron-db-sync-kb47t" Dec 10 12:34:51 crc kubenswrapper[4689]: I1210 12:34:51.970143 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e10d322a-3fb7-451d-9a38-f2659e3d32e5-logs\") pod \"placement-db-sync-dkrw7\" (UID: \"e10d322a-3fb7-451d-9a38-f2659e3d32e5\") " pod="openstack/placement-db-sync-dkrw7" Dec 10 12:34:51 crc kubenswrapper[4689]: I1210 12:34:51.970165 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/ae5c2d84-08cd-462f-b8d5-ba416353f365-db-sync-config-data\") pod \"barbican-db-sync-7xjms\" (UID: \"ae5c2d84-08cd-462f-b8d5-ba416353f365\") " pod="openstack/barbican-db-sync-7xjms" Dec 10 12:34:51 crc kubenswrapper[4689]: I1210 12:34:51.970187 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qk2g9\" (UniqueName: \"kubernetes.io/projected/e10d322a-3fb7-451d-9a38-f2659e3d32e5-kube-api-access-qk2g9\") pod \"placement-db-sync-dkrw7\" (UID: \"e10d322a-3fb7-451d-9a38-f2659e3d32e5\") " 
pod="openstack/placement-db-sync-dkrw7" Dec 10 12:34:51 crc kubenswrapper[4689]: I1210 12:34:51.970213 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e10d322a-3fb7-451d-9a38-f2659e3d32e5-config-data\") pod \"placement-db-sync-dkrw7\" (UID: \"e10d322a-3fb7-451d-9a38-f2659e3d32e5\") " pod="openstack/placement-db-sync-dkrw7" Dec 10 12:34:51 crc kubenswrapper[4689]: I1210 12:34:51.970258 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae5c2d84-08cd-462f-b8d5-ba416353f365-combined-ca-bundle\") pod \"barbican-db-sync-7xjms\" (UID: \"ae5c2d84-08cd-462f-b8d5-ba416353f365\") " pod="openstack/barbican-db-sync-7xjms" Dec 10 12:34:51 crc kubenswrapper[4689]: I1210 12:34:51.970290 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hrqxh\" (UniqueName: \"kubernetes.io/projected/79d420d1-6ba7-4cf2-9e13-b046a65d378c-kube-api-access-hrqxh\") pod \"neutron-db-sync-kb47t\" (UID: \"79d420d1-6ba7-4cf2-9e13-b046a65d378c\") " pod="openstack/neutron-db-sync-kb47t" Dec 10 12:34:51 crc kubenswrapper[4689]: I1210 12:34:51.970311 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/79d420d1-6ba7-4cf2-9e13-b046a65d378c-config\") pod \"neutron-db-sync-kb47t\" (UID: \"79d420d1-6ba7-4cf2-9e13-b046a65d378c\") " pod="openstack/neutron-db-sync-kb47t" Dec 10 12:34:51 crc kubenswrapper[4689]: I1210 12:34:51.970325 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e10d322a-3fb7-451d-9a38-f2659e3d32e5-scripts\") pod \"placement-db-sync-dkrw7\" (UID: \"e10d322a-3fb7-451d-9a38-f2659e3d32e5\") " pod="openstack/placement-db-sync-dkrw7" Dec 10 12:34:51 crc kubenswrapper[4689]: I1210 12:34:51.970358 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e10d322a-3fb7-451d-9a38-f2659e3d32e5-combined-ca-bundle\") pod \"placement-db-sync-dkrw7\" (UID: \"e10d322a-3fb7-451d-9a38-f2659e3d32e5\") " pod="openstack/placement-db-sync-dkrw7" Dec 10 12:34:51 crc kubenswrapper[4689]: I1210 12:34:51.971362 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e10d322a-3fb7-451d-9a38-f2659e3d32e5-logs\") pod \"placement-db-sync-dkrw7\" (UID: \"e10d322a-3fb7-451d-9a38-f2659e3d32e5\") " pod="openstack/placement-db-sync-dkrw7" Dec 10 12:34:51 crc kubenswrapper[4689]: I1210 12:34:51.973881 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e10d322a-3fb7-451d-9a38-f2659e3d32e5-combined-ca-bundle\") pod \"placement-db-sync-dkrw7\" (UID: \"e10d322a-3fb7-451d-9a38-f2659e3d32e5\") " pod="openstack/placement-db-sync-dkrw7" Dec 10 12:34:51 crc kubenswrapper[4689]: I1210 12:34:51.976031 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-8b5c85b87-w7sjh"] Dec 10 12:34:51 crc kubenswrapper[4689]: I1210 12:34:51.976654 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e10d322a-3fb7-451d-9a38-f2659e3d32e5-config-data\") pod \"placement-db-sync-dkrw7\" (UID: \"e10d322a-3fb7-451d-9a38-f2659e3d32e5\") " pod="openstack/placement-db-sync-dkrw7" Dec 10 12:34:51 crc 
kubenswrapper[4689]: I1210 12:34:51.981179 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8b5c85b87-w7sjh" Dec 10 12:34:51 crc kubenswrapper[4689]: I1210 12:34:51.982097 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/ae5c2d84-08cd-462f-b8d5-ba416353f365-db-sync-config-data\") pod \"barbican-db-sync-7xjms\" (UID: \"ae5c2d84-08cd-462f-b8d5-ba416353f365\") " pod="openstack/barbican-db-sync-7xjms" Dec 10 12:34:51 crc kubenswrapper[4689]: I1210 12:34:51.983710 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae5c2d84-08cd-462f-b8d5-ba416353f365-combined-ca-bundle\") pod \"barbican-db-sync-7xjms\" (UID: \"ae5c2d84-08cd-462f-b8d5-ba416353f365\") " pod="openstack/barbican-db-sync-7xjms" Dec 10 12:34:51 crc kubenswrapper[4689]: I1210 12:34:51.984084 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/79d420d1-6ba7-4cf2-9e13-b046a65d378c-config\") pod \"neutron-db-sync-kb47t\" (UID: \"79d420d1-6ba7-4cf2-9e13-b046a65d378c\") " pod="openstack/neutron-db-sync-kb47t" Dec 10 12:34:51 crc kubenswrapper[4689]: I1210 12:34:51.996426 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 10 12:34:52 crc kubenswrapper[4689]: I1210 12:34:52.007884 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e10d322a-3fb7-451d-9a38-f2659e3d32e5-scripts\") pod \"placement-db-sync-dkrw7\" (UID: \"e10d322a-3fb7-451d-9a38-f2659e3d32e5\") " pod="openstack/placement-db-sync-dkrw7" Dec 10 12:34:52 crc kubenswrapper[4689]: I1210 12:34:52.008784 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hrqxh\" (UniqueName: \"kubernetes.io/projected/79d420d1-6ba7-4cf2-9e13-b046a65d378c-kube-api-access-hrqxh\") pod \"neutron-db-sync-kb47t\" (UID: \"79d420d1-6ba7-4cf2-9e13-b046a65d378c\") " pod="openstack/neutron-db-sync-kb47t" Dec 10 12:34:52 crc kubenswrapper[4689]: I1210 12:34:52.009763 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6t2p7\" (UniqueName: \"kubernetes.io/projected/ae5c2d84-08cd-462f-b8d5-ba416353f365-kube-api-access-6t2p7\") pod \"barbican-db-sync-7xjms\" (UID: \"ae5c2d84-08cd-462f-b8d5-ba416353f365\") " pod="openstack/barbican-db-sync-7xjms" Dec 10 12:34:52 crc kubenswrapper[4689]: I1210 12:34:52.009987 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/79d420d1-6ba7-4cf2-9e13-b046a65d378c-combined-ca-bundle\") pod \"neutron-db-sync-kb47t\" (UID: \"79d420d1-6ba7-4cf2-9e13-b046a65d378c\") " pod="openstack/neutron-db-sync-kb47t" Dec 10 12:34:52 crc kubenswrapper[4689]: I1210 12:34:52.011753 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8b5c85b87-w7sjh"] Dec 10 12:34:52 crc kubenswrapper[4689]: I1210 12:34:52.011903 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qk2g9\" (UniqueName: \"kubernetes.io/projected/e10d322a-3fb7-451d-9a38-f2659e3d32e5-kube-api-access-qk2g9\") pod \"placement-db-sync-dkrw7\" (UID: \"e10d322a-3fb7-451d-9a38-f2659e3d32e5\") " pod="openstack/placement-db-sync-dkrw7" Dec 10 12:34:52 crc kubenswrapper[4689]: I1210 12:34:52.014122 4689 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openstack/ironic-db-create-c5fbw" Dec 10 12:34:52 crc kubenswrapper[4689]: I1210 12:34:52.027535 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-57ffb" Dec 10 12:34:52 crc kubenswrapper[4689]: I1210 12:34:52.055844 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-32f3-account-create-update-qwdgk" Dec 10 12:34:52 crc kubenswrapper[4689]: I1210 12:34:52.072210 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1dfb9cf4-7f75-40c2-ade2-81a91be5ad12-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"1dfb9cf4-7f75-40c2-ade2-81a91be5ad12\") " pod="openstack/ceilometer-0" Dec 10 12:34:52 crc kubenswrapper[4689]: I1210 12:34:52.072246 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hvxgw\" (UniqueName: \"kubernetes.io/projected/1dfb9cf4-7f75-40c2-ade2-81a91be5ad12-kube-api-access-hvxgw\") pod \"ceilometer-0\" (UID: \"1dfb9cf4-7f75-40c2-ade2-81a91be5ad12\") " pod="openstack/ceilometer-0" Dec 10 12:34:52 crc kubenswrapper[4689]: I1210 12:34:52.072302 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/edfb6012-77c8-4a57-b217-19089c4a9d17-ovsdbserver-nb\") pod \"dnsmasq-dns-8b5c85b87-w7sjh\" (UID: \"edfb6012-77c8-4a57-b217-19089c4a9d17\") " pod="openstack/dnsmasq-dns-8b5c85b87-w7sjh" Dec 10 12:34:52 crc kubenswrapper[4689]: I1210 12:34:52.072325 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/1dfb9cf4-7f75-40c2-ade2-81a91be5ad12-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"1dfb9cf4-7f75-40c2-ade2-81a91be5ad12\") " pod="openstack/ceilometer-0" Dec 10 12:34:52 crc kubenswrapper[4689]: I1210 12:34:52.072340 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1dfb9cf4-7f75-40c2-ade2-81a91be5ad12-run-httpd\") pod \"ceilometer-0\" (UID: \"1dfb9cf4-7f75-40c2-ade2-81a91be5ad12\") " pod="openstack/ceilometer-0" Dec 10 12:34:52 crc kubenswrapper[4689]: I1210 12:34:52.072359 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1dfb9cf4-7f75-40c2-ade2-81a91be5ad12-config-data\") pod \"ceilometer-0\" (UID: \"1dfb9cf4-7f75-40c2-ade2-81a91be5ad12\") " pod="openstack/ceilometer-0" Dec 10 12:34:52 crc kubenswrapper[4689]: I1210 12:34:52.072392 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1dfb9cf4-7f75-40c2-ade2-81a91be5ad12-scripts\") pod \"ceilometer-0\" (UID: \"1dfb9cf4-7f75-40c2-ade2-81a91be5ad12\") " pod="openstack/ceilometer-0" Dec 10 12:34:52 crc kubenswrapper[4689]: I1210 12:34:52.072415 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/edfb6012-77c8-4a57-b217-19089c4a9d17-dns-swift-storage-0\") pod \"dnsmasq-dns-8b5c85b87-w7sjh\" (UID: \"edfb6012-77c8-4a57-b217-19089c4a9d17\") " pod="openstack/dnsmasq-dns-8b5c85b87-w7sjh" Dec 10 12:34:52 crc kubenswrapper[4689]: I1210 
12:34:52.072465 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1dfb9cf4-7f75-40c2-ade2-81a91be5ad12-log-httpd\") pod \"ceilometer-0\" (UID: \"1dfb9cf4-7f75-40c2-ade2-81a91be5ad12\") " pod="openstack/ceilometer-0" Dec 10 12:34:52 crc kubenswrapper[4689]: I1210 12:34:52.072494 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/edfb6012-77c8-4a57-b217-19089c4a9d17-config\") pod \"dnsmasq-dns-8b5c85b87-w7sjh\" (UID: \"edfb6012-77c8-4a57-b217-19089c4a9d17\") " pod="openstack/dnsmasq-dns-8b5c85b87-w7sjh" Dec 10 12:34:52 crc kubenswrapper[4689]: I1210 12:34:52.072565 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/edfb6012-77c8-4a57-b217-19089c4a9d17-dns-svc\") pod \"dnsmasq-dns-8b5c85b87-w7sjh\" (UID: \"edfb6012-77c8-4a57-b217-19089c4a9d17\") " pod="openstack/dnsmasq-dns-8b5c85b87-w7sjh" Dec 10 12:34:52 crc kubenswrapper[4689]: I1210 12:34:52.074475 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/edfb6012-77c8-4a57-b217-19089c4a9d17-ovsdbserver-sb\") pod \"dnsmasq-dns-8b5c85b87-w7sjh\" (UID: \"edfb6012-77c8-4a57-b217-19089c4a9d17\") " pod="openstack/dnsmasq-dns-8b5c85b87-w7sjh" Dec 10 12:34:52 crc kubenswrapper[4689]: I1210 12:34:52.077378 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-smfpg\" (UniqueName: \"kubernetes.io/projected/edfb6012-77c8-4a57-b217-19089c4a9d17-kube-api-access-smfpg\") pod \"dnsmasq-dns-8b5c85b87-w7sjh\" (UID: \"edfb6012-77c8-4a57-b217-19089c4a9d17\") " pod="openstack/dnsmasq-dns-8b5c85b87-w7sjh" Dec 10 12:34:52 crc kubenswrapper[4689]: I1210 12:34:52.109114 4689 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-7xjms" Dec 10 12:34:52 crc kubenswrapper[4689]: I1210 12:34:52.179120 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/edfb6012-77c8-4a57-b217-19089c4a9d17-ovsdbserver-sb\") pod \"dnsmasq-dns-8b5c85b87-w7sjh\" (UID: \"edfb6012-77c8-4a57-b217-19089c4a9d17\") " pod="openstack/dnsmasq-dns-8b5c85b87-w7sjh" Dec 10 12:34:52 crc kubenswrapper[4689]: I1210 12:34:52.179183 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-smfpg\" (UniqueName: \"kubernetes.io/projected/edfb6012-77c8-4a57-b217-19089c4a9d17-kube-api-access-smfpg\") pod \"dnsmasq-dns-8b5c85b87-w7sjh\" (UID: \"edfb6012-77c8-4a57-b217-19089c4a9d17\") " pod="openstack/dnsmasq-dns-8b5c85b87-w7sjh" Dec 10 12:34:52 crc kubenswrapper[4689]: I1210 12:34:52.179249 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1dfb9cf4-7f75-40c2-ade2-81a91be5ad12-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"1dfb9cf4-7f75-40c2-ade2-81a91be5ad12\") " pod="openstack/ceilometer-0" Dec 10 12:34:52 crc kubenswrapper[4689]: I1210 12:34:52.179271 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hvxgw\" (UniqueName: \"kubernetes.io/projected/1dfb9cf4-7f75-40c2-ade2-81a91be5ad12-kube-api-access-hvxgw\") pod \"ceilometer-0\" (UID: \"1dfb9cf4-7f75-40c2-ade2-81a91be5ad12\") " pod="openstack/ceilometer-0" Dec 10 12:34:52 crc kubenswrapper[4689]: I1210 12:34:52.179314 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/edfb6012-77c8-4a57-b217-19089c4a9d17-ovsdbserver-nb\") pod \"dnsmasq-dns-8b5c85b87-w7sjh\" (UID: \"edfb6012-77c8-4a57-b217-19089c4a9d17\") " pod="openstack/dnsmasq-dns-8b5c85b87-w7sjh" Dec 10 12:34:52 crc kubenswrapper[4689]: I1210 12:34:52.179348 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/1dfb9cf4-7f75-40c2-ade2-81a91be5ad12-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"1dfb9cf4-7f75-40c2-ade2-81a91be5ad12\") " pod="openstack/ceilometer-0" Dec 10 12:34:52 crc kubenswrapper[4689]: I1210 12:34:52.179368 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1dfb9cf4-7f75-40c2-ade2-81a91be5ad12-run-httpd\") pod \"ceilometer-0\" (UID: \"1dfb9cf4-7f75-40c2-ade2-81a91be5ad12\") " pod="openstack/ceilometer-0" Dec 10 12:34:52 crc kubenswrapper[4689]: I1210 12:34:52.179393 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1dfb9cf4-7f75-40c2-ade2-81a91be5ad12-config-data\") pod \"ceilometer-0\" (UID: \"1dfb9cf4-7f75-40c2-ade2-81a91be5ad12\") " pod="openstack/ceilometer-0" Dec 10 12:34:52 crc kubenswrapper[4689]: I1210 12:34:52.179437 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1dfb9cf4-7f75-40c2-ade2-81a91be5ad12-scripts\") pod \"ceilometer-0\" (UID: \"1dfb9cf4-7f75-40c2-ade2-81a91be5ad12\") " pod="openstack/ceilometer-0" Dec 10 12:34:52 crc kubenswrapper[4689]: I1210 12:34:52.179469 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: 
\"kubernetes.io/configmap/edfb6012-77c8-4a57-b217-19089c4a9d17-dns-swift-storage-0\") pod \"dnsmasq-dns-8b5c85b87-w7sjh\" (UID: \"edfb6012-77c8-4a57-b217-19089c4a9d17\") " pod="openstack/dnsmasq-dns-8b5c85b87-w7sjh" Dec 10 12:34:52 crc kubenswrapper[4689]: I1210 12:34:52.179497 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1dfb9cf4-7f75-40c2-ade2-81a91be5ad12-log-httpd\") pod \"ceilometer-0\" (UID: \"1dfb9cf4-7f75-40c2-ade2-81a91be5ad12\") " pod="openstack/ceilometer-0" Dec 10 12:34:52 crc kubenswrapper[4689]: I1210 12:34:52.179527 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/edfb6012-77c8-4a57-b217-19089c4a9d17-config\") pod \"dnsmasq-dns-8b5c85b87-w7sjh\" (UID: \"edfb6012-77c8-4a57-b217-19089c4a9d17\") " pod="openstack/dnsmasq-dns-8b5c85b87-w7sjh" Dec 10 12:34:52 crc kubenswrapper[4689]: I1210 12:34:52.179575 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/edfb6012-77c8-4a57-b217-19089c4a9d17-dns-svc\") pod \"dnsmasq-dns-8b5c85b87-w7sjh\" (UID: \"edfb6012-77c8-4a57-b217-19089c4a9d17\") " pod="openstack/dnsmasq-dns-8b5c85b87-w7sjh" Dec 10 12:34:52 crc kubenswrapper[4689]: I1210 12:34:52.180538 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/edfb6012-77c8-4a57-b217-19089c4a9d17-dns-svc\") pod \"dnsmasq-dns-8b5c85b87-w7sjh\" (UID: \"edfb6012-77c8-4a57-b217-19089c4a9d17\") " pod="openstack/dnsmasq-dns-8b5c85b87-w7sjh" Dec 10 12:34:52 crc kubenswrapper[4689]: I1210 12:34:52.181195 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/edfb6012-77c8-4a57-b217-19089c4a9d17-ovsdbserver-sb\") pod \"dnsmasq-dns-8b5c85b87-w7sjh\" (UID: \"edfb6012-77c8-4a57-b217-19089c4a9d17\") " pod="openstack/dnsmasq-dns-8b5c85b87-w7sjh" Dec 10 12:34:52 crc kubenswrapper[4689]: I1210 12:34:52.181825 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1dfb9cf4-7f75-40c2-ade2-81a91be5ad12-run-httpd\") pod \"ceilometer-0\" (UID: \"1dfb9cf4-7f75-40c2-ade2-81a91be5ad12\") " pod="openstack/ceilometer-0" Dec 10 12:34:52 crc kubenswrapper[4689]: I1210 12:34:52.183115 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/edfb6012-77c8-4a57-b217-19089c4a9d17-dns-swift-storage-0\") pod \"dnsmasq-dns-8b5c85b87-w7sjh\" (UID: \"edfb6012-77c8-4a57-b217-19089c4a9d17\") " pod="openstack/dnsmasq-dns-8b5c85b87-w7sjh" Dec 10 12:34:52 crc kubenswrapper[4689]: I1210 12:34:52.184403 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/edfb6012-77c8-4a57-b217-19089c4a9d17-ovsdbserver-nb\") pod \"dnsmasq-dns-8b5c85b87-w7sjh\" (UID: \"edfb6012-77c8-4a57-b217-19089c4a9d17\") " pod="openstack/dnsmasq-dns-8b5c85b87-w7sjh" Dec 10 12:34:52 crc kubenswrapper[4689]: I1210 12:34:52.184833 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1dfb9cf4-7f75-40c2-ade2-81a91be5ad12-log-httpd\") pod \"ceilometer-0\" (UID: \"1dfb9cf4-7f75-40c2-ade2-81a91be5ad12\") " pod="openstack/ceilometer-0" Dec 10 12:34:52 crc kubenswrapper[4689]: I1210 
12:34:52.184840 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-kb47t"
Dec 10 12:34:52 crc kubenswrapper[4689]: I1210 12:34:52.185012 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/edfb6012-77c8-4a57-b217-19089c4a9d17-config\") pod \"dnsmasq-dns-8b5c85b87-w7sjh\" (UID: \"edfb6012-77c8-4a57-b217-19089c4a9d17\") " pod="openstack/dnsmasq-dns-8b5c85b87-w7sjh"
Dec 10 12:34:52 crc kubenswrapper[4689]: I1210 12:34:52.186549 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1dfb9cf4-7f75-40c2-ade2-81a91be5ad12-scripts\") pod \"ceilometer-0\" (UID: \"1dfb9cf4-7f75-40c2-ade2-81a91be5ad12\") " pod="openstack/ceilometer-0"
Dec 10 12:34:52 crc kubenswrapper[4689]: I1210 12:34:52.186914 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1dfb9cf4-7f75-40c2-ade2-81a91be5ad12-config-data\") pod \"ceilometer-0\" (UID: \"1dfb9cf4-7f75-40c2-ade2-81a91be5ad12\") " pod="openstack/ceilometer-0"
Dec 10 12:34:52 crc kubenswrapper[4689]: I1210 12:34:52.200277 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1dfb9cf4-7f75-40c2-ade2-81a91be5ad12-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"1dfb9cf4-7f75-40c2-ade2-81a91be5ad12\") " pod="openstack/ceilometer-0"
Dec 10 12:34:52 crc kubenswrapper[4689]: I1210 12:34:52.202600 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/1dfb9cf4-7f75-40c2-ade2-81a91be5ad12-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"1dfb9cf4-7f75-40c2-ade2-81a91be5ad12\") " pod="openstack/ceilometer-0"
Dec 10 12:34:52 crc kubenswrapper[4689]: I1210 12:34:52.203589 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-smfpg\" (UniqueName: \"kubernetes.io/projected/edfb6012-77c8-4a57-b217-19089c4a9d17-kube-api-access-smfpg\") pod \"dnsmasq-dns-8b5c85b87-w7sjh\" (UID: \"edfb6012-77c8-4a57-b217-19089c4a9d17\") " pod="openstack/dnsmasq-dns-8b5c85b87-w7sjh"
Dec 10 12:34:52 crc kubenswrapper[4689]: I1210 12:34:52.210116 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hvxgw\" (UniqueName: \"kubernetes.io/projected/1dfb9cf4-7f75-40c2-ade2-81a91be5ad12-kube-api-access-hvxgw\") pod \"ceilometer-0\" (UID: \"1dfb9cf4-7f75-40c2-ade2-81a91be5ad12\") " pod="openstack/ceilometer-0"
Dec 10 12:34:52 crc kubenswrapper[4689]: I1210 12:34:52.279816 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-dkrw7"
Dec 10 12:34:52 crc kubenswrapper[4689]: I1210 12:34:52.301815 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Dec 10 12:34:52 crc kubenswrapper[4689]: I1210 12:34:52.335167 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8b5c85b87-w7sjh"
Dec 10 12:34:52 crc kubenswrapper[4689]: I1210 12:34:52.379429 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"]
Dec 10 12:34:52 crc kubenswrapper[4689]: I1210 12:34:52.381158 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Dec 10 12:34:52 crc kubenswrapper[4689]: I1210 12:34:52.383703 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts"
Dec 10 12:34:52 crc kubenswrapper[4689]: I1210 12:34:52.383965 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data"
Dec 10 12:34:52 crc kubenswrapper[4689]: I1210 12:34:52.384174 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc"
Dec 10 12:34:52 crc kubenswrapper[4689]: I1210 12:34:52.385408 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-drvxw"
Dec 10 12:34:52 crc kubenswrapper[4689]: I1210 12:34:52.402261 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"]
Dec 10 12:34:52 crc kubenswrapper[4689]: I1210 12:34:52.484071 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/55350420-790f-4f9e-8ba6-3872bd55ceb8-logs\") pod \"glance-default-external-api-0\" (UID: \"55350420-790f-4f9e-8ba6-3872bd55ceb8\") " pod="openstack/glance-default-external-api-0"
Dec 10 12:34:52 crc kubenswrapper[4689]: I1210 12:34:52.484145 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/55350420-790f-4f9e-8ba6-3872bd55ceb8-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"55350420-790f-4f9e-8ba6-3872bd55ceb8\") " pod="openstack/glance-default-external-api-0"
Dec 10 12:34:52 crc kubenswrapper[4689]: I1210 12:34:52.484183 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/55350420-790f-4f9e-8ba6-3872bd55ceb8-config-data\") pod \"glance-default-external-api-0\" (UID: \"55350420-790f-4f9e-8ba6-3872bd55ceb8\") " pod="openstack/glance-default-external-api-0"
Dec 10 12:34:52 crc kubenswrapper[4689]: I1210 12:34:52.484219 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-external-api-0\" (UID: \"55350420-790f-4f9e-8ba6-3872bd55ceb8\") " pod="openstack/glance-default-external-api-0"
Dec 10 12:34:52 crc kubenswrapper[4689]: I1210 12:34:52.484458 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7rxmh\" (UniqueName: \"kubernetes.io/projected/55350420-790f-4f9e-8ba6-3872bd55ceb8-kube-api-access-7rxmh\") pod \"glance-default-external-api-0\" (UID: \"55350420-790f-4f9e-8ba6-3872bd55ceb8\") " pod="openstack/glance-default-external-api-0"
Dec 10 12:34:52 crc kubenswrapper[4689]: I1210 12:34:52.484557 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/55350420-790f-4f9e-8ba6-3872bd55ceb8-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"55350420-790f-4f9e-8ba6-3872bd55ceb8\") " pod="openstack/glance-default-external-api-0"
Dec 10 12:34:52 crc kubenswrapper[4689]: I1210 12:34:52.484598 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/55350420-790f-4f9e-8ba6-3872bd55ceb8-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"55350420-790f-4f9e-8ba6-3872bd55ceb8\") " pod="openstack/glance-default-external-api-0"
Dec 10 12:34:52 crc kubenswrapper[4689]: I1210 12:34:52.484680 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/55350420-790f-4f9e-8ba6-3872bd55ceb8-scripts\") pod \"glance-default-external-api-0\" (UID: \"55350420-790f-4f9e-8ba6-3872bd55ceb8\") " pod="openstack/glance-default-external-api-0"
Dec 10 12:34:52 crc kubenswrapper[4689]: I1210 12:34:52.586045 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/55350420-790f-4f9e-8ba6-3872bd55ceb8-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"55350420-790f-4f9e-8ba6-3872bd55ceb8\") " pod="openstack/glance-default-external-api-0"
Dec 10 12:34:52 crc kubenswrapper[4689]: I1210 12:34:52.586098 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/55350420-790f-4f9e-8ba6-3872bd55ceb8-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"55350420-790f-4f9e-8ba6-3872bd55ceb8\") " pod="openstack/glance-default-external-api-0"
Dec 10 12:34:52 crc kubenswrapper[4689]: I1210 12:34:52.586161 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/55350420-790f-4f9e-8ba6-3872bd55ceb8-scripts\") pod \"glance-default-external-api-0\" (UID: \"55350420-790f-4f9e-8ba6-3872bd55ceb8\") " pod="openstack/glance-default-external-api-0"
Dec 10 12:34:52 crc kubenswrapper[4689]: I1210 12:34:52.586259 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/55350420-790f-4f9e-8ba6-3872bd55ceb8-logs\") pod \"glance-default-external-api-0\" (UID: \"55350420-790f-4f9e-8ba6-3872bd55ceb8\") " pod="openstack/glance-default-external-api-0"
Dec 10 12:34:52 crc kubenswrapper[4689]: I1210 12:34:52.586311 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/55350420-790f-4f9e-8ba6-3872bd55ceb8-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"55350420-790f-4f9e-8ba6-3872bd55ceb8\") " pod="openstack/glance-default-external-api-0"
Dec 10 12:34:52 crc kubenswrapper[4689]: I1210 12:34:52.586359 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/55350420-790f-4f9e-8ba6-3872bd55ceb8-config-data\") pod \"glance-default-external-api-0\" (UID: \"55350420-790f-4f9e-8ba6-3872bd55ceb8\") " pod="openstack/glance-default-external-api-0"
Dec 10 12:34:52 crc kubenswrapper[4689]: I1210 12:34:52.586417 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-external-api-0\" (UID: \"55350420-790f-4f9e-8ba6-3872bd55ceb8\") " pod="openstack/glance-default-external-api-0"
Dec 10 12:34:52 crc kubenswrapper[4689]: I1210 12:34:52.586488 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7rxmh\" (UniqueName: \"kubernetes.io/projected/55350420-790f-4f9e-8ba6-3872bd55ceb8-kube-api-access-7rxmh\") pod \"glance-default-external-api-0\" (UID: \"55350420-790f-4f9e-8ba6-3872bd55ceb8\") " pod="openstack/glance-default-external-api-0"
Dec 10 12:34:52 crc kubenswrapper[4689]: I1210 12:34:52.587598 4689 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-external-api-0\" (UID: \"55350420-790f-4f9e-8ba6-3872bd55ceb8\") device mount path \"/mnt/openstack/pv08\"" pod="openstack/glance-default-external-api-0"
Dec 10 12:34:52 crc kubenswrapper[4689]: I1210 12:34:52.587693 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/55350420-790f-4f9e-8ba6-3872bd55ceb8-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"55350420-790f-4f9e-8ba6-3872bd55ceb8\") " pod="openstack/glance-default-external-api-0"
Dec 10 12:34:52 crc kubenswrapper[4689]: I1210 12:34:52.587895 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/55350420-790f-4f9e-8ba6-3872bd55ceb8-logs\") pod \"glance-default-external-api-0\" (UID: \"55350420-790f-4f9e-8ba6-3872bd55ceb8\") " pod="openstack/glance-default-external-api-0"
Dec 10 12:34:52 crc kubenswrapper[4689]: I1210 12:34:52.591578 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/55350420-790f-4f9e-8ba6-3872bd55ceb8-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"55350420-790f-4f9e-8ba6-3872bd55ceb8\") " pod="openstack/glance-default-external-api-0"
Dec 10 12:34:52 crc kubenswrapper[4689]: I1210 12:34:52.592091 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/55350420-790f-4f9e-8ba6-3872bd55ceb8-config-data\") pod \"glance-default-external-api-0\" (UID: \"55350420-790f-4f9e-8ba6-3872bd55ceb8\") " pod="openstack/glance-default-external-api-0"
Dec 10 12:34:52 crc kubenswrapper[4689]: I1210 12:34:52.592405 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/55350420-790f-4f9e-8ba6-3872bd55ceb8-scripts\") pod \"glance-default-external-api-0\" (UID: \"55350420-790f-4f9e-8ba6-3872bd55ceb8\") " pod="openstack/glance-default-external-api-0"
Dec 10 12:34:52 crc kubenswrapper[4689]: I1210 12:34:52.592599 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/55350420-790f-4f9e-8ba6-3872bd55ceb8-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"55350420-790f-4f9e-8ba6-3872bd55ceb8\") " pod="openstack/glance-default-external-api-0"
Dec 10 12:34:52 crc kubenswrapper[4689]: I1210 12:34:52.605268 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"]
Dec 10 12:34:52 crc kubenswrapper[4689]: I1210 12:34:52.609109 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Dec 10 12:34:52 crc kubenswrapper[4689]: I1210 12:34:52.614655 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc"
Dec 10 12:34:52 crc kubenswrapper[4689]: I1210 12:34:52.615085 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data"
Dec 10 12:34:52 crc kubenswrapper[4689]: I1210 12:34:52.619296 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"]
Dec 10 12:34:52 crc kubenswrapper[4689]: I1210 12:34:52.624351 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7rxmh\" (UniqueName: \"kubernetes.io/projected/55350420-790f-4f9e-8ba6-3872bd55ceb8-kube-api-access-7rxmh\") pod \"glance-default-external-api-0\" (UID: \"55350420-790f-4f9e-8ba6-3872bd55ceb8\") " pod="openstack/glance-default-external-api-0"
Dec 10 12:34:52 crc kubenswrapper[4689]: I1210 12:34:52.643991 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-external-api-0\" (UID: \"55350420-790f-4f9e-8ba6-3872bd55ceb8\") " pod="openstack/glance-default-external-api-0"
Dec 10 12:34:52 crc kubenswrapper[4689]: I1210 12:34:52.688158 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8f172078-587d-4ee6-a804-0b5168bdb3dc-scripts\") pod \"glance-default-internal-api-0\" (UID: \"8f172078-587d-4ee6-a804-0b5168bdb3dc\") " pod="openstack/glance-default-internal-api-0"
Dec 10 12:34:52 crc kubenswrapper[4689]: I1210 12:34:52.688238 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cchkq\" (UniqueName: \"kubernetes.io/projected/8f172078-587d-4ee6-a804-0b5168bdb3dc-kube-api-access-cchkq\") pod \"glance-default-internal-api-0\" (UID: \"8f172078-587d-4ee6-a804-0b5168bdb3dc\") " pod="openstack/glance-default-internal-api-0"
Dec 10 12:34:52 crc kubenswrapper[4689]: I1210 12:34:52.688477 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8f172078-587d-4ee6-a804-0b5168bdb3dc-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"8f172078-587d-4ee6-a804-0b5168bdb3dc\") " pod="openstack/glance-default-internal-api-0"
Dec 10 12:34:52 crc kubenswrapper[4689]: I1210 12:34:52.688577 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8f172078-587d-4ee6-a804-0b5168bdb3dc-logs\") pod \"glance-default-internal-api-0\" (UID: \"8f172078-587d-4ee6-a804-0b5168bdb3dc\") " pod="openstack/glance-default-internal-api-0"
Dec 10 12:34:52 crc kubenswrapper[4689]: I1210 12:34:52.688622 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/8f172078-587d-4ee6-a804-0b5168bdb3dc-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"8f172078-587d-4ee6-a804-0b5168bdb3dc\") " pod="openstack/glance-default-internal-api-0"
Dec 10 12:34:52 crc kubenswrapper[4689]: I1210 12:34:52.688682 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f172078-587d-4ee6-a804-0b5168bdb3dc-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"8f172078-587d-4ee6-a804-0b5168bdb3dc\") " pod="openstack/glance-default-internal-api-0"
Dec 10 12:34:52 crc kubenswrapper[4689]: I1210 12:34:52.688814 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-internal-api-0\" (UID: \"8f172078-587d-4ee6-a804-0b5168bdb3dc\") " pod="openstack/glance-default-internal-api-0"
Dec 10 12:34:52 crc kubenswrapper[4689]: I1210 12:34:52.688861 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8f172078-587d-4ee6-a804-0b5168bdb3dc-config-data\") pod \"glance-default-internal-api-0\" (UID: \"8f172078-587d-4ee6-a804-0b5168bdb3dc\") " pod="openstack/glance-default-internal-api-0"
Dec 10 12:34:52 crc kubenswrapper[4689]: I1210 12:34:52.698412 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Dec 10 12:34:52 crc kubenswrapper[4689]: I1210 12:34:52.790090 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8f172078-587d-4ee6-a804-0b5168bdb3dc-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"8f172078-587d-4ee6-a804-0b5168bdb3dc\") " pod="openstack/glance-default-internal-api-0"
Dec 10 12:34:52 crc kubenswrapper[4689]: I1210 12:34:52.790461 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8f172078-587d-4ee6-a804-0b5168bdb3dc-logs\") pod \"glance-default-internal-api-0\" (UID: \"8f172078-587d-4ee6-a804-0b5168bdb3dc\") " pod="openstack/glance-default-internal-api-0"
Dec 10 12:34:52 crc kubenswrapper[4689]: I1210 12:34:52.790503 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/8f172078-587d-4ee6-a804-0b5168bdb3dc-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"8f172078-587d-4ee6-a804-0b5168bdb3dc\") " pod="openstack/glance-default-internal-api-0"
Dec 10 12:34:52 crc kubenswrapper[4689]: I1210 12:34:52.790540 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f172078-587d-4ee6-a804-0b5168bdb3dc-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"8f172078-587d-4ee6-a804-0b5168bdb3dc\") " pod="openstack/glance-default-internal-api-0"
Dec 10 12:34:52 crc kubenswrapper[4689]: I1210 12:34:52.790595 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-internal-api-0\" (UID: \"8f172078-587d-4ee6-a804-0b5168bdb3dc\") " pod="openstack/glance-default-internal-api-0"
Dec 10 12:34:52 crc kubenswrapper[4689]: I1210 12:34:52.790627 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8f172078-587d-4ee6-a804-0b5168bdb3dc-config-data\") pod \"glance-default-internal-api-0\" (UID: \"8f172078-587d-4ee6-a804-0b5168bdb3dc\") " pod="openstack/glance-default-internal-api-0"
Dec 10 12:34:52 crc kubenswrapper[4689]: I1210 12:34:52.790677 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8f172078-587d-4ee6-a804-0b5168bdb3dc-scripts\") pod \"glance-default-internal-api-0\" (UID: \"8f172078-587d-4ee6-a804-0b5168bdb3dc\") " pod="openstack/glance-default-internal-api-0"
Dec 10 12:34:52 crc kubenswrapper[4689]: I1210 12:34:52.790719 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cchkq\" (UniqueName: \"kubernetes.io/projected/8f172078-587d-4ee6-a804-0b5168bdb3dc-kube-api-access-cchkq\") pod \"glance-default-internal-api-0\" (UID: \"8f172078-587d-4ee6-a804-0b5168bdb3dc\") " pod="openstack/glance-default-internal-api-0"
Dec 10 12:34:52 crc kubenswrapper[4689]: I1210 12:34:52.791056 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8f172078-587d-4ee6-a804-0b5168bdb3dc-logs\") pod \"glance-default-internal-api-0\" (UID: \"8f172078-587d-4ee6-a804-0b5168bdb3dc\") " pod="openstack/glance-default-internal-api-0"
Dec 10 12:34:52 crc kubenswrapper[4689]: I1210 12:34:52.791155 4689 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-internal-api-0\" (UID: \"8f172078-587d-4ee6-a804-0b5168bdb3dc\") device mount path \"/mnt/openstack/pv05\"" pod="openstack/glance-default-internal-api-0"
Dec 10 12:34:52 crc kubenswrapper[4689]: I1210 12:34:52.791326 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/8f172078-587d-4ee6-a804-0b5168bdb3dc-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"8f172078-587d-4ee6-a804-0b5168bdb3dc\") " pod="openstack/glance-default-internal-api-0"
Dec 10 12:34:52 crc kubenswrapper[4689]: I1210 12:34:52.796783 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f172078-587d-4ee6-a804-0b5168bdb3dc-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"8f172078-587d-4ee6-a804-0b5168bdb3dc\") " pod="openstack/glance-default-internal-api-0"
Dec 10 12:34:52 crc kubenswrapper[4689]: I1210 12:34:52.797114 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8f172078-587d-4ee6-a804-0b5168bdb3dc-scripts\") pod \"glance-default-internal-api-0\" (UID: \"8f172078-587d-4ee6-a804-0b5168bdb3dc\") " pod="openstack/glance-default-internal-api-0"
Dec 10 12:34:52 crc kubenswrapper[4689]: I1210 12:34:52.798540 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8f172078-587d-4ee6-a804-0b5168bdb3dc-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"8f172078-587d-4ee6-a804-0b5168bdb3dc\") " pod="openstack/glance-default-internal-api-0"
Dec 10 12:34:52 crc kubenswrapper[4689]: I1210 12:34:52.799932 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8f172078-587d-4ee6-a804-0b5168bdb3dc-config-data\") pod \"glance-default-internal-api-0\" (UID: \"8f172078-587d-4ee6-a804-0b5168bdb3dc\") " pod="openstack/glance-default-internal-api-0"
Dec 10 12:34:52 crc kubenswrapper[4689]: I1210 12:34:52.815036 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cchkq\" (UniqueName: \"kubernetes.io/projected/8f172078-587d-4ee6-a804-0b5168bdb3dc-kube-api-access-cchkq\") pod \"glance-default-internal-api-0\" (UID: \"8f172078-587d-4ee6-a804-0b5168bdb3dc\") " pod="openstack/glance-default-internal-api-0"
Dec 10 12:34:52 crc kubenswrapper[4689]: I1210 12:34:52.828565 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-internal-api-0\" (UID: \"8f172078-587d-4ee6-a804-0b5168bdb3dc\") " pod="openstack/glance-default-internal-api-0"
Dec 10 12:34:53 crc kubenswrapper[4689]: I1210 12:34:53.011850 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Dec 10 12:34:53 crc kubenswrapper[4689]: I1210 12:34:53.015553 4689 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7ff5475cc9-jtfkf" podUID="0be1a0e4-d74f-4537-b184-eee8c421275a" containerName="dnsmasq-dns" containerID="cri-o://4d0f79cba7a73e92e89e8639f599c41ce98f73a5cbd62f148dba2796155b00e2" gracePeriod=10
Dec 10 12:34:53 crc kubenswrapper[4689]: I1210 12:34:53.779084 4689 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"]
Dec 10 12:34:53 crc kubenswrapper[4689]: I1210 12:34:53.854639 4689 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Dec 10 12:34:53 crc kubenswrapper[4689]: I1210 12:34:53.876347 4689 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"]
Dec 10 12:34:55 crc kubenswrapper[4689]: I1210 12:34:55.067236 4689 generic.go:334] "Generic (PLEG): container finished" podID="0be1a0e4-d74f-4537-b184-eee8c421275a" containerID="4d0f79cba7a73e92e89e8639f599c41ce98f73a5cbd62f148dba2796155b00e2" exitCode=0
Dec 10 12:34:55 crc kubenswrapper[4689]: I1210 12:34:55.067775 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7ff5475cc9-jtfkf" event={"ID":"0be1a0e4-d74f-4537-b184-eee8c421275a","Type":"ContainerDied","Data":"4d0f79cba7a73e92e89e8639f599c41ce98f73a5cbd62f148dba2796155b00e2"}
Dec 10 12:34:55 crc kubenswrapper[4689]: I1210 12:34:55.275737 4689 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c5cc7c5ff-7z4m6"]
Dec 10 12:34:55 crc kubenswrapper[4689]: I1210 12:34:55.509273 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-57ffb"]
Dec 10 12:34:56 crc kubenswrapper[4689]: I1210 12:34:56.078355 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c5cc7c5ff-7z4m6" event={"ID":"cdd4ece3-0d47-424e-8408-51f9fce3af5d","Type":"ContainerStarted","Data":"cd9c1dedbd4b8c34ce9d9997cbbe1607db54d627bf2b2cd3c23f5dc06e0c37ad"}
Dec 10 12:34:56 crc kubenswrapper[4689]: I1210 12:34:56.079897 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-57ffb" event={"ID":"eb62506f-d5a5-44b0-8da3-125128211e10","Type":"ContainerStarted","Data":"8b096f3774721e38ee6a5876140a433187484ff3a18bc6ad775acab3fad684ef"}
Dec 10 12:34:56 crc kubenswrapper[4689]: I1210 12:34:56.152951 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-db-create-c5fbw"]
Dec 10 12:34:56 crc kubenswrapper[4689]: I1210 12:34:56.161876 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-7xjms"]
Dec 10 12:34:56 crc kubenswrapper[4689]: W1210 12:34:56.162269 4689 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod048101b4_0993_42cb_bac6_d9384a242856.slice/crio-e84ce448d3ee79acd9ac780276bbe9474ce9636e6d2e35f71b3bdac5930ec81a WatchSource:0}: Error finding container e84ce448d3ee79acd9ac780276bbe9474ce9636e6d2e35f71b3bdac5930ec81a: Status 404 returned error can't find the container with id e84ce448d3ee79acd9ac780276bbe9474ce9636e6d2e35f71b3bdac5930ec81a
Dec 10 12:34:56 crc kubenswrapper[4689]: W1210 12:34:56.163583 4689 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9880fdc4_4a6b_4353_9b05_fefd96248c09.slice/crio-fda3aa2dbe782123442b56cc48bcd43497b3b6ab5895ff6535b9330181fa0cb0 WatchSource:0}: Error finding container fda3aa2dbe782123442b56cc48bcd43497b3b6ab5895ff6535b9330181fa0cb0: Status 404 returned error can't find the container with id fda3aa2dbe782123442b56cc48bcd43497b3b6ab5895ff6535b9330181fa0cb0
Dec 10 12:34:56 crc kubenswrapper[4689]: I1210 12:34:56.171899 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8b5c85b87-w7sjh"]
Dec 10 12:34:56 crc kubenswrapper[4689]: I1210 12:34:56.185038 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-w7tlj"]
Dec 10 12:34:56 crc kubenswrapper[4689]: I1210 12:34:56.194361 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-kb47t"]
Dec 10 12:34:56 crc kubenswrapper[4689]: I1210 12:34:56.205089 4689 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"]
Dec 10 12:34:56 crc kubenswrapper[4689]: I1210 12:34:56.215507 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-32f3-account-create-update-qwdgk"]
Dec 10 12:34:56 crc kubenswrapper[4689]: I1210 12:34:56.228122 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-dkrw7"]
Dec 10 12:34:56 crc kubenswrapper[4689]: I1210 12:34:56.233598 4689 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Dec 10 12:34:56 crc kubenswrapper[4689]: W1210 12:34:56.251011 4689 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode10d322a_3fb7_451d_9a38_f2659e3d32e5.slice/crio-3f96f731e226d703cb10e43f761eec0d9a3fc4886629b87edd0e7caccc24096c WatchSource:0}: Error finding container 3f96f731e226d703cb10e43f761eec0d9a3fc4886629b87edd0e7caccc24096c: Status 404 returned error can't find the container with id 3f96f731e226d703cb10e43f761eec0d9a3fc4886629b87edd0e7caccc24096c
Dec 10 12:34:56 crc kubenswrapper[4689]: W1210 12:34:56.251387 4689 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podedfb6012_77c8_4a57_b217_19089c4a9d17.slice/crio-e5103f1405f3349499dd1e259da3505ccba697733f4f2035f0fccb2eef42baed WatchSource:0}: Error finding container e5103f1405f3349499dd1e259da3505ccba697733f4f2035f0fccb2eef42baed: Status 404 returned error can't find the container with id e5103f1405f3349499dd1e259da3505ccba697733f4f2035f0fccb2eef42baed
Dec 10 12:34:56 crc kubenswrapper[4689]: W1210 12:34:56.264093 4689 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0a323df3_5a3b_41cb_afc0_cbd2e4933ec0.slice/crio-b1fd20d2cfe9a0571c94e8d467713ce91944e57445f0c01ac31685eb2ab075bc WatchSource:0}: Error finding container b1fd20d2cfe9a0571c94e8d467713ce91944e57445f0c01ac31685eb2ab075bc: Status 404 returned error can't find the container with id b1fd20d2cfe9a0571c94e8d467713ce91944e57445f0c01ac31685eb2ab075bc
Dec 10 12:34:56 crc kubenswrapper[4689]: W1210 12:34:56.266543 4689 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1dfb9cf4_7f75_40c2_ade2_81a91be5ad12.slice/crio-5a25e1c94167f5fa20abe190fe321119792df394ecfb710c414eb1c2ef7b4197 WatchSource:0}: Error finding container 5a25e1c94167f5fa20abe190fe321119792df394ecfb710c414eb1c2ef7b4197: Status 404 returned error can't find the container with id 5a25e1c94167f5fa20abe190fe321119792df394ecfb710c414eb1c2ef7b4197
Dec 10 12:34:56 crc kubenswrapper[4689]: I1210 12:34:56.298570 4689 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"]
Dec 10 12:34:56 crc kubenswrapper[4689]: I1210 12:34:56.535461 4689 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7ff5475cc9-jtfkf"
Dec 10 12:34:56 crc kubenswrapper[4689]: I1210 12:34:56.558326 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0be1a0e4-d74f-4537-b184-eee8c421275a-ovsdbserver-nb\") pod \"0be1a0e4-d74f-4537-b184-eee8c421275a\" (UID: \"0be1a0e4-d74f-4537-b184-eee8c421275a\") "
Dec 10 12:34:56 crc kubenswrapper[4689]: I1210 12:34:56.558399 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0be1a0e4-d74f-4537-b184-eee8c421275a-ovsdbserver-sb\") pod \"0be1a0e4-d74f-4537-b184-eee8c421275a\" (UID: \"0be1a0e4-d74f-4537-b184-eee8c421275a\") "
Dec 10 12:34:56 crc kubenswrapper[4689]: I1210 12:34:56.558424 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-92c6w\" (UniqueName: \"kubernetes.io/projected/0be1a0e4-d74f-4537-b184-eee8c421275a-kube-api-access-92c6w\") pod \"0be1a0e4-d74f-4537-b184-eee8c421275a\" (UID: \"0be1a0e4-d74f-4537-b184-eee8c421275a\") "
Dec 10 12:34:56 crc kubenswrapper[4689]: I1210 12:34:56.558469 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0be1a0e4-d74f-4537-b184-eee8c421275a-dns-swift-storage-0\") pod \"0be1a0e4-d74f-4537-b184-eee8c421275a\" (UID: \"0be1a0e4-d74f-4537-b184-eee8c421275a\") "
Dec 10 12:34:56 crc kubenswrapper[4689]: I1210 12:34:56.558512 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0be1a0e4-d74f-4537-b184-eee8c421275a-config\") pod \"0be1a0e4-d74f-4537-b184-eee8c421275a\" (UID: \"0be1a0e4-d74f-4537-b184-eee8c421275a\") "
Dec 10 12:34:56 crc kubenswrapper[4689]: I1210 12:34:56.558646 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0be1a0e4-d74f-4537-b184-eee8c421275a-dns-svc\") pod \"0be1a0e4-d74f-4537-b184-eee8c421275a\" (UID: \"0be1a0e4-d74f-4537-b184-eee8c421275a\") "
Dec 10 12:34:56 crc kubenswrapper[4689]: I1210 12:34:56.566620 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0be1a0e4-d74f-4537-b184-eee8c421275a-kube-api-access-92c6w" (OuterVolumeSpecName: "kube-api-access-92c6w") pod "0be1a0e4-d74f-4537-b184-eee8c421275a" (UID: "0be1a0e4-d74f-4537-b184-eee8c421275a"). InnerVolumeSpecName "kube-api-access-92c6w". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 10 12:34:56 crc kubenswrapper[4689]: I1210 12:34:56.661388 4689 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-92c6w\" (UniqueName: \"kubernetes.io/projected/0be1a0e4-d74f-4537-b184-eee8c421275a-kube-api-access-92c6w\") on node \"crc\" DevicePath \"\""
Dec 10 12:34:56 crc kubenswrapper[4689]: I1210 12:34:56.800046 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0be1a0e4-d74f-4537-b184-eee8c421275a-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "0be1a0e4-d74f-4537-b184-eee8c421275a" (UID: "0be1a0e4-d74f-4537-b184-eee8c421275a"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 10 12:34:56 crc kubenswrapper[4689]: I1210 12:34:56.805490 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0be1a0e4-d74f-4537-b184-eee8c421275a-config" (OuterVolumeSpecName: "config") pod "0be1a0e4-d74f-4537-b184-eee8c421275a" (UID: "0be1a0e4-d74f-4537-b184-eee8c421275a"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 10 12:34:56 crc kubenswrapper[4689]: I1210 12:34:56.811632 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0be1a0e4-d74f-4537-b184-eee8c421275a-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "0be1a0e4-d74f-4537-b184-eee8c421275a" (UID: "0be1a0e4-d74f-4537-b184-eee8c421275a"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 10 12:34:56 crc kubenswrapper[4689]: I1210 12:34:56.812730 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0be1a0e4-d74f-4537-b184-eee8c421275a-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "0be1a0e4-d74f-4537-b184-eee8c421275a" (UID: "0be1a0e4-d74f-4537-b184-eee8c421275a"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 10 12:34:56 crc kubenswrapper[4689]: I1210 12:34:56.823668 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0be1a0e4-d74f-4537-b184-eee8c421275a-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "0be1a0e4-d74f-4537-b184-eee8c421275a" (UID: "0be1a0e4-d74f-4537-b184-eee8c421275a"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 10 12:34:56 crc kubenswrapper[4689]: I1210 12:34:56.865339 4689 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0be1a0e4-d74f-4537-b184-eee8c421275a-dns-svc\") on node \"crc\" DevicePath \"\""
Dec 10 12:34:56 crc kubenswrapper[4689]: I1210 12:34:56.865363 4689 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0be1a0e4-d74f-4537-b184-eee8c421275a-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Dec 10 12:34:56 crc kubenswrapper[4689]: I1210 12:34:56.865373 4689 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0be1a0e4-d74f-4537-b184-eee8c421275a-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Dec 10 12:34:56 crc kubenswrapper[4689]: I1210 12:34:56.865382 4689 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0be1a0e4-d74f-4537-b184-eee8c421275a-dns-swift-storage-0\") on node \"crc\" DevicePath \"\""
Dec 10 12:34:56 crc kubenswrapper[4689]: I1210 12:34:56.865390 4689 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0be1a0e4-d74f-4537-b184-eee8c421275a-config\") on node \"crc\" DevicePath \"\""
Dec 10 12:34:57 crc kubenswrapper[4689]: I1210 12:34:57.091679 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-dkrw7" event={"ID":"e10d322a-3fb7-451d-9a38-f2659e3d32e5","Type":"ContainerStarted","Data":"3f96f731e226d703cb10e43f761eec0d9a3fc4886629b87edd0e7caccc24096c"}
Dec 10 12:34:57 crc kubenswrapper[4689]: I1210 12:34:57.094061 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-7xjms" event={"ID":"ae5c2d84-08cd-462f-b8d5-ba416353f365","Type":"ContainerStarted","Data":"fd0e9f96f428a8ece5e066305085dee4c12d92dd9614b5415c98d83ab2881027"}
Dec 10 12:34:57 crc kubenswrapper[4689]: I1210 12:34:57.095542 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"55350420-790f-4f9e-8ba6-3872bd55ceb8","Type":"ContainerStarted","Data":"40cd5674915e25b25713bb52496d8058231eb10acb4765a06a6dcfb20241df9e"}
Dec 10 12:34:57 crc kubenswrapper[4689]: I1210 12:34:57.097965 4689 generic.go:334] "Generic (PLEG): container finished" podID="9880fdc4-4a6b-4353-9b05-fefd96248c09" containerID="05a0f473503937b36731f0677cfdebfd2c9dcd7883a18349c913d982b605df10" exitCode=0
Dec 10 12:34:57 crc kubenswrapper[4689]: I1210 12:34:57.098018 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-db-create-c5fbw" event={"ID":"9880fdc4-4a6b-4353-9b05-fefd96248c09","Type":"ContainerDied","Data":"05a0f473503937b36731f0677cfdebfd2c9dcd7883a18349c913d982b605df10"}
Dec 10 12:34:57 crc kubenswrapper[4689]: I1210 12:34:57.098032 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-db-create-c5fbw" event={"ID":"9880fdc4-4a6b-4353-9b05-fefd96248c09","Type":"ContainerStarted","Data":"fda3aa2dbe782123442b56cc48bcd43497b3b6ab5895ff6535b9330181fa0cb0"}
Dec 10 12:34:57 crc kubenswrapper[4689]: I1210 12:34:57.100222 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-w7tlj" event={"ID":"048101b4-0993-42cb-bac6-d9384a242856","Type":"ContainerStarted","Data":"926623c39956cbbb50fcf8167fa246964bf21edafad9db44a292a6bf55cf7ddd"}
Dec 10 12:34:57 crc kubenswrapper[4689]: I1210 12:34:57.100247 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-w7tlj" event={"ID":"048101b4-0993-42cb-bac6-d9384a242856","Type":"ContainerStarted","Data":"e84ce448d3ee79acd9ac780276bbe9474ce9636e6d2e35f71b3bdac5930ec81a"}
Dec 10 12:34:57 crc kubenswrapper[4689]: I1210 12:34:57.104092 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7ff5475cc9-jtfkf" event={"ID":"0be1a0e4-d74f-4537-b184-eee8c421275a","Type":"ContainerDied","Data":"0414902e800164d1ebbadd81a059ca6d2bf9e3a052d01817052960f4902bc291"}
Dec 10 12:34:57 crc kubenswrapper[4689]: I1210 12:34:57.104131 4689 scope.go:117] "RemoveContainer" containerID="4d0f79cba7a73e92e89e8639f599c41ce98f73a5cbd62f148dba2796155b00e2"
Dec 10 12:34:57 crc kubenswrapper[4689]: I1210 12:34:57.104221 4689 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7ff5475cc9-jtfkf"
Dec 10 12:34:57 crc kubenswrapper[4689]: I1210 12:34:57.151468 4689 generic.go:334] "Generic (PLEG): container finished" podID="0a323df3-5a3b-41cb-afc0-cbd2e4933ec0" containerID="38ee574efe9ad80ddcb9269b1b2c3ca75c2de074710a3a5f0909610f7224db31" exitCode=0
Dec 10 12:34:57 crc kubenswrapper[4689]: I1210 12:34:57.151672 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-32f3-account-create-update-qwdgk" event={"ID":"0a323df3-5a3b-41cb-afc0-cbd2e4933ec0","Type":"ContainerDied","Data":"38ee574efe9ad80ddcb9269b1b2c3ca75c2de074710a3a5f0909610f7224db31"}
Dec 10 12:34:57 crc kubenswrapper[4689]: I1210 12:34:57.151799 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-32f3-account-create-update-qwdgk" event={"ID":"0a323df3-5a3b-41cb-afc0-cbd2e4933ec0","Type":"ContainerStarted","Data":"b1fd20d2cfe9a0571c94e8d467713ce91944e57445f0c01ac31685eb2ab075bc"}
Dec 10 12:34:57 crc kubenswrapper[4689]: I1210 12:34:57.167599 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1dfb9cf4-7f75-40c2-ade2-81a91be5ad12","Type":"ContainerStarted","Data":"5a25e1c94167f5fa20abe190fe321119792df394ecfb710c414eb1c2ef7b4197"}
Dec 10 12:34:57 crc kubenswrapper[4689]: I1210 12:34:57.174243 4689 generic.go:334] "Generic (PLEG): container finished" podID="cdd4ece3-0d47-424e-8408-51f9fce3af5d" containerID="4c74c247795dd7a32e4d0b60e6f102272cc30a9b26e76af4543b254feab98288" exitCode=0
Dec 10 12:34:57 crc kubenswrapper[4689]: I1210 12:34:57.174357 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c5cc7c5ff-7z4m6" event={"ID":"cdd4ece3-0d47-424e-8408-51f9fce3af5d","Type":"ContainerDied","Data":"4c74c247795dd7a32e4d0b60e6f102272cc30a9b26e76af4543b254feab98288"}
Dec 10 12:34:57 crc kubenswrapper[4689]: I1210 12:34:57.176804 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-w7tlj" podStartSLOduration=6.176785474 podStartE2EDuration="6.176785474s" podCreationTimestamp="2025-12-10 12:34:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 12:34:57.159928405 +0000 UTC m=+1164.948009543" watchObservedRunningTime="2025-12-10 12:34:57.176785474 +0000 UTC m=+1164.964866612"
Dec 10 12:34:57 crc kubenswrapper[4689]: I1210 12:34:57.217314 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"8f172078-587d-4ee6-a804-0b5168bdb3dc","Type":"ContainerStarted","Data":"391cb163d6c633c21c0a5f1eee2d89c01362e9343307691a8ab4179cb3f6ab0a"}
Dec 10 12:34:57 crc kubenswrapper[4689]: I1210 12:34:57.217437 4689 scope.go:117] "RemoveContainer" containerID="c0443236043802f655ef1e16d76fdbe91c805da50d07ef5dbcc5b08af37d2956"
Dec 10 12:34:57 crc kubenswrapper[4689]: I1210 12:34:57.263678 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-kb47t" event={"ID":"79d420d1-6ba7-4cf2-9e13-b046a65d378c","Type":"ContainerStarted","Data":"d38bf7b66608e6c3d7f782ed0b0e1e26001a4bbea79285f0ebc5819b4503ac2d"}
Dec 10 12:34:57 crc kubenswrapper[4689]: I1210 12:34:57.263729 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-kb47t" event={"ID":"79d420d1-6ba7-4cf2-9e13-b046a65d378c","Type":"ContainerStarted","Data":"8c7205ffcef8e458ff85def13cfeea6941176a2fab0ddae26d4242fc39552cc9"}
Dec 10 12:34:57 crc kubenswrapper[4689]: I1210 12:34:57.278187 4689 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7ff5475cc9-jtfkf"]
Dec 10 12:34:57 crc kubenswrapper[4689]: I1210 12:34:57.292738 4689 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7ff5475cc9-jtfkf"]
Dec 10 12:34:57 crc kubenswrapper[4689]: I1210 12:34:57.320758 4689 generic.go:334] "Generic (PLEG): container finished" podID="edfb6012-77c8-4a57-b217-19089c4a9d17" containerID="62247004df63262efca8e0474093d3f91d333703b1b40580b8bfbce76fa1efa8" exitCode=0
Dec 10 12:34:57 crc kubenswrapper[4689]: I1210 12:34:57.320827 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8b5c85b87-w7sjh" event={"ID":"edfb6012-77c8-4a57-b217-19089c4a9d17","Type":"ContainerDied","Data":"62247004df63262efca8e0474093d3f91d333703b1b40580b8bfbce76fa1efa8"}
Dec 10 12:34:57 crc kubenswrapper[4689]: I1210 12:34:57.320858 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8b5c85b87-w7sjh" event={"ID":"edfb6012-77c8-4a57-b217-19089c4a9d17","Type":"ContainerStarted","Data":"e5103f1405f3349499dd1e259da3505ccba697733f4f2035f0fccb2eef42baed"}
Dec 10 12:34:57 crc kubenswrapper[4689]: I1210 12:34:57.373609 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-sync-kb47t" podStartSLOduration=6.373591079 podStartE2EDuration="6.373591079s" podCreationTimestamp="2025-12-10 12:34:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 12:34:57.285283457 +0000 UTC m=+1165.073364595" watchObservedRunningTime="2025-12-10 12:34:57.373591079 +0000 UTC m=+1165.161672217"
Dec 10 12:34:57 crc kubenswrapper[4689]: I1210 12:34:57.736418 4689 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c5cc7c5ff-7z4m6"
Dec 10 12:34:57 crc kubenswrapper[4689]: I1210 12:34:57.791465 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/cdd4ece3-0d47-424e-8408-51f9fce3af5d-dns-swift-storage-0\") pod \"cdd4ece3-0d47-424e-8408-51f9fce3af5d\" (UID: \"cdd4ece3-0d47-424e-8408-51f9fce3af5d\") "
Dec 10 12:34:57 crc kubenswrapper[4689]: I1210 12:34:57.791532 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cdd4ece3-0d47-424e-8408-51f9fce3af5d-dns-svc\") pod \"cdd4ece3-0d47-424e-8408-51f9fce3af5d\" (UID: \"cdd4ece3-0d47-424e-8408-51f9fce3af5d\") "
Dec 10 12:34:57 crc kubenswrapper[4689]: I1210 12:34:57.793151 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rcm9h\" (UniqueName: \"kubernetes.io/projected/cdd4ece3-0d47-424e-8408-51f9fce3af5d-kube-api-access-rcm9h\") pod \"cdd4ece3-0d47-424e-8408-51f9fce3af5d\" (UID: \"cdd4ece3-0d47-424e-8408-51f9fce3af5d\") "
Dec 10 12:34:57 crc kubenswrapper[4689]: I1210 12:34:57.793179 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cdd4ece3-0d47-424e-8408-51f9fce3af5d-config\") pod \"cdd4ece3-0d47-424e-8408-51f9fce3af5d\" (UID: \"cdd4ece3-0d47-424e-8408-51f9fce3af5d\") "
Dec 10 12:34:57 crc kubenswrapper[4689]: I1210 12:34:57.793214 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cdd4ece3-0d47-424e-8408-51f9fce3af5d-ovsdbserver-sb\") pod \"cdd4ece3-0d47-424e-8408-51f9fce3af5d\" (UID: \"cdd4ece3-0d47-424e-8408-51f9fce3af5d\") "
Dec 10 12:34:57 crc kubenswrapper[4689]: I1210 12:34:57.793347 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cdd4ece3-0d47-424e-8408-51f9fce3af5d-ovsdbserver-nb\") pod \"cdd4ece3-0d47-424e-8408-51f9fce3af5d\" (UID: \"cdd4ece3-0d47-424e-8408-51f9fce3af5d\") "
Dec 10 12:34:57 crc kubenswrapper[4689]: I1210 12:34:57.805046 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cdd4ece3-0d47-424e-8408-51f9fce3af5d-kube-api-access-rcm9h" (OuterVolumeSpecName: "kube-api-access-rcm9h") pod "cdd4ece3-0d47-424e-8408-51f9fce3af5d" (UID: "cdd4ece3-0d47-424e-8408-51f9fce3af5d"). InnerVolumeSpecName "kube-api-access-rcm9h". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 10 12:34:57 crc kubenswrapper[4689]: I1210 12:34:57.818439 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cdd4ece3-0d47-424e-8408-51f9fce3af5d-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "cdd4ece3-0d47-424e-8408-51f9fce3af5d" (UID: "cdd4ece3-0d47-424e-8408-51f9fce3af5d"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 10 12:34:57 crc kubenswrapper[4689]: I1210 12:34:57.820537 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cdd4ece3-0d47-424e-8408-51f9fce3af5d-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "cdd4ece3-0d47-424e-8408-51f9fce3af5d" (UID: "cdd4ece3-0d47-424e-8408-51f9fce3af5d"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 10 12:34:57 crc kubenswrapper[4689]: I1210 12:34:57.829928 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cdd4ece3-0d47-424e-8408-51f9fce3af5d-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "cdd4ece3-0d47-424e-8408-51f9fce3af5d" (UID: "cdd4ece3-0d47-424e-8408-51f9fce3af5d"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 10 12:34:57 crc kubenswrapper[4689]: I1210 12:34:57.872421 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cdd4ece3-0d47-424e-8408-51f9fce3af5d-config" (OuterVolumeSpecName: "config") pod "cdd4ece3-0d47-424e-8408-51f9fce3af5d" (UID: "cdd4ece3-0d47-424e-8408-51f9fce3af5d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 10 12:34:57 crc kubenswrapper[4689]: I1210 12:34:57.886322 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cdd4ece3-0d47-424e-8408-51f9fce3af5d-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "cdd4ece3-0d47-424e-8408-51f9fce3af5d" (UID: "cdd4ece3-0d47-424e-8408-51f9fce3af5d"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 10 12:34:57 crc kubenswrapper[4689]: I1210 12:34:57.895489 4689 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cdd4ece3-0d47-424e-8408-51f9fce3af5d-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Dec 10 12:34:57 crc kubenswrapper[4689]: I1210 12:34:57.895526 4689 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/cdd4ece3-0d47-424e-8408-51f9fce3af5d-dns-swift-storage-0\") on node \"crc\" DevicePath \"\""
Dec 10 12:34:57 crc kubenswrapper[4689]: I1210 12:34:57.895539 4689 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cdd4ece3-0d47-424e-8408-51f9fce3af5d-dns-svc\") on node \"crc\" DevicePath \"\""
Dec 10 12:34:57 crc kubenswrapper[4689]: I1210 12:34:57.895549 4689 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rcm9h\" (UniqueName: \"kubernetes.io/projected/cdd4ece3-0d47-424e-8408-51f9fce3af5d-kube-api-access-rcm9h\") on node \"crc\" DevicePath \"\""
Dec 10 12:34:57 crc kubenswrapper[4689]: I1210 12:34:57.895562 4689 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cdd4ece3-0d47-424e-8408-51f9fce3af5d-config\") on node \"crc\" DevicePath \"\""
Dec 10 12:34:57 crc kubenswrapper[4689]: I1210 12:34:57.895572 4689 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cdd4ece3-0d47-424e-8408-51f9fce3af5d-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Dec 10 12:34:58 crc kubenswrapper[4689]: I1210 12:34:58.332799 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c5cc7c5ff-7z4m6" event={"ID":"cdd4ece3-0d47-424e-8408-51f9fce3af5d","Type":"ContainerDied","Data":"cd9c1dedbd4b8c34ce9d9997cbbe1607db54d627bf2b2cd3c23f5dc06e0c37ad"}
Dec 10 12:34:58 crc kubenswrapper[4689]: I1210 12:34:58.333147 4689 scope.go:117] "RemoveContainer" containerID="4c74c247795dd7a32e4d0b60e6f102272cc30a9b26e76af4543b254feab98288"
Dec 10 12:34:58 crc kubenswrapper[4689]: I1210 12:34:58.334410 4689 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c5cc7c5ff-7z4m6"
Dec 10 12:34:58 crc kubenswrapper[4689]: I1210 12:34:58.339597 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"8f172078-587d-4ee6-a804-0b5168bdb3dc","Type":"ContainerStarted","Data":"2e8dfe7fa3f93de94be7de4bab8e155d46b77a3c80c4d446fa3be93312ae8715"}
Dec 10 12:34:58 crc kubenswrapper[4689]: I1210 12:34:58.343921 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8b5c85b87-w7sjh" event={"ID":"edfb6012-77c8-4a57-b217-19089c4a9d17","Type":"ContainerStarted","Data":"7b3ca4d4490995ce3c4cdd048536970eb37033cd4e5cd5010e2ca0532d1d16ca"}
Dec 10 12:34:58 crc kubenswrapper[4689]: I1210 12:34:58.347161 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-8b5c85b87-w7sjh"
Dec 10 12:34:58 crc kubenswrapper[4689]: I1210 12:34:58.350342 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"55350420-790f-4f9e-8ba6-3872bd55ceb8","Type":"ContainerStarted","Data":"c8467361e3624c4d9a2432181ec402481c1c135635309d44b19e1a5259fa8e31"}
Dec 10 12:34:58 crc kubenswrapper[4689]: I1210 12:34:58.376322 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-8b5c85b87-w7sjh" podStartSLOduration=7.376304521 podStartE2EDuration="7.376304521s" podCreationTimestamp="2025-12-10 12:34:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 12:34:58.367405611 +0000 UTC m=+1166.155486749" watchObservedRunningTime="2025-12-10 12:34:58.376304521 +0000 UTC m=+1166.164385659"
Dec 10 12:34:58 crc kubenswrapper[4689]: I1210 12:34:58.408812 4689 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c5cc7c5ff-7z4m6"]
Dec 10 12:34:58 crc kubenswrapper[4689]: I1210 12:34:58.422234 4689 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5c5cc7c5ff-7z4m6"]
Dec 10 12:34:58 crc kubenswrapper[4689]: I1210 12:34:58.524327 4689 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0be1a0e4-d74f-4537-b184-eee8c421275a" path="/var/lib/kubelet/pods/0be1a0e4-d74f-4537-b184-eee8c421275a/volumes"
Dec 10 12:34:58 crc kubenswrapper[4689]: I1210 12:34:58.525202 4689 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cdd4ece3-0d47-424e-8408-51f9fce3af5d" path="/var/lib/kubelet/pods/cdd4ece3-0d47-424e-8408-51f9fce3af5d/volumes"
Dec 10 12:34:58 crc kubenswrapper[4689]: I1210 12:34:58.899792 4689 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-db-create-c5fbw"
Dec 10 12:34:58 crc kubenswrapper[4689]: I1210 12:34:58.911518 4689 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-32f3-account-create-update-qwdgk"
Dec 10 12:34:59 crc kubenswrapper[4689]: I1210 12:34:59.016679 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lgtxt\" (UniqueName: \"kubernetes.io/projected/9880fdc4-4a6b-4353-9b05-fefd96248c09-kube-api-access-lgtxt\") pod \"9880fdc4-4a6b-4353-9b05-fefd96248c09\" (UID: \"9880fdc4-4a6b-4353-9b05-fefd96248c09\") "
Dec 10 12:34:59 crc kubenswrapper[4689]: I1210 12:34:59.016784 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9880fdc4-4a6b-4353-9b05-fefd96248c09-operator-scripts\") pod \"9880fdc4-4a6b-4353-9b05-fefd96248c09\" (UID: \"9880fdc4-4a6b-4353-9b05-fefd96248c09\") "
Dec 10 12:34:59 crc kubenswrapper[4689]: I1210 12:34:59.016811 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0a323df3-5a3b-41cb-afc0-cbd2e4933ec0-operator-scripts\") pod \"0a323df3-5a3b-41cb-afc0-cbd2e4933ec0\" (UID: \"0a323df3-5a3b-41cb-afc0-cbd2e4933ec0\") "
Dec 10 12:34:59 crc kubenswrapper[4689]: I1210 12:34:59.016835 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2m4lp\" (UniqueName: \"kubernetes.io/projected/0a323df3-5a3b-41cb-afc0-cbd2e4933ec0-kube-api-access-2m4lp\") pod \"0a323df3-5a3b-41cb-afc0-cbd2e4933ec0\" (UID: \"0a323df3-5a3b-41cb-afc0-cbd2e4933ec0\") "
Dec 10 12:34:59 crc kubenswrapper[4689]: I1210 12:34:59.018616 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9880fdc4-4a6b-4353-9b05-fefd96248c09-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "9880fdc4-4a6b-4353-9b05-fefd96248c09" (UID: "9880fdc4-4a6b-4353-9b05-fefd96248c09"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 10 12:34:59 crc kubenswrapper[4689]: I1210 12:34:59.018629 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0a323df3-5a3b-41cb-afc0-cbd2e4933ec0-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "0a323df3-5a3b-41cb-afc0-cbd2e4933ec0" (UID: "0a323df3-5a3b-41cb-afc0-cbd2e4933ec0"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 10 12:34:59 crc kubenswrapper[4689]: I1210 12:34:59.019050 4689 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9880fdc4-4a6b-4353-9b05-fefd96248c09-operator-scripts\") on node \"crc\" DevicePath \"\""
Dec 10 12:34:59 crc kubenswrapper[4689]: I1210 12:34:59.019078 4689 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0a323df3-5a3b-41cb-afc0-cbd2e4933ec0-operator-scripts\") on node \"crc\" DevicePath \"\""
Dec 10 12:34:59 crc kubenswrapper[4689]: I1210 12:34:59.037068 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9880fdc4-4a6b-4353-9b05-fefd96248c09-kube-api-access-lgtxt" (OuterVolumeSpecName: "kube-api-access-lgtxt") pod "9880fdc4-4a6b-4353-9b05-fefd96248c09" (UID: "9880fdc4-4a6b-4353-9b05-fefd96248c09"). InnerVolumeSpecName "kube-api-access-lgtxt". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 10 12:34:59 crc kubenswrapper[4689]: I1210 12:34:59.037367 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0a323df3-5a3b-41cb-afc0-cbd2e4933ec0-kube-api-access-2m4lp" (OuterVolumeSpecName: "kube-api-access-2m4lp") pod "0a323df3-5a3b-41cb-afc0-cbd2e4933ec0" (UID: "0a323df3-5a3b-41cb-afc0-cbd2e4933ec0"). InnerVolumeSpecName "kube-api-access-2m4lp". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 10 12:34:59 crc kubenswrapper[4689]: I1210 12:34:59.120737 4689 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lgtxt\" (UniqueName: \"kubernetes.io/projected/9880fdc4-4a6b-4353-9b05-fefd96248c09-kube-api-access-lgtxt\") on node \"crc\" DevicePath \"\""
Dec 10 12:34:59 crc kubenswrapper[4689]: I1210 12:34:59.120780 4689 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2m4lp\" (UniqueName: \"kubernetes.io/projected/0a323df3-5a3b-41cb-afc0-cbd2e4933ec0-kube-api-access-2m4lp\") on node \"crc\" DevicePath \"\""
Dec 10 12:34:59 crc kubenswrapper[4689]: I1210 12:34:59.365417 4689 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-32f3-account-create-update-qwdgk"
Dec 10 12:34:59 crc kubenswrapper[4689]: I1210 12:34:59.365423 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-32f3-account-create-update-qwdgk" event={"ID":"0a323df3-5a3b-41cb-afc0-cbd2e4933ec0","Type":"ContainerDied","Data":"b1fd20d2cfe9a0571c94e8d467713ce91944e57445f0c01ac31685eb2ab075bc"}
Dec 10 12:34:59 crc kubenswrapper[4689]: I1210 12:34:59.365466 4689 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b1fd20d2cfe9a0571c94e8d467713ce91944e57445f0c01ac31685eb2ab075bc"
Dec 10 12:34:59 crc kubenswrapper[4689]: I1210 12:34:59.367945 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"8f172078-587d-4ee6-a804-0b5168bdb3dc","Type":"ContainerStarted","Data":"f9689f9eb16ba1dce9b55f50f147a30ce8eee94b0207142f44e57ecee48d07a2"}
Dec 10 12:34:59 crc kubenswrapper[4689]: I1210 12:34:59.368080 4689 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="8f172078-587d-4ee6-a804-0b5168bdb3dc" containerName="glance-log" containerID="cri-o://2e8dfe7fa3f93de94be7de4bab8e155d46b77a3c80c4d446fa3be93312ae8715" gracePeriod=30
Dec 10 12:34:59 crc kubenswrapper[4689]: I1210 12:34:59.368131 4689 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="8f172078-587d-4ee6-a804-0b5168bdb3dc" containerName="glance-httpd" containerID="cri-o://f9689f9eb16ba1dce9b55f50f147a30ce8eee94b0207142f44e57ecee48d07a2" gracePeriod=30
Dec 10 12:34:59 crc kubenswrapper[4689]: I1210 12:34:59.374818 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"55350420-790f-4f9e-8ba6-3872bd55ceb8","Type":"ContainerStarted","Data":"4cf2de413c7cf48e752c2e995d641639d47bb2476f0d844dd3eeff9fd5dd669c"}
Dec 10 12:34:59 crc kubenswrapper[4689]: I1210 12:34:59.374991 4689 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="55350420-790f-4f9e-8ba6-3872bd55ceb8" containerName="glance-log" containerID="cri-o://c8467361e3624c4d9a2432181ec402481c1c135635309d44b19e1a5259fa8e31" gracePeriod=30
Dec 10 12:34:59 crc kubenswrapper[4689]: I1210 12:34:59.375579 4689 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="55350420-790f-4f9e-8ba6-3872bd55ceb8" containerName="glance-httpd" containerID="cri-o://4cf2de413c7cf48e752c2e995d641639d47bb2476f0d844dd3eeff9fd5dd669c" gracePeriod=30
Dec 10 12:34:59 crc kubenswrapper[4689]: I1210 12:34:59.379997 4689 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-db-create-c5fbw"
Dec 10 12:34:59 crc kubenswrapper[4689]: I1210 12:34:59.380308 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-db-create-c5fbw" event={"ID":"9880fdc4-4a6b-4353-9b05-fefd96248c09","Type":"ContainerDied","Data":"fda3aa2dbe782123442b56cc48bcd43497b3b6ab5895ff6535b9330181fa0cb0"}
Dec 10 12:34:59 crc kubenswrapper[4689]: I1210 12:34:59.380354 4689 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fda3aa2dbe782123442b56cc48bcd43497b3b6ab5895ff6535b9330181fa0cb0"
Dec 10 12:34:59 crc kubenswrapper[4689]: I1210 12:34:59.400170 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=8.400152069 podStartE2EDuration="8.400152069s" podCreationTimestamp="2025-12-10 12:34:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 12:34:59.394001446 +0000 UTC m=+1167.182082584" watchObservedRunningTime="2025-12-10 12:34:59.400152069 +0000 UTC m=+1167.188233207"
Dec 10 12:34:59 crc kubenswrapper[4689]: I1210 12:34:59.427895 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=8.427877798 podStartE2EDuration="8.427877798s" podCreationTimestamp="2025-12-10 12:34:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 12:34:59.417292614 +0000 UTC m=+1167.205373752" watchObservedRunningTime="2025-12-10 12:34:59.427877798 +0000 UTC m=+1167.215958936"
Dec 10 12:35:00 crc kubenswrapper[4689]: I1210 12:35:00.396331 4689 generic.go:334] "Generic (PLEG): container finished" podID="55350420-790f-4f9e-8ba6-3872bd55ceb8" containerID="4cf2de413c7cf48e752c2e995d641639d47bb2476f0d844dd3eeff9fd5dd669c" exitCode=0
Dec 10 12:35:00 crc kubenswrapper[4689]: I1210 12:35:00.396377 4689 generic.go:334] "Generic (PLEG): container finished" podID="55350420-790f-4f9e-8ba6-3872bd55ceb8" containerID="c8467361e3624c4d9a2432181ec402481c1c135635309d44b19e1a5259fa8e31" exitCode=143
Dec 10 12:35:00 crc kubenswrapper[4689]: I1210 12:35:00.396385 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"55350420-790f-4f9e-8ba6-3872bd55ceb8","Type":"ContainerDied","Data":"4cf2de413c7cf48e752c2e995d641639d47bb2476f0d844dd3eeff9fd5dd669c"}
Dec 10 12:35:00 crc kubenswrapper[4689]: I1210 12:35:00.396439 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"55350420-790f-4f9e-8ba6-3872bd55ceb8","Type":"ContainerDied","Data":"c8467361e3624c4d9a2432181ec402481c1c135635309d44b19e1a5259fa8e31"}
Dec 10 12:35:00 crc kubenswrapper[4689]: I1210 12:35:00.401234 4689 generic.go:334] "Generic (PLEG): container finished" podID="048101b4-0993-42cb-bac6-d9384a242856" containerID="926623c39956cbbb50fcf8167fa246964bf21edafad9db44a292a6bf55cf7ddd" exitCode=0
Dec 10 12:35:00 crc kubenswrapper[4689]: I1210 12:35:00.401303 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-w7tlj" event={"ID":"048101b4-0993-42cb-bac6-d9384a242856","Type":"ContainerDied","Data":"926623c39956cbbb50fcf8167fa246964bf21edafad9db44a292a6bf55cf7ddd"}
Dec 10 12:35:00 crc kubenswrapper[4689]: I1210 12:35:00.406046 4689 generic.go:334] "Generic (PLEG): container finished" podID="8f172078-587d-4ee6-a804-0b5168bdb3dc" containerID="f9689f9eb16ba1dce9b55f50f147a30ce8eee94b0207142f44e57ecee48d07a2" exitCode=0
Dec 10 12:35:00 crc kubenswrapper[4689]: I1210 12:35:00.406078 4689 generic.go:334] "Generic (PLEG): container finished" podID="8f172078-587d-4ee6-a804-0b5168bdb3dc" containerID="2e8dfe7fa3f93de94be7de4bab8e155d46b77a3c80c4d446fa3be93312ae8715" exitCode=143
Dec 10 12:35:00 crc kubenswrapper[4689]: I1210 12:35:00.406116 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"8f172078-587d-4ee6-a804-0b5168bdb3dc","Type":"ContainerDied","Data":"f9689f9eb16ba1dce9b55f50f147a30ce8eee94b0207142f44e57ecee48d07a2"}
Dec 10 12:35:00 crc kubenswrapper[4689]: I1210 12:35:00.406157 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"8f172078-587d-4ee6-a804-0b5168bdb3dc","Type":"ContainerDied","Data":"2e8dfe7fa3f93de94be7de4bab8e155d46b77a3c80c4d446fa3be93312ae8715"}
Dec 10 12:35:01 crc kubenswrapper[4689]: I1210 12:35:01.995228 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ironic-db-sync-jz2g4"]
Dec 10 12:35:01 crc kubenswrapper[4689]: E1210 12:35:01.996511 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0a323df3-5a3b-41cb-afc0-cbd2e4933ec0" containerName="mariadb-account-create-update"
Dec 10 12:35:01 crc kubenswrapper[4689]: I1210 12:35:01.996557 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="0a323df3-5a3b-41cb-afc0-cbd2e4933ec0" containerName="mariadb-account-create-update"
Dec 10 12:35:01 crc kubenswrapper[4689]: E1210 12:35:01.996574 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0be1a0e4-d74f-4537-b184-eee8c421275a" containerName="init"
Dec 10 12:35:01 crc kubenswrapper[4689]: I1210 12:35:01.996583 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="0be1a0e4-d74f-4537-b184-eee8c421275a" containerName="init"
Dec 10 12:35:01 crc kubenswrapper[4689]: E1210 12:35:01.996600 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cdd4ece3-0d47-424e-8408-51f9fce3af5d" containerName="init"
Dec 10 12:35:01 crc kubenswrapper[4689]: I1210 12:35:01.996609 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="cdd4ece3-0d47-424e-8408-51f9fce3af5d" containerName="init"
Dec 10 12:35:01 crc kubenswrapper[4689]: E1210 12:35:01.996646 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9880fdc4-4a6b-4353-9b05-fefd96248c09" containerName="mariadb-database-create"
Dec 10 12:35:01 crc kubenswrapper[4689]: I1210 12:35:01.996656 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="9880fdc4-4a6b-4353-9b05-fefd96248c09" containerName="mariadb-database-create"
Dec 10 12:35:01 crc kubenswrapper[4689]: E1210 12:35:01.996677 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0be1a0e4-d74f-4537-b184-eee8c421275a" containerName="dnsmasq-dns"
Dec 10 12:35:01 crc kubenswrapper[4689]: I1210 12:35:01.996686 4689
state_mem.go:107] "Deleted CPUSet assignment" podUID="0be1a0e4-d74f-4537-b184-eee8c421275a" containerName="dnsmasq-dns" Dec 10 12:35:01 crc kubenswrapper[4689]: I1210 12:35:01.997289 4689 memory_manager.go:354] "RemoveStaleState removing state" podUID="cdd4ece3-0d47-424e-8408-51f9fce3af5d" containerName="init" Dec 10 12:35:01 crc kubenswrapper[4689]: I1210 12:35:01.997317 4689 memory_manager.go:354] "RemoveStaleState removing state" podUID="0be1a0e4-d74f-4537-b184-eee8c421275a" containerName="dnsmasq-dns" Dec 10 12:35:01 crc kubenswrapper[4689]: I1210 12:35:01.997340 4689 memory_manager.go:354] "RemoveStaleState removing state" podUID="9880fdc4-4a6b-4353-9b05-fefd96248c09" containerName="mariadb-database-create" Dec 10 12:35:01 crc kubenswrapper[4689]: I1210 12:35:01.997386 4689 memory_manager.go:354] "RemoveStaleState removing state" podUID="0a323df3-5a3b-41cb-afc0-cbd2e4933ec0" containerName="mariadb-account-create-update" Dec 10 12:35:02 crc kubenswrapper[4689]: I1210 12:35:02.001130 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-db-sync-jz2g4" Dec 10 12:35:02 crc kubenswrapper[4689]: I1210 12:35:02.003309 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ironic-config-data" Dec 10 12:35:02 crc kubenswrapper[4689]: I1210 12:35:02.003519 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ironic-scripts" Dec 10 12:35:02 crc kubenswrapper[4689]: I1210 12:35:02.003584 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ironic-ironic-dockercfg-2wctm" Dec 10 12:35:02 crc kubenswrapper[4689]: I1210 12:35:02.005466 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-db-sync-jz2g4"] Dec 10 12:35:02 crc kubenswrapper[4689]: I1210 12:35:02.177332 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xphhh\" (UniqueName: \"kubernetes.io/projected/574d5244-06f3-49f9-b8b8-93bd57d4fc35-kube-api-access-xphhh\") pod \"ironic-db-sync-jz2g4\" (UID: \"574d5244-06f3-49f9-b8b8-93bd57d4fc35\") " pod="openstack/ironic-db-sync-jz2g4" Dec 10 12:35:02 crc kubenswrapper[4689]: I1210 12:35:02.178944 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/574d5244-06f3-49f9-b8b8-93bd57d4fc35-scripts\") pod \"ironic-db-sync-jz2g4\" (UID: \"574d5244-06f3-49f9-b8b8-93bd57d4fc35\") " pod="openstack/ironic-db-sync-jz2g4" Dec 10 12:35:02 crc kubenswrapper[4689]: I1210 12:35:02.179031 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/574d5244-06f3-49f9-b8b8-93bd57d4fc35-config-data-merged\") pod \"ironic-db-sync-jz2g4\" (UID: \"574d5244-06f3-49f9-b8b8-93bd57d4fc35\") " pod="openstack/ironic-db-sync-jz2g4" Dec 10 12:35:02 crc kubenswrapper[4689]: I1210 12:35:02.179136 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/574d5244-06f3-49f9-b8b8-93bd57d4fc35-config-data\") pod \"ironic-db-sync-jz2g4\" (UID: \"574d5244-06f3-49f9-b8b8-93bd57d4fc35\") " pod="openstack/ironic-db-sync-jz2g4" Dec 10 12:35:02 crc kubenswrapper[4689]: I1210 12:35:02.179232 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/574d5244-06f3-49f9-b8b8-93bd57d4fc35-combined-ca-bundle\") pod \"ironic-db-sync-jz2g4\" (UID: \"574d5244-06f3-49f9-b8b8-93bd57d4fc35\") " pod="openstack/ironic-db-sync-jz2g4" Dec 10 12:35:02 crc kubenswrapper[4689]: I1210 12:35:02.179257 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/574d5244-06f3-49f9-b8b8-93bd57d4fc35-etc-podinfo\") pod \"ironic-db-sync-jz2g4\" (UID: \"574d5244-06f3-49f9-b8b8-93bd57d4fc35\") " pod="openstack/ironic-db-sync-jz2g4" Dec 10 12:35:02 crc kubenswrapper[4689]: I1210 12:35:02.281095 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/574d5244-06f3-49f9-b8b8-93bd57d4fc35-combined-ca-bundle\") pod \"ironic-db-sync-jz2g4\" (UID: \"574d5244-06f3-49f9-b8b8-93bd57d4fc35\") " pod="openstack/ironic-db-sync-jz2g4" Dec 10 12:35:02 crc kubenswrapper[4689]: I1210 12:35:02.282277 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/574d5244-06f3-49f9-b8b8-93bd57d4fc35-etc-podinfo\") pod \"ironic-db-sync-jz2g4\" (UID: \"574d5244-06f3-49f9-b8b8-93bd57d4fc35\") " pod="openstack/ironic-db-sync-jz2g4" Dec 10 12:35:02 crc kubenswrapper[4689]: I1210 12:35:02.282366 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xphhh\" (UniqueName: \"kubernetes.io/projected/574d5244-06f3-49f9-b8b8-93bd57d4fc35-kube-api-access-xphhh\") pod \"ironic-db-sync-jz2g4\" (UID: \"574d5244-06f3-49f9-b8b8-93bd57d4fc35\") " pod="openstack/ironic-db-sync-jz2g4" Dec 10 12:35:02 crc kubenswrapper[4689]: I1210 12:35:02.282543 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/574d5244-06f3-49f9-b8b8-93bd57d4fc35-scripts\") pod \"ironic-db-sync-jz2g4\" (UID: \"574d5244-06f3-49f9-b8b8-93bd57d4fc35\") " pod="openstack/ironic-db-sync-jz2g4" Dec 10 12:35:02 crc kubenswrapper[4689]: I1210 12:35:02.282617 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/574d5244-06f3-49f9-b8b8-93bd57d4fc35-config-data-merged\") pod \"ironic-db-sync-jz2g4\" (UID: \"574d5244-06f3-49f9-b8b8-93bd57d4fc35\") " pod="openstack/ironic-db-sync-jz2g4" Dec 10 12:35:02 crc kubenswrapper[4689]: I1210 12:35:02.282679 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/574d5244-06f3-49f9-b8b8-93bd57d4fc35-config-data\") pod \"ironic-db-sync-jz2g4\" (UID: \"574d5244-06f3-49f9-b8b8-93bd57d4fc35\") " pod="openstack/ironic-db-sync-jz2g4" Dec 10 12:35:02 crc kubenswrapper[4689]: I1210 12:35:02.283837 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/574d5244-06f3-49f9-b8b8-93bd57d4fc35-config-data-merged\") pod \"ironic-db-sync-jz2g4\" (UID: \"574d5244-06f3-49f9-b8b8-93bd57d4fc35\") " pod="openstack/ironic-db-sync-jz2g4" Dec 10 12:35:02 crc kubenswrapper[4689]: I1210 12:35:02.289661 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/574d5244-06f3-49f9-b8b8-93bd57d4fc35-combined-ca-bundle\") pod \"ironic-db-sync-jz2g4\" (UID: \"574d5244-06f3-49f9-b8b8-93bd57d4fc35\") " 
pod="openstack/ironic-db-sync-jz2g4" Dec 10 12:35:02 crc kubenswrapper[4689]: I1210 12:35:02.289880 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/574d5244-06f3-49f9-b8b8-93bd57d4fc35-config-data\") pod \"ironic-db-sync-jz2g4\" (UID: \"574d5244-06f3-49f9-b8b8-93bd57d4fc35\") " pod="openstack/ironic-db-sync-jz2g4" Dec 10 12:35:02 crc kubenswrapper[4689]: I1210 12:35:02.290231 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/574d5244-06f3-49f9-b8b8-93bd57d4fc35-scripts\") pod \"ironic-db-sync-jz2g4\" (UID: \"574d5244-06f3-49f9-b8b8-93bd57d4fc35\") " pod="openstack/ironic-db-sync-jz2g4" Dec 10 12:35:02 crc kubenswrapper[4689]: I1210 12:35:02.291874 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/574d5244-06f3-49f9-b8b8-93bd57d4fc35-etc-podinfo\") pod \"ironic-db-sync-jz2g4\" (UID: \"574d5244-06f3-49f9-b8b8-93bd57d4fc35\") " pod="openstack/ironic-db-sync-jz2g4" Dec 10 12:35:02 crc kubenswrapper[4689]: I1210 12:35:02.301685 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xphhh\" (UniqueName: \"kubernetes.io/projected/574d5244-06f3-49f9-b8b8-93bd57d4fc35-kube-api-access-xphhh\") pod \"ironic-db-sync-jz2g4\" (UID: \"574d5244-06f3-49f9-b8b8-93bd57d4fc35\") " pod="openstack/ironic-db-sync-jz2g4" Dec 10 12:35:02 crc kubenswrapper[4689]: I1210 12:35:02.331129 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-db-sync-jz2g4" Dec 10 12:35:02 crc kubenswrapper[4689]: I1210 12:35:02.337140 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-8b5c85b87-w7sjh" Dec 10 12:35:02 crc kubenswrapper[4689]: I1210 12:35:02.398633 4689 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5b946c75cc-47knk"] Dec 10 12:35:02 crc kubenswrapper[4689]: I1210 12:35:02.399064 4689 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5b946c75cc-47knk" podUID="f21b10f6-e1dc-44d8-a88f-5b5acb7c4e35" containerName="dnsmasq-dns" containerID="cri-o://71316d508dd23a3da218b19a4de4ff395eecf266f93f0bb071a559700c73d351" gracePeriod=10 Dec 10 12:35:03 crc kubenswrapper[4689]: I1210 12:35:03.435771 4689 generic.go:334] "Generic (PLEG): container finished" podID="f21b10f6-e1dc-44d8-a88f-5b5acb7c4e35" containerID="71316d508dd23a3da218b19a4de4ff395eecf266f93f0bb071a559700c73d351" exitCode=0 Dec 10 12:35:03 crc kubenswrapper[4689]: I1210 12:35:03.435817 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b946c75cc-47knk" event={"ID":"f21b10f6-e1dc-44d8-a88f-5b5acb7c4e35","Type":"ContainerDied","Data":"71316d508dd23a3da218b19a4de4ff395eecf266f93f0bb071a559700c73d351"} Dec 10 12:35:04 crc kubenswrapper[4689]: I1210 12:35:04.189589 4689 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-5b946c75cc-47knk" podUID="f21b10f6-e1dc-44d8-a88f-5b5acb7c4e35" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.124:5353: connect: connection refused" Dec 10 12:35:07 crc kubenswrapper[4689]: I1210 12:35:07.475530 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" 
event={"ID":"8f172078-587d-4ee6-a804-0b5168bdb3dc","Type":"ContainerDied","Data":"391cb163d6c633c21c0a5f1eee2d89c01362e9343307691a8ab4179cb3f6ab0a"} Dec 10 12:35:07 crc kubenswrapper[4689]: I1210 12:35:07.475814 4689 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="391cb163d6c633c21c0a5f1eee2d89c01362e9343307691a8ab4179cb3f6ab0a" Dec 10 12:35:07 crc kubenswrapper[4689]: I1210 12:35:07.477664 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-w7tlj" event={"ID":"048101b4-0993-42cb-bac6-d9384a242856","Type":"ContainerDied","Data":"e84ce448d3ee79acd9ac780276bbe9474ce9636e6d2e35f71b3bdac5930ec81a"} Dec 10 12:35:07 crc kubenswrapper[4689]: I1210 12:35:07.477701 4689 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e84ce448d3ee79acd9ac780276bbe9474ce9636e6d2e35f71b3bdac5930ec81a" Dec 10 12:35:07 crc kubenswrapper[4689]: I1210 12:35:07.501439 4689 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Dec 10 12:35:07 crc kubenswrapper[4689]: I1210 12:35:07.506434 4689 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-w7tlj" Dec 10 12:35:07 crc kubenswrapper[4689]: I1210 12:35:07.676631 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/8f172078-587d-4ee6-a804-0b5168bdb3dc-httpd-run\") pod \"8f172078-587d-4ee6-a804-0b5168bdb3dc\" (UID: \"8f172078-587d-4ee6-a804-0b5168bdb3dc\") " Dec 10 12:35:07 crc kubenswrapper[4689]: I1210 12:35:07.676731 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/048101b4-0993-42cb-bac6-d9384a242856-credential-keys\") pod \"048101b4-0993-42cb-bac6-d9384a242856\" (UID: \"048101b4-0993-42cb-bac6-d9384a242856\") " Dec 10 12:35:07 crc kubenswrapper[4689]: I1210 12:35:07.676844 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b5csm\" (UniqueName: \"kubernetes.io/projected/048101b4-0993-42cb-bac6-d9384a242856-kube-api-access-b5csm\") pod \"048101b4-0993-42cb-bac6-d9384a242856\" (UID: \"048101b4-0993-42cb-bac6-d9384a242856\") " Dec 10 12:35:07 crc kubenswrapper[4689]: I1210 12:35:07.676925 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8f172078-587d-4ee6-a804-0b5168bdb3dc-config-data\") pod \"8f172078-587d-4ee6-a804-0b5168bdb3dc\" (UID: \"8f172078-587d-4ee6-a804-0b5168bdb3dc\") " Dec 10 12:35:07 crc kubenswrapper[4689]: I1210 12:35:07.676962 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f172078-587d-4ee6-a804-0b5168bdb3dc-combined-ca-bundle\") pod \"8f172078-587d-4ee6-a804-0b5168bdb3dc\" (UID: \"8f172078-587d-4ee6-a804-0b5168bdb3dc\") " Dec 10 12:35:07 crc kubenswrapper[4689]: I1210 12:35:07.677013 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"8f172078-587d-4ee6-a804-0b5168bdb3dc\" (UID: \"8f172078-587d-4ee6-a804-0b5168bdb3dc\") " Dec 10 12:35:07 crc kubenswrapper[4689]: I1210 12:35:07.677056 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/048101b4-0993-42cb-bac6-d9384a242856-scripts\") pod \"048101b4-0993-42cb-bac6-d9384a242856\" (UID: \"048101b4-0993-42cb-bac6-d9384a242856\") " Dec 10 12:35:07 crc kubenswrapper[4689]: I1210 12:35:07.677111 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8f172078-587d-4ee6-a804-0b5168bdb3dc-scripts\") pod \"8f172078-587d-4ee6-a804-0b5168bdb3dc\" (UID: \"8f172078-587d-4ee6-a804-0b5168bdb3dc\") " Dec 10 12:35:07 crc kubenswrapper[4689]: I1210 12:35:07.677117 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f172078-587d-4ee6-a804-0b5168bdb3dc-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "8f172078-587d-4ee6-a804-0b5168bdb3dc" (UID: "8f172078-587d-4ee6-a804-0b5168bdb3dc"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 10 12:35:07 crc kubenswrapper[4689]: I1210 12:35:07.677149 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cchkq\" (UniqueName: \"kubernetes.io/projected/8f172078-587d-4ee6-a804-0b5168bdb3dc-kube-api-access-cchkq\") pod \"8f172078-587d-4ee6-a804-0b5168bdb3dc\" (UID: \"8f172078-587d-4ee6-a804-0b5168bdb3dc\") " Dec 10 12:35:07 crc kubenswrapper[4689]: I1210 12:35:07.677181 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8f172078-587d-4ee6-a804-0b5168bdb3dc-internal-tls-certs\") pod \"8f172078-587d-4ee6-a804-0b5168bdb3dc\" (UID: \"8f172078-587d-4ee6-a804-0b5168bdb3dc\") " Dec 10 12:35:07 crc kubenswrapper[4689]: I1210 12:35:07.677207 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8f172078-587d-4ee6-a804-0b5168bdb3dc-logs\") pod \"8f172078-587d-4ee6-a804-0b5168bdb3dc\" (UID: \"8f172078-587d-4ee6-a804-0b5168bdb3dc\") " Dec 10 12:35:07 crc kubenswrapper[4689]: I1210 12:35:07.677260 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/048101b4-0993-42cb-bac6-d9384a242856-config-data\") pod \"048101b4-0993-42cb-bac6-d9384a242856\" (UID: \"048101b4-0993-42cb-bac6-d9384a242856\") " Dec 10 12:35:07 crc kubenswrapper[4689]: I1210 12:35:07.677282 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/048101b4-0993-42cb-bac6-d9384a242856-combined-ca-bundle\") pod \"048101b4-0993-42cb-bac6-d9384a242856\" (UID: \"048101b4-0993-42cb-bac6-d9384a242856\") " Dec 10 12:35:07 crc kubenswrapper[4689]: I1210 12:35:07.677312 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/048101b4-0993-42cb-bac6-d9384a242856-fernet-keys\") pod \"048101b4-0993-42cb-bac6-d9384a242856\" (UID: \"048101b4-0993-42cb-bac6-d9384a242856\") " Dec 10 12:35:07 crc kubenswrapper[4689]: I1210 12:35:07.677838 4689 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/8f172078-587d-4ee6-a804-0b5168bdb3dc-httpd-run\") on node \"crc\" DevicePath \"\"" Dec 10 12:35:07 crc kubenswrapper[4689]: I1210 12:35:07.678533 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f172078-587d-4ee6-a804-0b5168bdb3dc-logs" (OuterVolumeSpecName: "logs") pod 
"8f172078-587d-4ee6-a804-0b5168bdb3dc" (UID: "8f172078-587d-4ee6-a804-0b5168bdb3dc"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 10 12:35:07 crc kubenswrapper[4689]: I1210 12:35:07.681929 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/048101b4-0993-42cb-bac6-d9384a242856-scripts" (OuterVolumeSpecName: "scripts") pod "048101b4-0993-42cb-bac6-d9384a242856" (UID: "048101b4-0993-42cb-bac6-d9384a242856"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 12:35:07 crc kubenswrapper[4689]: I1210 12:35:07.682510 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage05-crc" (OuterVolumeSpecName: "glance") pod "8f172078-587d-4ee6-a804-0b5168bdb3dc" (UID: "8f172078-587d-4ee6-a804-0b5168bdb3dc"). InnerVolumeSpecName "local-storage05-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Dec 10 12:35:07 crc kubenswrapper[4689]: I1210 12:35:07.682944 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/048101b4-0993-42cb-bac6-d9384a242856-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "048101b4-0993-42cb-bac6-d9384a242856" (UID: "048101b4-0993-42cb-bac6-d9384a242856"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 12:35:07 crc kubenswrapper[4689]: I1210 12:35:07.685105 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f172078-587d-4ee6-a804-0b5168bdb3dc-kube-api-access-cchkq" (OuterVolumeSpecName: "kube-api-access-cchkq") pod "8f172078-587d-4ee6-a804-0b5168bdb3dc" (UID: "8f172078-587d-4ee6-a804-0b5168bdb3dc"). InnerVolumeSpecName "kube-api-access-cchkq". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 12:35:07 crc kubenswrapper[4689]: I1210 12:35:07.687231 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/048101b4-0993-42cb-bac6-d9384a242856-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "048101b4-0993-42cb-bac6-d9384a242856" (UID: "048101b4-0993-42cb-bac6-d9384a242856"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 12:35:07 crc kubenswrapper[4689]: I1210 12:35:07.705648 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f172078-587d-4ee6-a804-0b5168bdb3dc-scripts" (OuterVolumeSpecName: "scripts") pod "8f172078-587d-4ee6-a804-0b5168bdb3dc" (UID: "8f172078-587d-4ee6-a804-0b5168bdb3dc"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 12:35:07 crc kubenswrapper[4689]: I1210 12:35:07.707786 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/048101b4-0993-42cb-bac6-d9384a242856-kube-api-access-b5csm" (OuterVolumeSpecName: "kube-api-access-b5csm") pod "048101b4-0993-42cb-bac6-d9384a242856" (UID: "048101b4-0993-42cb-bac6-d9384a242856"). InnerVolumeSpecName "kube-api-access-b5csm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 12:35:07 crc kubenswrapper[4689]: I1210 12:35:07.727927 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/048101b4-0993-42cb-bac6-d9384a242856-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "048101b4-0993-42cb-bac6-d9384a242856" (UID: "048101b4-0993-42cb-bac6-d9384a242856"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 12:35:07 crc kubenswrapper[4689]: I1210 12:35:07.730094 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f172078-587d-4ee6-a804-0b5168bdb3dc-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8f172078-587d-4ee6-a804-0b5168bdb3dc" (UID: "8f172078-587d-4ee6-a804-0b5168bdb3dc"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 12:35:07 crc kubenswrapper[4689]: I1210 12:35:07.738760 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f172078-587d-4ee6-a804-0b5168bdb3dc-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "8f172078-587d-4ee6-a804-0b5168bdb3dc" (UID: "8f172078-587d-4ee6-a804-0b5168bdb3dc"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 12:35:07 crc kubenswrapper[4689]: I1210 12:35:07.753610 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/048101b4-0993-42cb-bac6-d9384a242856-config-data" (OuterVolumeSpecName: "config-data") pod "048101b4-0993-42cb-bac6-d9384a242856" (UID: "048101b4-0993-42cb-bac6-d9384a242856"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 12:35:07 crc kubenswrapper[4689]: I1210 12:35:07.767776 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f172078-587d-4ee6-a804-0b5168bdb3dc-config-data" (OuterVolumeSpecName: "config-data") pod "8f172078-587d-4ee6-a804-0b5168bdb3dc" (UID: "8f172078-587d-4ee6-a804-0b5168bdb3dc"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 12:35:07 crc kubenswrapper[4689]: I1210 12:35:07.779292 4689 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8f172078-587d-4ee6-a804-0b5168bdb3dc-scripts\") on node \"crc\" DevicePath \"\"" Dec 10 12:35:07 crc kubenswrapper[4689]: I1210 12:35:07.779322 4689 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cchkq\" (UniqueName: \"kubernetes.io/projected/8f172078-587d-4ee6-a804-0b5168bdb3dc-kube-api-access-cchkq\") on node \"crc\" DevicePath \"\"" Dec 10 12:35:07 crc kubenswrapper[4689]: I1210 12:35:07.779334 4689 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8f172078-587d-4ee6-a804-0b5168bdb3dc-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 10 12:35:07 crc kubenswrapper[4689]: I1210 12:35:07.779344 4689 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8f172078-587d-4ee6-a804-0b5168bdb3dc-logs\") on node \"crc\" DevicePath \"\"" Dec 10 12:35:07 crc kubenswrapper[4689]: I1210 12:35:07.779353 4689 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/048101b4-0993-42cb-bac6-d9384a242856-config-data\") on node \"crc\" DevicePath \"\"" Dec 10 12:35:07 crc kubenswrapper[4689]: I1210 12:35:07.779362 4689 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/048101b4-0993-42cb-bac6-d9384a242856-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 10 12:35:07 crc kubenswrapper[4689]: I1210 12:35:07.779370 4689 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/048101b4-0993-42cb-bac6-d9384a242856-fernet-keys\") on node \"crc\" DevicePath \"\"" Dec 10 12:35:07 crc kubenswrapper[4689]: I1210 12:35:07.779381 4689 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/048101b4-0993-42cb-bac6-d9384a242856-credential-keys\") on node \"crc\" DevicePath \"\"" Dec 10 12:35:07 crc kubenswrapper[4689]: I1210 12:35:07.779389 4689 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b5csm\" (UniqueName: \"kubernetes.io/projected/048101b4-0993-42cb-bac6-d9384a242856-kube-api-access-b5csm\") on node \"crc\" DevicePath \"\"" Dec 10 12:35:07 crc kubenswrapper[4689]: I1210 12:35:07.779399 4689 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8f172078-587d-4ee6-a804-0b5168bdb3dc-config-data\") on node \"crc\" DevicePath \"\"" Dec 10 12:35:07 crc kubenswrapper[4689]: I1210 12:35:07.779407 4689 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f172078-587d-4ee6-a804-0b5168bdb3dc-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 10 12:35:07 crc kubenswrapper[4689]: I1210 12:35:07.779436 4689 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") on node \"crc\" " Dec 10 12:35:07 crc kubenswrapper[4689]: I1210 12:35:07.779445 4689 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/048101b4-0993-42cb-bac6-d9384a242856-scripts\") on node \"crc\" DevicePath \"\"" Dec 10 12:35:07 crc kubenswrapper[4689]: I1210 12:35:07.796192 4689 
operation_generator.go:917] UnmountDevice succeeded for volume "local-storage05-crc" (UniqueName: "kubernetes.io/local-volume/local-storage05-crc") on node "crc" Dec 10 12:35:07 crc kubenswrapper[4689]: I1210 12:35:07.881506 4689 reconciler_common.go:293] "Volume detached for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") on node \"crc\" DevicePath \"\"" Dec 10 12:35:08 crc kubenswrapper[4689]: I1210 12:35:08.484053 4689 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-w7tlj" Dec 10 12:35:08 crc kubenswrapper[4689]: I1210 12:35:08.484114 4689 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Dec 10 12:35:08 crc kubenswrapper[4689]: I1210 12:35:08.528877 4689 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 10 12:35:08 crc kubenswrapper[4689]: I1210 12:35:08.536643 4689 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 10 12:35:08 crc kubenswrapper[4689]: I1210 12:35:08.569832 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 10 12:35:08 crc kubenswrapper[4689]: E1210 12:35:08.570261 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8f172078-587d-4ee6-a804-0b5168bdb3dc" containerName="glance-httpd" Dec 10 12:35:08 crc kubenswrapper[4689]: I1210 12:35:08.570279 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="8f172078-587d-4ee6-a804-0b5168bdb3dc" containerName="glance-httpd" Dec 10 12:35:08 crc kubenswrapper[4689]: E1210 12:35:08.570290 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8f172078-587d-4ee6-a804-0b5168bdb3dc" containerName="glance-log" Dec 10 12:35:08 crc kubenswrapper[4689]: I1210 12:35:08.570297 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="8f172078-587d-4ee6-a804-0b5168bdb3dc" containerName="glance-log" Dec 10 12:35:08 crc kubenswrapper[4689]: E1210 12:35:08.570331 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="048101b4-0993-42cb-bac6-d9384a242856" containerName="keystone-bootstrap" Dec 10 12:35:08 crc kubenswrapper[4689]: I1210 12:35:08.570340 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="048101b4-0993-42cb-bac6-d9384a242856" containerName="keystone-bootstrap" Dec 10 12:35:08 crc kubenswrapper[4689]: I1210 12:35:08.570499 4689 memory_manager.go:354] "RemoveStaleState removing state" podUID="8f172078-587d-4ee6-a804-0b5168bdb3dc" containerName="glance-log" Dec 10 12:35:08 crc kubenswrapper[4689]: I1210 12:35:08.570509 4689 memory_manager.go:354] "RemoveStaleState removing state" podUID="048101b4-0993-42cb-bac6-d9384a242856" containerName="keystone-bootstrap" Dec 10 12:35:08 crc kubenswrapper[4689]: I1210 12:35:08.570527 4689 memory_manager.go:354] "RemoveStaleState removing state" podUID="8f172078-587d-4ee6-a804-0b5168bdb3dc" containerName="glance-httpd" Dec 10 12:35:08 crc kubenswrapper[4689]: I1210 12:35:08.571560 4689 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Dec 10 12:35:08 crc kubenswrapper[4689]: I1210 12:35:08.583043 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 10 12:35:08 crc kubenswrapper[4689]: I1210 12:35:08.587318 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Dec 10 12:35:08 crc kubenswrapper[4689]: I1210 12:35:08.589354 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Dec 10 12:35:08 crc kubenswrapper[4689]: I1210 12:35:08.614962 4689 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-w7tlj"] Dec 10 12:35:08 crc kubenswrapper[4689]: I1210 12:35:08.629149 4689 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-w7tlj"] Dec 10 12:35:08 crc kubenswrapper[4689]: I1210 12:35:08.693947 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-internal-api-0\" (UID: \"cf5e0487-a380-424f-aa29-f815b50550db\") " pod="openstack/glance-default-internal-api-0" Dec 10 12:35:08 crc kubenswrapper[4689]: I1210 12:35:08.694025 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cf5e0487-a380-424f-aa29-f815b50550db-config-data\") pod \"glance-default-internal-api-0\" (UID: \"cf5e0487-a380-424f-aa29-f815b50550db\") " pod="openstack/glance-default-internal-api-0" Dec 10 12:35:08 crc kubenswrapper[4689]: I1210 12:35:08.694128 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/cf5e0487-a380-424f-aa29-f815b50550db-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"cf5e0487-a380-424f-aa29-f815b50550db\") " pod="openstack/glance-default-internal-api-0" Dec 10 12:35:08 crc kubenswrapper[4689]: I1210 12:35:08.694195 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cf5e0487-a380-424f-aa29-f815b50550db-logs\") pod \"glance-default-internal-api-0\" (UID: \"cf5e0487-a380-424f-aa29-f815b50550db\") " pod="openstack/glance-default-internal-api-0" Dec 10 12:35:08 crc kubenswrapper[4689]: I1210 12:35:08.694223 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cf5e0487-a380-424f-aa29-f815b50550db-scripts\") pod \"glance-default-internal-api-0\" (UID: \"cf5e0487-a380-424f-aa29-f815b50550db\") " pod="openstack/glance-default-internal-api-0" Dec 10 12:35:08 crc kubenswrapper[4689]: I1210 12:35:08.694435 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cf5e0487-a380-424f-aa29-f815b50550db-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"cf5e0487-a380-424f-aa29-f815b50550db\") " pod="openstack/glance-default-internal-api-0" Dec 10 12:35:08 crc kubenswrapper[4689]: I1210 12:35:08.694628 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/cf5e0487-a380-424f-aa29-f815b50550db-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"cf5e0487-a380-424f-aa29-f815b50550db\") " pod="openstack/glance-default-internal-api-0" Dec 10 12:35:08 crc kubenswrapper[4689]: I1210 12:35:08.694769 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-44fkc\" (UniqueName: \"kubernetes.io/projected/cf5e0487-a380-424f-aa29-f815b50550db-kube-api-access-44fkc\") pod \"glance-default-internal-api-0\" (UID: \"cf5e0487-a380-424f-aa29-f815b50550db\") " pod="openstack/glance-default-internal-api-0" Dec 10 12:35:08 crc kubenswrapper[4689]: I1210 12:35:08.706642 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-5vjjb"] Dec 10 12:35:08 crc kubenswrapper[4689]: I1210 12:35:08.707884 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-5vjjb" Dec 10 12:35:08 crc kubenswrapper[4689]: I1210 12:35:08.710547 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Dec 10 12:35:08 crc kubenswrapper[4689]: I1210 12:35:08.710753 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Dec 10 12:35:08 crc kubenswrapper[4689]: I1210 12:35:08.711411 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-5qbhw" Dec 10 12:35:08 crc kubenswrapper[4689]: I1210 12:35:08.711729 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Dec 10 12:35:08 crc kubenswrapper[4689]: I1210 12:35:08.719508 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-5vjjb"] Dec 10 12:35:08 crc kubenswrapper[4689]: I1210 12:35:08.797031 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c0c1f5b6-e80c-4950-9b6e-181733099c57-config-data\") pod \"keystone-bootstrap-5vjjb\" (UID: \"c0c1f5b6-e80c-4950-9b6e-181733099c57\") " pod="openstack/keystone-bootstrap-5vjjb" Dec 10 12:35:08 crc kubenswrapper[4689]: I1210 12:35:08.797095 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-internal-api-0\" (UID: \"cf5e0487-a380-424f-aa29-f815b50550db\") " pod="openstack/glance-default-internal-api-0" Dec 10 12:35:08 crc kubenswrapper[4689]: I1210 12:35:08.797122 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cf5e0487-a380-424f-aa29-f815b50550db-config-data\") pod \"glance-default-internal-api-0\" (UID: \"cf5e0487-a380-424f-aa29-f815b50550db\") " pod="openstack/glance-default-internal-api-0" Dec 10 12:35:08 crc kubenswrapper[4689]: I1210 12:35:08.797175 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6m2wm\" (UniqueName: \"kubernetes.io/projected/c0c1f5b6-e80c-4950-9b6e-181733099c57-kube-api-access-6m2wm\") pod \"keystone-bootstrap-5vjjb\" (UID: \"c0c1f5b6-e80c-4950-9b6e-181733099c57\") " pod="openstack/keystone-bootstrap-5vjjb" Dec 10 12:35:08 crc kubenswrapper[4689]: I1210 12:35:08.797225 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: 
\"kubernetes.io/empty-dir/cf5e0487-a380-424f-aa29-f815b50550db-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"cf5e0487-a380-424f-aa29-f815b50550db\") " pod="openstack/glance-default-internal-api-0" Dec 10 12:35:08 crc kubenswrapper[4689]: I1210 12:35:08.797430 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/c0c1f5b6-e80c-4950-9b6e-181733099c57-credential-keys\") pod \"keystone-bootstrap-5vjjb\" (UID: \"c0c1f5b6-e80c-4950-9b6e-181733099c57\") " pod="openstack/keystone-bootstrap-5vjjb" Dec 10 12:35:08 crc kubenswrapper[4689]: I1210 12:35:08.797461 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cf5e0487-a380-424f-aa29-f815b50550db-logs\") pod \"glance-default-internal-api-0\" (UID: \"cf5e0487-a380-424f-aa29-f815b50550db\") " pod="openstack/glance-default-internal-api-0" Dec 10 12:35:08 crc kubenswrapper[4689]: I1210 12:35:08.797479 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/c0c1f5b6-e80c-4950-9b6e-181733099c57-fernet-keys\") pod \"keystone-bootstrap-5vjjb\" (UID: \"c0c1f5b6-e80c-4950-9b6e-181733099c57\") " pod="openstack/keystone-bootstrap-5vjjb" Dec 10 12:35:08 crc kubenswrapper[4689]: I1210 12:35:08.797498 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cf5e0487-a380-424f-aa29-f815b50550db-scripts\") pod \"glance-default-internal-api-0\" (UID: \"cf5e0487-a380-424f-aa29-f815b50550db\") " pod="openstack/glance-default-internal-api-0" Dec 10 12:35:08 crc kubenswrapper[4689]: I1210 12:35:08.797519 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cf5e0487-a380-424f-aa29-f815b50550db-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"cf5e0487-a380-424f-aa29-f815b50550db\") " pod="openstack/glance-default-internal-api-0" Dec 10 12:35:08 crc kubenswrapper[4689]: I1210 12:35:08.797580 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c0c1f5b6-e80c-4950-9b6e-181733099c57-combined-ca-bundle\") pod \"keystone-bootstrap-5vjjb\" (UID: \"c0c1f5b6-e80c-4950-9b6e-181733099c57\") " pod="openstack/keystone-bootstrap-5vjjb" Dec 10 12:35:08 crc kubenswrapper[4689]: I1210 12:35:08.797603 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/cf5e0487-a380-424f-aa29-f815b50550db-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"cf5e0487-a380-424f-aa29-f815b50550db\") " pod="openstack/glance-default-internal-api-0" Dec 10 12:35:08 crc kubenswrapper[4689]: I1210 12:35:08.797640 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c0c1f5b6-e80c-4950-9b6e-181733099c57-scripts\") pod \"keystone-bootstrap-5vjjb\" (UID: \"c0c1f5b6-e80c-4950-9b6e-181733099c57\") " pod="openstack/keystone-bootstrap-5vjjb" Dec 10 12:35:08 crc kubenswrapper[4689]: I1210 12:35:08.797665 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-44fkc\" (UniqueName: 
\"kubernetes.io/projected/cf5e0487-a380-424f-aa29-f815b50550db-kube-api-access-44fkc\") pod \"glance-default-internal-api-0\" (UID: \"cf5e0487-a380-424f-aa29-f815b50550db\") " pod="openstack/glance-default-internal-api-0" Dec 10 12:35:08 crc kubenswrapper[4689]: I1210 12:35:08.797742 4689 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-internal-api-0\" (UID: \"cf5e0487-a380-424f-aa29-f815b50550db\") device mount path \"/mnt/openstack/pv05\"" pod="openstack/glance-default-internal-api-0" Dec 10 12:35:08 crc kubenswrapper[4689]: I1210 12:35:08.798177 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/cf5e0487-a380-424f-aa29-f815b50550db-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"cf5e0487-a380-424f-aa29-f815b50550db\") " pod="openstack/glance-default-internal-api-0" Dec 10 12:35:08 crc kubenswrapper[4689]: I1210 12:35:08.798263 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cf5e0487-a380-424f-aa29-f815b50550db-logs\") pod \"glance-default-internal-api-0\" (UID: \"cf5e0487-a380-424f-aa29-f815b50550db\") " pod="openstack/glance-default-internal-api-0" Dec 10 12:35:08 crc kubenswrapper[4689]: I1210 12:35:08.803925 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/cf5e0487-a380-424f-aa29-f815b50550db-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"cf5e0487-a380-424f-aa29-f815b50550db\") " pod="openstack/glance-default-internal-api-0" Dec 10 12:35:08 crc kubenswrapper[4689]: I1210 12:35:08.804554 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cf5e0487-a380-424f-aa29-f815b50550db-scripts\") pod \"glance-default-internal-api-0\" (UID: \"cf5e0487-a380-424f-aa29-f815b50550db\") " pod="openstack/glance-default-internal-api-0" Dec 10 12:35:08 crc kubenswrapper[4689]: I1210 12:35:08.805082 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cf5e0487-a380-424f-aa29-f815b50550db-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"cf5e0487-a380-424f-aa29-f815b50550db\") " pod="openstack/glance-default-internal-api-0" Dec 10 12:35:08 crc kubenswrapper[4689]: I1210 12:35:08.816526 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cf5e0487-a380-424f-aa29-f815b50550db-config-data\") pod \"glance-default-internal-api-0\" (UID: \"cf5e0487-a380-424f-aa29-f815b50550db\") " pod="openstack/glance-default-internal-api-0" Dec 10 12:35:08 crc kubenswrapper[4689]: I1210 12:35:08.818344 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-44fkc\" (UniqueName: \"kubernetes.io/projected/cf5e0487-a380-424f-aa29-f815b50550db-kube-api-access-44fkc\") pod \"glance-default-internal-api-0\" (UID: \"cf5e0487-a380-424f-aa29-f815b50550db\") " pod="openstack/glance-default-internal-api-0" Dec 10 12:35:08 crc kubenswrapper[4689]: I1210 12:35:08.824564 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-internal-api-0\" (UID: 
\"cf5e0487-a380-424f-aa29-f815b50550db\") " pod="openstack/glance-default-internal-api-0" Dec 10 12:35:08 crc kubenswrapper[4689]: I1210 12:35:08.898715 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Dec 10 12:35:08 crc kubenswrapper[4689]: I1210 12:35:08.899206 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c0c1f5b6-e80c-4950-9b6e-181733099c57-config-data\") pod \"keystone-bootstrap-5vjjb\" (UID: \"c0c1f5b6-e80c-4950-9b6e-181733099c57\") " pod="openstack/keystone-bootstrap-5vjjb" Dec 10 12:35:08 crc kubenswrapper[4689]: I1210 12:35:08.899274 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6m2wm\" (UniqueName: \"kubernetes.io/projected/c0c1f5b6-e80c-4950-9b6e-181733099c57-kube-api-access-6m2wm\") pod \"keystone-bootstrap-5vjjb\" (UID: \"c0c1f5b6-e80c-4950-9b6e-181733099c57\") " pod="openstack/keystone-bootstrap-5vjjb" Dec 10 12:35:08 crc kubenswrapper[4689]: I1210 12:35:08.899306 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/c0c1f5b6-e80c-4950-9b6e-181733099c57-credential-keys\") pod \"keystone-bootstrap-5vjjb\" (UID: \"c0c1f5b6-e80c-4950-9b6e-181733099c57\") " pod="openstack/keystone-bootstrap-5vjjb" Dec 10 12:35:08 crc kubenswrapper[4689]: I1210 12:35:08.899338 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/c0c1f5b6-e80c-4950-9b6e-181733099c57-fernet-keys\") pod \"keystone-bootstrap-5vjjb\" (UID: \"c0c1f5b6-e80c-4950-9b6e-181733099c57\") " pod="openstack/keystone-bootstrap-5vjjb" Dec 10 12:35:08 crc kubenswrapper[4689]: I1210 12:35:08.899393 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c0c1f5b6-e80c-4950-9b6e-181733099c57-combined-ca-bundle\") pod \"keystone-bootstrap-5vjjb\" (UID: \"c0c1f5b6-e80c-4950-9b6e-181733099c57\") " pod="openstack/keystone-bootstrap-5vjjb" Dec 10 12:35:08 crc kubenswrapper[4689]: I1210 12:35:08.899418 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c0c1f5b6-e80c-4950-9b6e-181733099c57-scripts\") pod \"keystone-bootstrap-5vjjb\" (UID: \"c0c1f5b6-e80c-4950-9b6e-181733099c57\") " pod="openstack/keystone-bootstrap-5vjjb" Dec 10 12:35:08 crc kubenswrapper[4689]: I1210 12:35:08.904748 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c0c1f5b6-e80c-4950-9b6e-181733099c57-scripts\") pod \"keystone-bootstrap-5vjjb\" (UID: \"c0c1f5b6-e80c-4950-9b6e-181733099c57\") " pod="openstack/keystone-bootstrap-5vjjb" Dec 10 12:35:08 crc kubenswrapper[4689]: I1210 12:35:08.904918 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/c0c1f5b6-e80c-4950-9b6e-181733099c57-fernet-keys\") pod \"keystone-bootstrap-5vjjb\" (UID: \"c0c1f5b6-e80c-4950-9b6e-181733099c57\") " pod="openstack/keystone-bootstrap-5vjjb" Dec 10 12:35:08 crc kubenswrapper[4689]: I1210 12:35:08.905228 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c0c1f5b6-e80c-4950-9b6e-181733099c57-combined-ca-bundle\") pod \"keystone-bootstrap-5vjjb\" 
(UID: \"c0c1f5b6-e80c-4950-9b6e-181733099c57\") " pod="openstack/keystone-bootstrap-5vjjb" Dec 10 12:35:08 crc kubenswrapper[4689]: I1210 12:35:08.906650 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/c0c1f5b6-e80c-4950-9b6e-181733099c57-credential-keys\") pod \"keystone-bootstrap-5vjjb\" (UID: \"c0c1f5b6-e80c-4950-9b6e-181733099c57\") " pod="openstack/keystone-bootstrap-5vjjb" Dec 10 12:35:08 crc kubenswrapper[4689]: I1210 12:35:08.907630 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c0c1f5b6-e80c-4950-9b6e-181733099c57-config-data\") pod \"keystone-bootstrap-5vjjb\" (UID: \"c0c1f5b6-e80c-4950-9b6e-181733099c57\") " pod="openstack/keystone-bootstrap-5vjjb" Dec 10 12:35:08 crc kubenswrapper[4689]: I1210 12:35:08.917183 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6m2wm\" (UniqueName: \"kubernetes.io/projected/c0c1f5b6-e80c-4950-9b6e-181733099c57-kube-api-access-6m2wm\") pod \"keystone-bootstrap-5vjjb\" (UID: \"c0c1f5b6-e80c-4950-9b6e-181733099c57\") " pod="openstack/keystone-bootstrap-5vjjb" Dec 10 12:35:09 crc kubenswrapper[4689]: I1210 12:35:09.028998 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-5vjjb" Dec 10 12:35:09 crc kubenswrapper[4689]: I1210 12:35:09.189839 4689 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-5b946c75cc-47knk" podUID="f21b10f6-e1dc-44d8-a88f-5b5acb7c4e35" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.124:5353: connect: connection refused" Dec 10 12:35:10 crc kubenswrapper[4689]: I1210 12:35:10.507144 4689 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="048101b4-0993-42cb-bac6-d9384a242856" path="/var/lib/kubelet/pods/048101b4-0993-42cb-bac6-d9384a242856/volumes" Dec 10 12:35:10 crc kubenswrapper[4689]: I1210 12:35:10.508766 4689 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f172078-587d-4ee6-a804-0b5168bdb3dc" path="/var/lib/kubelet/pods/8f172078-587d-4ee6-a804-0b5168bdb3dc/volumes" Dec 10 12:35:14 crc kubenswrapper[4689]: I1210 12:35:14.189960 4689 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-5b946c75cc-47knk" podUID="f21b10f6-e1dc-44d8-a88f-5b5acb7c4e35" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.124:5353: connect: connection refused" Dec 10 12:35:14 crc kubenswrapper[4689]: I1210 12:35:14.190389 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5b946c75cc-47knk" Dec 10 12:35:16 crc kubenswrapper[4689]: I1210 12:35:16.037298 4689 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Dec 10 12:35:16 crc kubenswrapper[4689]: I1210 12:35:16.131145 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/55350420-790f-4f9e-8ba6-3872bd55ceb8-scripts\") pod \"55350420-790f-4f9e-8ba6-3872bd55ceb8\" (UID: \"55350420-790f-4f9e-8ba6-3872bd55ceb8\") " Dec 10 12:35:16 crc kubenswrapper[4689]: I1210 12:35:16.131249 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/55350420-790f-4f9e-8ba6-3872bd55ceb8-public-tls-certs\") pod \"55350420-790f-4f9e-8ba6-3872bd55ceb8\" (UID: \"55350420-790f-4f9e-8ba6-3872bd55ceb8\") " Dec 10 12:35:16 crc kubenswrapper[4689]: I1210 12:35:16.131426 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/55350420-790f-4f9e-8ba6-3872bd55ceb8-logs\") pod \"55350420-790f-4f9e-8ba6-3872bd55ceb8\" (UID: \"55350420-790f-4f9e-8ba6-3872bd55ceb8\") " Dec 10 12:35:16 crc kubenswrapper[4689]: I1210 12:35:16.131456 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/55350420-790f-4f9e-8ba6-3872bd55ceb8-httpd-run\") pod \"55350420-790f-4f9e-8ba6-3872bd55ceb8\" (UID: \"55350420-790f-4f9e-8ba6-3872bd55ceb8\") " Dec 10 12:35:16 crc kubenswrapper[4689]: I1210 12:35:16.131559 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"55350420-790f-4f9e-8ba6-3872bd55ceb8\" (UID: \"55350420-790f-4f9e-8ba6-3872bd55ceb8\") " Dec 10 12:35:16 crc kubenswrapper[4689]: I1210 12:35:16.131605 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7rxmh\" (UniqueName: \"kubernetes.io/projected/55350420-790f-4f9e-8ba6-3872bd55ceb8-kube-api-access-7rxmh\") pod \"55350420-790f-4f9e-8ba6-3872bd55ceb8\" (UID: \"55350420-790f-4f9e-8ba6-3872bd55ceb8\") " Dec 10 12:35:16 crc kubenswrapper[4689]: I1210 12:35:16.131650 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/55350420-790f-4f9e-8ba6-3872bd55ceb8-combined-ca-bundle\") pod \"55350420-790f-4f9e-8ba6-3872bd55ceb8\" (UID: \"55350420-790f-4f9e-8ba6-3872bd55ceb8\") " Dec 10 12:35:16 crc kubenswrapper[4689]: I1210 12:35:16.131675 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/55350420-790f-4f9e-8ba6-3872bd55ceb8-config-data\") pod \"55350420-790f-4f9e-8ba6-3872bd55ceb8\" (UID: \"55350420-790f-4f9e-8ba6-3872bd55ceb8\") " Dec 10 12:35:16 crc kubenswrapper[4689]: I1210 12:35:16.139342 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/55350420-790f-4f9e-8ba6-3872bd55ceb8-scripts" (OuterVolumeSpecName: "scripts") pod "55350420-790f-4f9e-8ba6-3872bd55ceb8" (UID: "55350420-790f-4f9e-8ba6-3872bd55ceb8"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 12:35:16 crc kubenswrapper[4689]: I1210 12:35:16.139728 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/55350420-790f-4f9e-8ba6-3872bd55ceb8-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "55350420-790f-4f9e-8ba6-3872bd55ceb8" (UID: "55350420-790f-4f9e-8ba6-3872bd55ceb8"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 10 12:35:16 crc kubenswrapper[4689]: I1210 12:35:16.139926 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/55350420-790f-4f9e-8ba6-3872bd55ceb8-logs" (OuterVolumeSpecName: "logs") pod "55350420-790f-4f9e-8ba6-3872bd55ceb8" (UID: "55350420-790f-4f9e-8ba6-3872bd55ceb8"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 10 12:35:16 crc kubenswrapper[4689]: I1210 12:35:16.143069 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage08-crc" (OuterVolumeSpecName: "glance") pod "55350420-790f-4f9e-8ba6-3872bd55ceb8" (UID: "55350420-790f-4f9e-8ba6-3872bd55ceb8"). InnerVolumeSpecName "local-storage08-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Dec 10 12:35:16 crc kubenswrapper[4689]: I1210 12:35:16.143137 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/55350420-790f-4f9e-8ba6-3872bd55ceb8-kube-api-access-7rxmh" (OuterVolumeSpecName: "kube-api-access-7rxmh") pod "55350420-790f-4f9e-8ba6-3872bd55ceb8" (UID: "55350420-790f-4f9e-8ba6-3872bd55ceb8"). InnerVolumeSpecName "kube-api-access-7rxmh". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 12:35:16 crc kubenswrapper[4689]: I1210 12:35:16.191544 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/55350420-790f-4f9e-8ba6-3872bd55ceb8-config-data" (OuterVolumeSpecName: "config-data") pod "55350420-790f-4f9e-8ba6-3872bd55ceb8" (UID: "55350420-790f-4f9e-8ba6-3872bd55ceb8"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 12:35:16 crc kubenswrapper[4689]: I1210 12:35:16.192150 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/55350420-790f-4f9e-8ba6-3872bd55ceb8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "55350420-790f-4f9e-8ba6-3872bd55ceb8" (UID: "55350420-790f-4f9e-8ba6-3872bd55ceb8"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 12:35:16 crc kubenswrapper[4689]: I1210 12:35:16.195098 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/55350420-790f-4f9e-8ba6-3872bd55ceb8-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "55350420-790f-4f9e-8ba6-3872bd55ceb8" (UID: "55350420-790f-4f9e-8ba6-3872bd55ceb8"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 12:35:16 crc kubenswrapper[4689]: I1210 12:35:16.237640 4689 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/55350420-790f-4f9e-8ba6-3872bd55ceb8-logs\") on node \"crc\" DevicePath \"\"" Dec 10 12:35:16 crc kubenswrapper[4689]: I1210 12:35:16.237703 4689 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/55350420-790f-4f9e-8ba6-3872bd55ceb8-httpd-run\") on node \"crc\" DevicePath \"\"" Dec 10 12:35:16 crc kubenswrapper[4689]: I1210 12:35:16.237745 4689 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" " Dec 10 12:35:16 crc kubenswrapper[4689]: I1210 12:35:16.237756 4689 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7rxmh\" (UniqueName: \"kubernetes.io/projected/55350420-790f-4f9e-8ba6-3872bd55ceb8-kube-api-access-7rxmh\") on node \"crc\" DevicePath \"\"" Dec 10 12:35:16 crc kubenswrapper[4689]: I1210 12:35:16.237771 4689 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/55350420-790f-4f9e-8ba6-3872bd55ceb8-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 10 12:35:16 crc kubenswrapper[4689]: I1210 12:35:16.237783 4689 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/55350420-790f-4f9e-8ba6-3872bd55ceb8-config-data\") on node \"crc\" DevicePath \"\"" Dec 10 12:35:16 crc kubenswrapper[4689]: I1210 12:35:16.237791 4689 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/55350420-790f-4f9e-8ba6-3872bd55ceb8-scripts\") on node \"crc\" DevicePath \"\"" Dec 10 12:35:16 crc kubenswrapper[4689]: I1210 12:35:16.237803 4689 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/55350420-790f-4f9e-8ba6-3872bd55ceb8-public-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 10 12:35:16 crc kubenswrapper[4689]: I1210 12:35:16.253474 4689 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage08-crc" (UniqueName: "kubernetes.io/local-volume/local-storage08-crc") on node "crc" Dec 10 12:35:16 crc kubenswrapper[4689]: I1210 12:35:16.339225 4689 reconciler_common.go:293] "Volume detached for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" DevicePath \"\"" Dec 10 12:35:16 crc kubenswrapper[4689]: I1210 12:35:16.559121 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"55350420-790f-4f9e-8ba6-3872bd55ceb8","Type":"ContainerDied","Data":"40cd5674915e25b25713bb52496d8058231eb10acb4765a06a6dcfb20241df9e"} Dec 10 12:35:16 crc kubenswrapper[4689]: I1210 12:35:16.559752 4689 scope.go:117] "RemoveContainer" containerID="4cf2de413c7cf48e752c2e995d641639d47bb2476f0d844dd3eeff9fd5dd669c" Dec 10 12:35:16 crc kubenswrapper[4689]: I1210 12:35:16.560190 4689 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Dec 10 12:35:16 crc kubenswrapper[4689]: I1210 12:35:16.604115 4689 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Dec 10 12:35:16 crc kubenswrapper[4689]: I1210 12:35:16.615642 4689 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Dec 10 12:35:16 crc kubenswrapper[4689]: I1210 12:35:16.625162 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Dec 10 12:35:16 crc kubenswrapper[4689]: E1210 12:35:16.625519 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="55350420-790f-4f9e-8ba6-3872bd55ceb8" containerName="glance-log" Dec 10 12:35:16 crc kubenswrapper[4689]: I1210 12:35:16.625536 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="55350420-790f-4f9e-8ba6-3872bd55ceb8" containerName="glance-log" Dec 10 12:35:16 crc kubenswrapper[4689]: E1210 12:35:16.625571 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="55350420-790f-4f9e-8ba6-3872bd55ceb8" containerName="glance-httpd" Dec 10 12:35:16 crc kubenswrapper[4689]: I1210 12:35:16.625577 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="55350420-790f-4f9e-8ba6-3872bd55ceb8" containerName="glance-httpd" Dec 10 12:35:16 crc kubenswrapper[4689]: I1210 12:35:16.625732 4689 memory_manager.go:354] "RemoveStaleState removing state" podUID="55350420-790f-4f9e-8ba6-3872bd55ceb8" containerName="glance-httpd" Dec 10 12:35:16 crc kubenswrapper[4689]: I1210 12:35:16.625759 4689 memory_manager.go:354] "RemoveStaleState removing state" podUID="55350420-790f-4f9e-8ba6-3872bd55ceb8" containerName="glance-log" Dec 10 12:35:16 crc kubenswrapper[4689]: I1210 12:35:16.637225 4689 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Dec 10 12:35:16 crc kubenswrapper[4689]: I1210 12:35:16.640506 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Dec 10 12:35:16 crc kubenswrapper[4689]: I1210 12:35:16.643317 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Dec 10 12:35:16 crc kubenswrapper[4689]: I1210 12:35:16.659436 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Dec 10 12:35:16 crc kubenswrapper[4689]: I1210 12:35:16.744823 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/aac151fb-e17c-4bd3-b2c9-bd1ed2e01c7c-logs\") pod \"glance-default-external-api-0\" (UID: \"aac151fb-e17c-4bd3-b2c9-bd1ed2e01c7c\") " pod="openstack/glance-default-external-api-0" Dec 10 12:35:16 crc kubenswrapper[4689]: I1210 12:35:16.745046 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/aac151fb-e17c-4bd3-b2c9-bd1ed2e01c7c-scripts\") pod \"glance-default-external-api-0\" (UID: \"aac151fb-e17c-4bd3-b2c9-bd1ed2e01c7c\") " pod="openstack/glance-default-external-api-0" Dec 10 12:35:16 crc kubenswrapper[4689]: I1210 12:35:16.745090 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xkwvg\" (UniqueName: \"kubernetes.io/projected/aac151fb-e17c-4bd3-b2c9-bd1ed2e01c7c-kube-api-access-xkwvg\") pod \"glance-default-external-api-0\" (UID: \"aac151fb-e17c-4bd3-b2c9-bd1ed2e01c7c\") " pod="openstack/glance-default-external-api-0" Dec 10 12:35:16 crc kubenswrapper[4689]: I1210 12:35:16.745155 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aac151fb-e17c-4bd3-b2c9-bd1ed2e01c7c-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"aac151fb-e17c-4bd3-b2c9-bd1ed2e01c7c\") " pod="openstack/glance-default-external-api-0" Dec 10 12:35:16 crc kubenswrapper[4689]: I1210 12:35:16.745180 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/aac151fb-e17c-4bd3-b2c9-bd1ed2e01c7c-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"aac151fb-e17c-4bd3-b2c9-bd1ed2e01c7c\") " pod="openstack/glance-default-external-api-0" Dec 10 12:35:16 crc kubenswrapper[4689]: I1210 12:35:16.745339 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-external-api-0\" (UID: \"aac151fb-e17c-4bd3-b2c9-bd1ed2e01c7c\") " pod="openstack/glance-default-external-api-0" Dec 10 12:35:16 crc kubenswrapper[4689]: I1210 12:35:16.745396 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/aac151fb-e17c-4bd3-b2c9-bd1ed2e01c7c-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"aac151fb-e17c-4bd3-b2c9-bd1ed2e01c7c\") " pod="openstack/glance-default-external-api-0" Dec 10 12:35:16 crc kubenswrapper[4689]: I1210 12:35:16.745461 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aac151fb-e17c-4bd3-b2c9-bd1ed2e01c7c-config-data\") pod \"glance-default-external-api-0\" (UID: \"aac151fb-e17c-4bd3-b2c9-bd1ed2e01c7c\") " pod="openstack/glance-default-external-api-0" Dec 10 12:35:16 crc kubenswrapper[4689]: I1210 12:35:16.847480 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/aac151fb-e17c-4bd3-b2c9-bd1ed2e01c7c-scripts\") pod \"glance-default-external-api-0\" (UID: \"aac151fb-e17c-4bd3-b2c9-bd1ed2e01c7c\") " pod="openstack/glance-default-external-api-0" Dec 10 12:35:16 crc kubenswrapper[4689]: I1210 12:35:16.848208 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xkwvg\" (UniqueName: \"kubernetes.io/projected/aac151fb-e17c-4bd3-b2c9-bd1ed2e01c7c-kube-api-access-xkwvg\") pod \"glance-default-external-api-0\" (UID: \"aac151fb-e17c-4bd3-b2c9-bd1ed2e01c7c\") " pod="openstack/glance-default-external-api-0" Dec 10 12:35:16 crc kubenswrapper[4689]: I1210 12:35:16.848320 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aac151fb-e17c-4bd3-b2c9-bd1ed2e01c7c-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"aac151fb-e17c-4bd3-b2c9-bd1ed2e01c7c\") " pod="openstack/glance-default-external-api-0" Dec 10 12:35:16 crc kubenswrapper[4689]: I1210 12:35:16.848416 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/aac151fb-e17c-4bd3-b2c9-bd1ed2e01c7c-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"aac151fb-e17c-4bd3-b2c9-bd1ed2e01c7c\") " pod="openstack/glance-default-external-api-0" Dec 10 12:35:16 crc kubenswrapper[4689]: I1210 12:35:16.848501 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-external-api-0\" (UID: \"aac151fb-e17c-4bd3-b2c9-bd1ed2e01c7c\") " pod="openstack/glance-default-external-api-0" Dec 10 12:35:16 crc kubenswrapper[4689]: I1210 12:35:16.848565 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/aac151fb-e17c-4bd3-b2c9-bd1ed2e01c7c-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"aac151fb-e17c-4bd3-b2c9-bd1ed2e01c7c\") " pod="openstack/glance-default-external-api-0" Dec 10 12:35:16 crc kubenswrapper[4689]: I1210 12:35:16.848643 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aac151fb-e17c-4bd3-b2c9-bd1ed2e01c7c-config-data\") pod \"glance-default-external-api-0\" (UID: \"aac151fb-e17c-4bd3-b2c9-bd1ed2e01c7c\") " pod="openstack/glance-default-external-api-0" Dec 10 12:35:16 crc kubenswrapper[4689]: I1210 12:35:16.848787 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/aac151fb-e17c-4bd3-b2c9-bd1ed2e01c7c-logs\") pod \"glance-default-external-api-0\" (UID: \"aac151fb-e17c-4bd3-b2c9-bd1ed2e01c7c\") " pod="openstack/glance-default-external-api-0" Dec 10 12:35:16 crc kubenswrapper[4689]: I1210 12:35:16.848826 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: 
\"kubernetes.io/empty-dir/aac151fb-e17c-4bd3-b2c9-bd1ed2e01c7c-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"aac151fb-e17c-4bd3-b2c9-bd1ed2e01c7c\") " pod="openstack/glance-default-external-api-0" Dec 10 12:35:16 crc kubenswrapper[4689]: I1210 12:35:16.848870 4689 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-external-api-0\" (UID: \"aac151fb-e17c-4bd3-b2c9-bd1ed2e01c7c\") device mount path \"/mnt/openstack/pv08\"" pod="openstack/glance-default-external-api-0" Dec 10 12:35:16 crc kubenswrapper[4689]: I1210 12:35:16.849556 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/aac151fb-e17c-4bd3-b2c9-bd1ed2e01c7c-logs\") pod \"glance-default-external-api-0\" (UID: \"aac151fb-e17c-4bd3-b2c9-bd1ed2e01c7c\") " pod="openstack/glance-default-external-api-0" Dec 10 12:35:16 crc kubenswrapper[4689]: I1210 12:35:16.852033 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/aac151fb-e17c-4bd3-b2c9-bd1ed2e01c7c-scripts\") pod \"glance-default-external-api-0\" (UID: \"aac151fb-e17c-4bd3-b2c9-bd1ed2e01c7c\") " pod="openstack/glance-default-external-api-0" Dec 10 12:35:16 crc kubenswrapper[4689]: I1210 12:35:16.853533 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aac151fb-e17c-4bd3-b2c9-bd1ed2e01c7c-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"aac151fb-e17c-4bd3-b2c9-bd1ed2e01c7c\") " pod="openstack/glance-default-external-api-0" Dec 10 12:35:16 crc kubenswrapper[4689]: I1210 12:35:16.853551 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/aac151fb-e17c-4bd3-b2c9-bd1ed2e01c7c-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"aac151fb-e17c-4bd3-b2c9-bd1ed2e01c7c\") " pod="openstack/glance-default-external-api-0" Dec 10 12:35:16 crc kubenswrapper[4689]: I1210 12:35:16.853766 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aac151fb-e17c-4bd3-b2c9-bd1ed2e01c7c-config-data\") pod \"glance-default-external-api-0\" (UID: \"aac151fb-e17c-4bd3-b2c9-bd1ed2e01c7c\") " pod="openstack/glance-default-external-api-0" Dec 10 12:35:16 crc kubenswrapper[4689]: I1210 12:35:16.867616 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xkwvg\" (UniqueName: \"kubernetes.io/projected/aac151fb-e17c-4bd3-b2c9-bd1ed2e01c7c-kube-api-access-xkwvg\") pod \"glance-default-external-api-0\" (UID: \"aac151fb-e17c-4bd3-b2c9-bd1ed2e01c7c\") " pod="openstack/glance-default-external-api-0" Dec 10 12:35:16 crc kubenswrapper[4689]: I1210 12:35:16.873536 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-external-api-0\" (UID: \"aac151fb-e17c-4bd3-b2c9-bd1ed2e01c7c\") " pod="openstack/glance-default-external-api-0" Dec 10 12:35:16 crc kubenswrapper[4689]: I1210 12:35:16.955915 4689 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Dec 10 12:35:17 crc kubenswrapper[4689]: E1210 12:35:17.211082 4689 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified" Dec 10 12:35:17 crc kubenswrapper[4689]: E1210 12:35:17.211242 4689 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cinder-db-sync,Image:quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_set_configs && /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-machine-id,ReadOnly:true,MountPath:/etc/machine-id,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/config-data/merged,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/cinder/cinder.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lt4h9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-db-sync-57ffb_openstack(eb62506f-d5a5-44b0-8da3-125128211e10): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 10 12:35:17 crc kubenswrapper[4689]: E1210 12:35:17.212661 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/cinder-db-sync-57ffb" podUID="eb62506f-d5a5-44b0-8da3-125128211e10" Dec 10 12:35:17 crc kubenswrapper[4689]: I1210 12:35:17.223391 4689 scope.go:117] "RemoveContainer" 
containerID="c8467361e3624c4d9a2432181ec402481c1c135635309d44b19e1a5259fa8e31" Dec 10 12:35:17 crc kubenswrapper[4689]: I1210 12:35:17.540868 4689 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5b946c75cc-47knk" Dec 10 12:35:17 crc kubenswrapper[4689]: I1210 12:35:17.588435 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b946c75cc-47knk" event={"ID":"f21b10f6-e1dc-44d8-a88f-5b5acb7c4e35","Type":"ContainerDied","Data":"b53052e62a7867cebefeca23c079760d45bb251566109920c3afa33de84bf7ec"} Dec 10 12:35:17 crc kubenswrapper[4689]: I1210 12:35:17.588481 4689 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5b946c75cc-47knk" Dec 10 12:35:17 crc kubenswrapper[4689]: I1210 12:35:17.588499 4689 scope.go:117] "RemoveContainer" containerID="71316d508dd23a3da218b19a4de4ff395eecf266f93f0bb071a559700c73d351" Dec 10 12:35:17 crc kubenswrapper[4689]: E1210 12:35:17.590302 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified\\\"\"" pod="openstack/cinder-db-sync-57ffb" podUID="eb62506f-d5a5-44b0-8da3-125128211e10" Dec 10 12:35:17 crc kubenswrapper[4689]: I1210 12:35:17.619232 4689 scope.go:117] "RemoveContainer" containerID="d77c04c19a19fc030efd89aa6cfa9fc23646f0d174be21e755febe67b1fba4dd" Dec 10 12:35:17 crc kubenswrapper[4689]: I1210 12:35:17.662311 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f21b10f6-e1dc-44d8-a88f-5b5acb7c4e35-ovsdbserver-nb\") pod \"f21b10f6-e1dc-44d8-a88f-5b5acb7c4e35\" (UID: \"f21b10f6-e1dc-44d8-a88f-5b5acb7c4e35\") " Dec 10 12:35:17 crc kubenswrapper[4689]: I1210 12:35:17.662677 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f21b10f6-e1dc-44d8-a88f-5b5acb7c4e35-config\") pod \"f21b10f6-e1dc-44d8-a88f-5b5acb7c4e35\" (UID: \"f21b10f6-e1dc-44d8-a88f-5b5acb7c4e35\") " Dec 10 12:35:17 crc kubenswrapper[4689]: I1210 12:35:17.662703 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f21b10f6-e1dc-44d8-a88f-5b5acb7c4e35-ovsdbserver-sb\") pod \"f21b10f6-e1dc-44d8-a88f-5b5acb7c4e35\" (UID: \"f21b10f6-e1dc-44d8-a88f-5b5acb7c4e35\") " Dec 10 12:35:17 crc kubenswrapper[4689]: I1210 12:35:17.662821 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f21b10f6-e1dc-44d8-a88f-5b5acb7c4e35-dns-svc\") pod \"f21b10f6-e1dc-44d8-a88f-5b5acb7c4e35\" (UID: \"f21b10f6-e1dc-44d8-a88f-5b5acb7c4e35\") " Dec 10 12:35:17 crc kubenswrapper[4689]: I1210 12:35:17.662849 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7vbmk\" (UniqueName: \"kubernetes.io/projected/f21b10f6-e1dc-44d8-a88f-5b5acb7c4e35-kube-api-access-7vbmk\") pod \"f21b10f6-e1dc-44d8-a88f-5b5acb7c4e35\" (UID: \"f21b10f6-e1dc-44d8-a88f-5b5acb7c4e35\") " Dec 10 12:35:17 crc kubenswrapper[4689]: I1210 12:35:17.677140 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f21b10f6-e1dc-44d8-a88f-5b5acb7c4e35-kube-api-access-7vbmk" (OuterVolumeSpecName: "kube-api-access-7vbmk") pod 
"f21b10f6-e1dc-44d8-a88f-5b5acb7c4e35" (UID: "f21b10f6-e1dc-44d8-a88f-5b5acb7c4e35"). InnerVolumeSpecName "kube-api-access-7vbmk". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 12:35:17 crc kubenswrapper[4689]: I1210 12:35:17.707396 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-db-sync-jz2g4"] Dec 10 12:35:17 crc kubenswrapper[4689]: I1210 12:35:17.726429 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f21b10f6-e1dc-44d8-a88f-5b5acb7c4e35-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "f21b10f6-e1dc-44d8-a88f-5b5acb7c4e35" (UID: "f21b10f6-e1dc-44d8-a88f-5b5acb7c4e35"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 12:35:17 crc kubenswrapper[4689]: I1210 12:35:17.730599 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f21b10f6-e1dc-44d8-a88f-5b5acb7c4e35-config" (OuterVolumeSpecName: "config") pod "f21b10f6-e1dc-44d8-a88f-5b5acb7c4e35" (UID: "f21b10f6-e1dc-44d8-a88f-5b5acb7c4e35"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 12:35:17 crc kubenswrapper[4689]: I1210 12:35:17.737396 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f21b10f6-e1dc-44d8-a88f-5b5acb7c4e35-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "f21b10f6-e1dc-44d8-a88f-5b5acb7c4e35" (UID: "f21b10f6-e1dc-44d8-a88f-5b5acb7c4e35"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 12:35:17 crc kubenswrapper[4689]: I1210 12:35:17.750152 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f21b10f6-e1dc-44d8-a88f-5b5acb7c4e35-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "f21b10f6-e1dc-44d8-a88f-5b5acb7c4e35" (UID: "f21b10f6-e1dc-44d8-a88f-5b5acb7c4e35"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 12:35:17 crc kubenswrapper[4689]: I1210 12:35:17.765023 4689 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f21b10f6-e1dc-44d8-a88f-5b5acb7c4e35-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 10 12:35:17 crc kubenswrapper[4689]: I1210 12:35:17.765063 4689 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f21b10f6-e1dc-44d8-a88f-5b5acb7c4e35-config\") on node \"crc\" DevicePath \"\"" Dec 10 12:35:17 crc kubenswrapper[4689]: I1210 12:35:17.765075 4689 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f21b10f6-e1dc-44d8-a88f-5b5acb7c4e35-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 10 12:35:17 crc kubenswrapper[4689]: I1210 12:35:17.765085 4689 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f21b10f6-e1dc-44d8-a88f-5b5acb7c4e35-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 10 12:35:17 crc kubenswrapper[4689]: I1210 12:35:17.765096 4689 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7vbmk\" (UniqueName: \"kubernetes.io/projected/f21b10f6-e1dc-44d8-a88f-5b5acb7c4e35-kube-api-access-7vbmk\") on node \"crc\" DevicePath \"\"" Dec 10 12:35:17 crc kubenswrapper[4689]: W1210 12:35:17.790590 4689 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcf5e0487_a380_424f_aa29_f815b50550db.slice/crio-7609f65d481dc88fc595f1dbf6ba25da9a6220b5fff5ba5dcf82ea74a51fe895 WatchSource:0}: Error finding container 7609f65d481dc88fc595f1dbf6ba25da9a6220b5fff5ba5dcf82ea74a51fe895: Status 404 returned error can't find the container with id 7609f65d481dc88fc595f1dbf6ba25da9a6220b5fff5ba5dcf82ea74a51fe895 Dec 10 12:35:17 crc kubenswrapper[4689]: I1210 12:35:17.791117 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 10 12:35:17 crc kubenswrapper[4689]: I1210 12:35:17.836648 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-5vjjb"] Dec 10 12:35:17 crc kubenswrapper[4689]: I1210 12:35:17.917437 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Dec 10 12:35:18 crc kubenswrapper[4689]: I1210 12:35:18.063197 4689 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5b946c75cc-47knk"] Dec 10 12:35:18 crc kubenswrapper[4689]: I1210 12:35:18.073907 4689 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5b946c75cc-47knk"] Dec 10 12:35:18 crc kubenswrapper[4689]: I1210 12:35:18.513438 4689 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="55350420-790f-4f9e-8ba6-3872bd55ceb8" path="/var/lib/kubelet/pods/55350420-790f-4f9e-8ba6-3872bd55ceb8/volumes" Dec 10 12:35:18 crc kubenswrapper[4689]: I1210 12:35:18.514735 4689 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f21b10f6-e1dc-44d8-a88f-5b5acb7c4e35" path="/var/lib/kubelet/pods/f21b10f6-e1dc-44d8-a88f-5b5acb7c4e35/volumes" Dec 10 12:35:18 crc kubenswrapper[4689]: I1210 12:35:18.602082 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"aac151fb-e17c-4bd3-b2c9-bd1ed2e01c7c","Type":"ContainerStarted","Data":"04acef4caa91113cb7fa3a016eba65086b09c90d1343a0fae707547215a21338"} 
Dec 10 12:35:18 crc kubenswrapper[4689]: I1210 12:35:18.602125 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"aac151fb-e17c-4bd3-b2c9-bd1ed2e01c7c","Type":"ContainerStarted","Data":"db046a134769a99000c0fc3914d4138d965fd8c037c767e9f7fe2ebd49508bb3"} Dec 10 12:35:18 crc kubenswrapper[4689]: I1210 12:35:18.606459 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"cf5e0487-a380-424f-aa29-f815b50550db","Type":"ContainerStarted","Data":"2a55af2cef7a7e1d0e6e21d2022cf74765424cde599a0aa0a053d82425f86759"} Dec 10 12:35:18 crc kubenswrapper[4689]: I1210 12:35:18.606519 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"cf5e0487-a380-424f-aa29-f815b50550db","Type":"ContainerStarted","Data":"7609f65d481dc88fc595f1dbf6ba25da9a6220b5fff5ba5dcf82ea74a51fe895"} Dec 10 12:35:18 crc kubenswrapper[4689]: I1210 12:35:18.609443 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-dkrw7" event={"ID":"e10d322a-3fb7-451d-9a38-f2659e3d32e5","Type":"ContainerStarted","Data":"25b21492f606c454d60e2873c4318710c21760174d91f0adb977e3ce74ea3174"} Dec 10 12:35:18 crc kubenswrapper[4689]: I1210 12:35:18.611465 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-db-sync-jz2g4" event={"ID":"574d5244-06f3-49f9-b8b8-93bd57d4fc35","Type":"ContainerStarted","Data":"f13285d3307bd7bd46a281f37cb8a619c1b3c5f14efa71162b8cb468e43ae83e"} Dec 10 12:35:18 crc kubenswrapper[4689]: I1210 12:35:18.616578 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-7xjms" event={"ID":"ae5c2d84-08cd-462f-b8d5-ba416353f365","Type":"ContainerStarted","Data":"9da03645378f4c973322bd49c9e58d79583b7fc7d26fc3b956f141b25b1c401b"} Dec 10 12:35:18 crc kubenswrapper[4689]: I1210 12:35:18.619292 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-5vjjb" event={"ID":"c0c1f5b6-e80c-4950-9b6e-181733099c57","Type":"ContainerStarted","Data":"182e5847d4d581a645c0b6bcdbb1ce522d424a83fdd9ae24ea604dedec83ffd9"} Dec 10 12:35:18 crc kubenswrapper[4689]: I1210 12:35:18.619335 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-5vjjb" event={"ID":"c0c1f5b6-e80c-4950-9b6e-181733099c57","Type":"ContainerStarted","Data":"e129c7089f174bbf58cc39813a902b97dd50e72ad49daaa99f9dc0a357c5d291"} Dec 10 12:35:18 crc kubenswrapper[4689]: I1210 12:35:18.627080 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-sync-dkrw7" podStartSLOduration=6.7153664840000005 podStartE2EDuration="27.627060624s" podCreationTimestamp="2025-12-10 12:34:51 +0000 UTC" firstStartedPulling="2025-12-10 12:34:56.254499688 +0000 UTC m=+1164.042580826" lastFinishedPulling="2025-12-10 12:35:17.166193828 +0000 UTC m=+1184.954274966" observedRunningTime="2025-12-10 12:35:18.62528495 +0000 UTC m=+1186.413366088" watchObservedRunningTime="2025-12-10 12:35:18.627060624 +0000 UTC m=+1186.415141782" Dec 10 12:35:18 crc kubenswrapper[4689]: I1210 12:35:18.631314 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1dfb9cf4-7f75-40c2-ade2-81a91be5ad12","Type":"ContainerStarted","Data":"954224d40b7f240c0cce06b7020134601928aad1e5d6674925099e62f2668576"} Dec 10 12:35:18 crc kubenswrapper[4689]: I1210 12:35:18.645817 4689 pod_startup_latency_tracker.go:104] "Observed pod startup 
duration" pod="openstack/keystone-bootstrap-5vjjb" podStartSLOduration=10.645798859 podStartE2EDuration="10.645798859s" podCreationTimestamp="2025-12-10 12:35:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 12:35:18.640603561 +0000 UTC m=+1186.428684699" watchObservedRunningTime="2025-12-10 12:35:18.645798859 +0000 UTC m=+1186.433879997" Dec 10 12:35:18 crc kubenswrapper[4689]: I1210 12:35:18.685662 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-sync-7xjms" podStartSLOduration=6.693324067 podStartE2EDuration="27.685641599s" podCreationTimestamp="2025-12-10 12:34:51 +0000 UTC" firstStartedPulling="2025-12-10 12:34:56.192256293 +0000 UTC m=+1163.980337431" lastFinishedPulling="2025-12-10 12:35:17.184573835 +0000 UTC m=+1184.972654963" observedRunningTime="2025-12-10 12:35:18.658434993 +0000 UTC m=+1186.446516141" watchObservedRunningTime="2025-12-10 12:35:18.685641599 +0000 UTC m=+1186.473722737" Dec 10 12:35:19 crc kubenswrapper[4689]: I1210 12:35:19.659836 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"cf5e0487-a380-424f-aa29-f815b50550db","Type":"ContainerStarted","Data":"eba3388b6217b672df9bdd08c3a6c98eb3af7448cf48f3d2ee833c4519993a83"} Dec 10 12:35:19 crc kubenswrapper[4689]: I1210 12:35:19.687839 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=11.687819597 podStartE2EDuration="11.687819597s" podCreationTimestamp="2025-12-10 12:35:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 12:35:19.684503505 +0000 UTC m=+1187.472584643" watchObservedRunningTime="2025-12-10 12:35:19.687819597 +0000 UTC m=+1187.475900735" Dec 10 12:35:21 crc kubenswrapper[4689]: I1210 12:35:21.685663 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"aac151fb-e17c-4bd3-b2c9-bd1ed2e01c7c","Type":"ContainerStarted","Data":"f3fc097cd14dd750789d612bb5c1f1ac70f37fcc67df585c29527ae53325bc07"} Dec 10 12:35:21 crc kubenswrapper[4689]: I1210 12:35:21.689645 4689 generic.go:334] "Generic (PLEG): container finished" podID="c0c1f5b6-e80c-4950-9b6e-181733099c57" containerID="182e5847d4d581a645c0b6bcdbb1ce522d424a83fdd9ae24ea604dedec83ffd9" exitCode=0 Dec 10 12:35:21 crc kubenswrapper[4689]: I1210 12:35:21.689696 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-5vjjb" event={"ID":"c0c1f5b6-e80c-4950-9b6e-181733099c57","Type":"ContainerDied","Data":"182e5847d4d581a645c0b6bcdbb1ce522d424a83fdd9ae24ea604dedec83ffd9"} Dec 10 12:35:21 crc kubenswrapper[4689]: I1210 12:35:21.711788 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=5.711759431 podStartE2EDuration="5.711759431s" podCreationTimestamp="2025-12-10 12:35:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 12:35:21.705940617 +0000 UTC m=+1189.494021775" watchObservedRunningTime="2025-12-10 12:35:21.711759431 +0000 UTC m=+1189.499840599" Dec 10 12:35:22 crc kubenswrapper[4689]: I1210 12:35:22.699729 4689 generic.go:334] "Generic (PLEG): container finished" 
podID="e10d322a-3fb7-451d-9a38-f2659e3d32e5" containerID="25b21492f606c454d60e2873c4318710c21760174d91f0adb977e3ce74ea3174" exitCode=0 Dec 10 12:35:22 crc kubenswrapper[4689]: I1210 12:35:22.699814 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-dkrw7" event={"ID":"e10d322a-3fb7-451d-9a38-f2659e3d32e5","Type":"ContainerDied","Data":"25b21492f606c454d60e2873c4318710c21760174d91f0adb977e3ce74ea3174"} Dec 10 12:35:23 crc kubenswrapper[4689]: I1210 12:35:23.201589 4689 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-5vjjb" Dec 10 12:35:23 crc kubenswrapper[4689]: I1210 12:35:23.369556 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/c0c1f5b6-e80c-4950-9b6e-181733099c57-credential-keys\") pod \"c0c1f5b6-e80c-4950-9b6e-181733099c57\" (UID: \"c0c1f5b6-e80c-4950-9b6e-181733099c57\") " Dec 10 12:35:23 crc kubenswrapper[4689]: I1210 12:35:23.369604 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c0c1f5b6-e80c-4950-9b6e-181733099c57-config-data\") pod \"c0c1f5b6-e80c-4950-9b6e-181733099c57\" (UID: \"c0c1f5b6-e80c-4950-9b6e-181733099c57\") " Dec 10 12:35:23 crc kubenswrapper[4689]: I1210 12:35:23.369666 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c0c1f5b6-e80c-4950-9b6e-181733099c57-scripts\") pod \"c0c1f5b6-e80c-4950-9b6e-181733099c57\" (UID: \"c0c1f5b6-e80c-4950-9b6e-181733099c57\") " Dec 10 12:35:23 crc kubenswrapper[4689]: I1210 12:35:23.369695 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/c0c1f5b6-e80c-4950-9b6e-181733099c57-fernet-keys\") pod \"c0c1f5b6-e80c-4950-9b6e-181733099c57\" (UID: \"c0c1f5b6-e80c-4950-9b6e-181733099c57\") " Dec 10 12:35:23 crc kubenswrapper[4689]: I1210 12:35:23.369735 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6m2wm\" (UniqueName: \"kubernetes.io/projected/c0c1f5b6-e80c-4950-9b6e-181733099c57-kube-api-access-6m2wm\") pod \"c0c1f5b6-e80c-4950-9b6e-181733099c57\" (UID: \"c0c1f5b6-e80c-4950-9b6e-181733099c57\") " Dec 10 12:35:23 crc kubenswrapper[4689]: I1210 12:35:23.369768 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c0c1f5b6-e80c-4950-9b6e-181733099c57-combined-ca-bundle\") pod \"c0c1f5b6-e80c-4950-9b6e-181733099c57\" (UID: \"c0c1f5b6-e80c-4950-9b6e-181733099c57\") " Dec 10 12:35:23 crc kubenswrapper[4689]: I1210 12:35:23.375110 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c0c1f5b6-e80c-4950-9b6e-181733099c57-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "c0c1f5b6-e80c-4950-9b6e-181733099c57" (UID: "c0c1f5b6-e80c-4950-9b6e-181733099c57"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 12:35:23 crc kubenswrapper[4689]: I1210 12:35:23.375159 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c0c1f5b6-e80c-4950-9b6e-181733099c57-kube-api-access-6m2wm" (OuterVolumeSpecName: "kube-api-access-6m2wm") pod "c0c1f5b6-e80c-4950-9b6e-181733099c57" (UID: "c0c1f5b6-e80c-4950-9b6e-181733099c57"). 
InnerVolumeSpecName "kube-api-access-6m2wm". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 12:35:23 crc kubenswrapper[4689]: I1210 12:35:23.375602 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c0c1f5b6-e80c-4950-9b6e-181733099c57-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "c0c1f5b6-e80c-4950-9b6e-181733099c57" (UID: "c0c1f5b6-e80c-4950-9b6e-181733099c57"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 12:35:23 crc kubenswrapper[4689]: I1210 12:35:23.375634 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c0c1f5b6-e80c-4950-9b6e-181733099c57-scripts" (OuterVolumeSpecName: "scripts") pod "c0c1f5b6-e80c-4950-9b6e-181733099c57" (UID: "c0c1f5b6-e80c-4950-9b6e-181733099c57"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 12:35:23 crc kubenswrapper[4689]: I1210 12:35:23.395808 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c0c1f5b6-e80c-4950-9b6e-181733099c57-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c0c1f5b6-e80c-4950-9b6e-181733099c57" (UID: "c0c1f5b6-e80c-4950-9b6e-181733099c57"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 12:35:23 crc kubenswrapper[4689]: I1210 12:35:23.401588 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c0c1f5b6-e80c-4950-9b6e-181733099c57-config-data" (OuterVolumeSpecName: "config-data") pod "c0c1f5b6-e80c-4950-9b6e-181733099c57" (UID: "c0c1f5b6-e80c-4950-9b6e-181733099c57"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 12:35:23 crc kubenswrapper[4689]: I1210 12:35:23.472011 4689 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/c0c1f5b6-e80c-4950-9b6e-181733099c57-credential-keys\") on node \"crc\" DevicePath \"\"" Dec 10 12:35:23 crc kubenswrapper[4689]: I1210 12:35:23.472045 4689 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c0c1f5b6-e80c-4950-9b6e-181733099c57-config-data\") on node \"crc\" DevicePath \"\"" Dec 10 12:35:23 crc kubenswrapper[4689]: I1210 12:35:23.472056 4689 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c0c1f5b6-e80c-4950-9b6e-181733099c57-scripts\") on node \"crc\" DevicePath \"\"" Dec 10 12:35:23 crc kubenswrapper[4689]: I1210 12:35:23.472065 4689 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/c0c1f5b6-e80c-4950-9b6e-181733099c57-fernet-keys\") on node \"crc\" DevicePath \"\"" Dec 10 12:35:23 crc kubenswrapper[4689]: I1210 12:35:23.472074 4689 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6m2wm\" (UniqueName: \"kubernetes.io/projected/c0c1f5b6-e80c-4950-9b6e-181733099c57-kube-api-access-6m2wm\") on node \"crc\" DevicePath \"\"" Dec 10 12:35:23 crc kubenswrapper[4689]: I1210 12:35:23.472084 4689 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c0c1f5b6-e80c-4950-9b6e-181733099c57-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 10 12:35:23 crc kubenswrapper[4689]: I1210 12:35:23.715657 4689 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-5vjjb" Dec 10 12:35:23 crc kubenswrapper[4689]: I1210 12:35:23.715707 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-5vjjb" event={"ID":"c0c1f5b6-e80c-4950-9b6e-181733099c57","Type":"ContainerDied","Data":"e129c7089f174bbf58cc39813a902b97dd50e72ad49daaa99f9dc0a357c5d291"} Dec 10 12:35:23 crc kubenswrapper[4689]: I1210 12:35:23.715761 4689 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e129c7089f174bbf58cc39813a902b97dd50e72ad49daaa99f9dc0a357c5d291" Dec 10 12:35:23 crc kubenswrapper[4689]: I1210 12:35:23.929432 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-85bc68c5bb-jxqf7"] Dec 10 12:35:23 crc kubenswrapper[4689]: E1210 12:35:23.930150 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f21b10f6-e1dc-44d8-a88f-5b5acb7c4e35" containerName="dnsmasq-dns" Dec 10 12:35:23 crc kubenswrapper[4689]: I1210 12:35:23.930170 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="f21b10f6-e1dc-44d8-a88f-5b5acb7c4e35" containerName="dnsmasq-dns" Dec 10 12:35:23 crc kubenswrapper[4689]: E1210 12:35:23.930212 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f21b10f6-e1dc-44d8-a88f-5b5acb7c4e35" containerName="init" Dec 10 12:35:23 crc kubenswrapper[4689]: I1210 12:35:23.930222 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="f21b10f6-e1dc-44d8-a88f-5b5acb7c4e35" containerName="init" Dec 10 12:35:23 crc kubenswrapper[4689]: E1210 12:35:23.930240 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c0c1f5b6-e80c-4950-9b6e-181733099c57" containerName="keystone-bootstrap" Dec 10 12:35:23 crc kubenswrapper[4689]: I1210 12:35:23.930247 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="c0c1f5b6-e80c-4950-9b6e-181733099c57" containerName="keystone-bootstrap" Dec 10 12:35:23 crc kubenswrapper[4689]: I1210 12:35:23.930453 4689 memory_manager.go:354] "RemoveStaleState removing state" podUID="f21b10f6-e1dc-44d8-a88f-5b5acb7c4e35" containerName="dnsmasq-dns" Dec 10 12:35:23 crc kubenswrapper[4689]: I1210 12:35:23.930476 4689 memory_manager.go:354] "RemoveStaleState removing state" podUID="c0c1f5b6-e80c-4950-9b6e-181733099c57" containerName="keystone-bootstrap" Dec 10 12:35:23 crc kubenswrapper[4689]: I1210 12:35:23.931246 4689 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-85bc68c5bb-jxqf7" Dec 10 12:35:23 crc kubenswrapper[4689]: I1210 12:35:23.934839 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Dec 10 12:35:23 crc kubenswrapper[4689]: I1210 12:35:23.935180 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-public-svc" Dec 10 12:35:23 crc kubenswrapper[4689]: I1210 12:35:23.935422 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Dec 10 12:35:23 crc kubenswrapper[4689]: I1210 12:35:23.935708 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Dec 10 12:35:23 crc kubenswrapper[4689]: I1210 12:35:23.935851 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-internal-svc" Dec 10 12:35:23 crc kubenswrapper[4689]: I1210 12:35:23.936020 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-5qbhw" Dec 10 12:35:23 crc kubenswrapper[4689]: I1210 12:35:23.954984 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-85bc68c5bb-jxqf7"] Dec 10 12:35:24 crc kubenswrapper[4689]: I1210 12:35:24.083504 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5ab8f9bd-1d66-4142-afe8-1cfce8e5f736-internal-tls-certs\") pod \"keystone-85bc68c5bb-jxqf7\" (UID: \"5ab8f9bd-1d66-4142-afe8-1cfce8e5f736\") " pod="openstack/keystone-85bc68c5bb-jxqf7" Dec 10 12:35:24 crc kubenswrapper[4689]: I1210 12:35:24.083547 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5ab8f9bd-1d66-4142-afe8-1cfce8e5f736-config-data\") pod \"keystone-85bc68c5bb-jxqf7\" (UID: \"5ab8f9bd-1d66-4142-afe8-1cfce8e5f736\") " pod="openstack/keystone-85bc68c5bb-jxqf7" Dec 10 12:35:24 crc kubenswrapper[4689]: I1210 12:35:24.083569 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5ab8f9bd-1d66-4142-afe8-1cfce8e5f736-public-tls-certs\") pod \"keystone-85bc68c5bb-jxqf7\" (UID: \"5ab8f9bd-1d66-4142-afe8-1cfce8e5f736\") " pod="openstack/keystone-85bc68c5bb-jxqf7" Dec 10 12:35:24 crc kubenswrapper[4689]: I1210 12:35:24.083595 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/5ab8f9bd-1d66-4142-afe8-1cfce8e5f736-fernet-keys\") pod \"keystone-85bc68c5bb-jxqf7\" (UID: \"5ab8f9bd-1d66-4142-afe8-1cfce8e5f736\") " pod="openstack/keystone-85bc68c5bb-jxqf7" Dec 10 12:35:24 crc kubenswrapper[4689]: I1210 12:35:24.083629 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5ab8f9bd-1d66-4142-afe8-1cfce8e5f736-combined-ca-bundle\") pod \"keystone-85bc68c5bb-jxqf7\" (UID: \"5ab8f9bd-1d66-4142-afe8-1cfce8e5f736\") " pod="openstack/keystone-85bc68c5bb-jxqf7" Dec 10 12:35:24 crc kubenswrapper[4689]: I1210 12:35:24.083701 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vww7g\" (UniqueName: \"kubernetes.io/projected/5ab8f9bd-1d66-4142-afe8-1cfce8e5f736-kube-api-access-vww7g\") pod \"keystone-85bc68c5bb-jxqf7\" 
(UID: \"5ab8f9bd-1d66-4142-afe8-1cfce8e5f736\") " pod="openstack/keystone-85bc68c5bb-jxqf7" Dec 10 12:35:24 crc kubenswrapper[4689]: I1210 12:35:24.083730 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5ab8f9bd-1d66-4142-afe8-1cfce8e5f736-scripts\") pod \"keystone-85bc68c5bb-jxqf7\" (UID: \"5ab8f9bd-1d66-4142-afe8-1cfce8e5f736\") " pod="openstack/keystone-85bc68c5bb-jxqf7" Dec 10 12:35:24 crc kubenswrapper[4689]: I1210 12:35:24.083770 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/5ab8f9bd-1d66-4142-afe8-1cfce8e5f736-credential-keys\") pod \"keystone-85bc68c5bb-jxqf7\" (UID: \"5ab8f9bd-1d66-4142-afe8-1cfce8e5f736\") " pod="openstack/keystone-85bc68c5bb-jxqf7" Dec 10 12:35:24 crc kubenswrapper[4689]: I1210 12:35:24.184835 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/5ab8f9bd-1d66-4142-afe8-1cfce8e5f736-credential-keys\") pod \"keystone-85bc68c5bb-jxqf7\" (UID: \"5ab8f9bd-1d66-4142-afe8-1cfce8e5f736\") " pod="openstack/keystone-85bc68c5bb-jxqf7" Dec 10 12:35:24 crc kubenswrapper[4689]: I1210 12:35:24.184897 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5ab8f9bd-1d66-4142-afe8-1cfce8e5f736-internal-tls-certs\") pod \"keystone-85bc68c5bb-jxqf7\" (UID: \"5ab8f9bd-1d66-4142-afe8-1cfce8e5f736\") " pod="openstack/keystone-85bc68c5bb-jxqf7" Dec 10 12:35:24 crc kubenswrapper[4689]: I1210 12:35:24.184920 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5ab8f9bd-1d66-4142-afe8-1cfce8e5f736-config-data\") pod \"keystone-85bc68c5bb-jxqf7\" (UID: \"5ab8f9bd-1d66-4142-afe8-1cfce8e5f736\") " pod="openstack/keystone-85bc68c5bb-jxqf7" Dec 10 12:35:24 crc kubenswrapper[4689]: I1210 12:35:24.184941 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5ab8f9bd-1d66-4142-afe8-1cfce8e5f736-public-tls-certs\") pod \"keystone-85bc68c5bb-jxqf7\" (UID: \"5ab8f9bd-1d66-4142-afe8-1cfce8e5f736\") " pod="openstack/keystone-85bc68c5bb-jxqf7" Dec 10 12:35:24 crc kubenswrapper[4689]: I1210 12:35:24.184984 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/5ab8f9bd-1d66-4142-afe8-1cfce8e5f736-fernet-keys\") pod \"keystone-85bc68c5bb-jxqf7\" (UID: \"5ab8f9bd-1d66-4142-afe8-1cfce8e5f736\") " pod="openstack/keystone-85bc68c5bb-jxqf7" Dec 10 12:35:24 crc kubenswrapper[4689]: I1210 12:35:24.185018 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5ab8f9bd-1d66-4142-afe8-1cfce8e5f736-combined-ca-bundle\") pod \"keystone-85bc68c5bb-jxqf7\" (UID: \"5ab8f9bd-1d66-4142-afe8-1cfce8e5f736\") " pod="openstack/keystone-85bc68c5bb-jxqf7" Dec 10 12:35:24 crc kubenswrapper[4689]: I1210 12:35:24.185055 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vww7g\" (UniqueName: \"kubernetes.io/projected/5ab8f9bd-1d66-4142-afe8-1cfce8e5f736-kube-api-access-vww7g\") pod \"keystone-85bc68c5bb-jxqf7\" (UID: \"5ab8f9bd-1d66-4142-afe8-1cfce8e5f736\") " 
pod="openstack/keystone-85bc68c5bb-jxqf7" Dec 10 12:35:24 crc kubenswrapper[4689]: I1210 12:35:24.185080 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5ab8f9bd-1d66-4142-afe8-1cfce8e5f736-scripts\") pod \"keystone-85bc68c5bb-jxqf7\" (UID: \"5ab8f9bd-1d66-4142-afe8-1cfce8e5f736\") " pod="openstack/keystone-85bc68c5bb-jxqf7" Dec 10 12:35:24 crc kubenswrapper[4689]: I1210 12:35:24.189642 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5ab8f9bd-1d66-4142-afe8-1cfce8e5f736-scripts\") pod \"keystone-85bc68c5bb-jxqf7\" (UID: \"5ab8f9bd-1d66-4142-afe8-1cfce8e5f736\") " pod="openstack/keystone-85bc68c5bb-jxqf7" Dec 10 12:35:24 crc kubenswrapper[4689]: I1210 12:35:24.190227 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5ab8f9bd-1d66-4142-afe8-1cfce8e5f736-internal-tls-certs\") pod \"keystone-85bc68c5bb-jxqf7\" (UID: \"5ab8f9bd-1d66-4142-afe8-1cfce8e5f736\") " pod="openstack/keystone-85bc68c5bb-jxqf7" Dec 10 12:35:24 crc kubenswrapper[4689]: I1210 12:35:24.190356 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5ab8f9bd-1d66-4142-afe8-1cfce8e5f736-combined-ca-bundle\") pod \"keystone-85bc68c5bb-jxqf7\" (UID: \"5ab8f9bd-1d66-4142-afe8-1cfce8e5f736\") " pod="openstack/keystone-85bc68c5bb-jxqf7" Dec 10 12:35:24 crc kubenswrapper[4689]: I1210 12:35:24.191734 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/5ab8f9bd-1d66-4142-afe8-1cfce8e5f736-fernet-keys\") pod \"keystone-85bc68c5bb-jxqf7\" (UID: \"5ab8f9bd-1d66-4142-afe8-1cfce8e5f736\") " pod="openstack/keystone-85bc68c5bb-jxqf7" Dec 10 12:35:24 crc kubenswrapper[4689]: I1210 12:35:24.195549 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5ab8f9bd-1d66-4142-afe8-1cfce8e5f736-config-data\") pod \"keystone-85bc68c5bb-jxqf7\" (UID: \"5ab8f9bd-1d66-4142-afe8-1cfce8e5f736\") " pod="openstack/keystone-85bc68c5bb-jxqf7" Dec 10 12:35:24 crc kubenswrapper[4689]: I1210 12:35:24.195900 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/5ab8f9bd-1d66-4142-afe8-1cfce8e5f736-credential-keys\") pod \"keystone-85bc68c5bb-jxqf7\" (UID: \"5ab8f9bd-1d66-4142-afe8-1cfce8e5f736\") " pod="openstack/keystone-85bc68c5bb-jxqf7" Dec 10 12:35:24 crc kubenswrapper[4689]: I1210 12:35:24.200551 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5ab8f9bd-1d66-4142-afe8-1cfce8e5f736-public-tls-certs\") pod \"keystone-85bc68c5bb-jxqf7\" (UID: \"5ab8f9bd-1d66-4142-afe8-1cfce8e5f736\") " pod="openstack/keystone-85bc68c5bb-jxqf7" Dec 10 12:35:24 crc kubenswrapper[4689]: I1210 12:35:24.205280 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vww7g\" (UniqueName: \"kubernetes.io/projected/5ab8f9bd-1d66-4142-afe8-1cfce8e5f736-kube-api-access-vww7g\") pod \"keystone-85bc68c5bb-jxqf7\" (UID: \"5ab8f9bd-1d66-4142-afe8-1cfce8e5f736\") " pod="openstack/keystone-85bc68c5bb-jxqf7" Dec 10 12:35:24 crc kubenswrapper[4689]: I1210 12:35:24.311407 4689 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-85bc68c5bb-jxqf7" Dec 10 12:35:24 crc kubenswrapper[4689]: I1210 12:35:24.735881 4689 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-dkrw7" Dec 10 12:35:24 crc kubenswrapper[4689]: I1210 12:35:24.737226 4689 generic.go:334] "Generic (PLEG): container finished" podID="79d420d1-6ba7-4cf2-9e13-b046a65d378c" containerID="d38bf7b66608e6c3d7f782ed0b0e1e26001a4bbea79285f0ebc5819b4503ac2d" exitCode=0 Dec 10 12:35:24 crc kubenswrapper[4689]: I1210 12:35:24.737312 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-kb47t" event={"ID":"79d420d1-6ba7-4cf2-9e13-b046a65d378c","Type":"ContainerDied","Data":"d38bf7b66608e6c3d7f782ed0b0e1e26001a4bbea79285f0ebc5819b4503ac2d"} Dec 10 12:35:24 crc kubenswrapper[4689]: I1210 12:35:24.743846 4689 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-dkrw7" Dec 10 12:35:24 crc kubenswrapper[4689]: I1210 12:35:24.744131 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-dkrw7" event={"ID":"e10d322a-3fb7-451d-9a38-f2659e3d32e5","Type":"ContainerDied","Data":"3f96f731e226d703cb10e43f761eec0d9a3fc4886629b87edd0e7caccc24096c"} Dec 10 12:35:24 crc kubenswrapper[4689]: I1210 12:35:24.744183 4689 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3f96f731e226d703cb10e43f761eec0d9a3fc4886629b87edd0e7caccc24096c" Dec 10 12:35:24 crc kubenswrapper[4689]: I1210 12:35:24.747182 4689 generic.go:334] "Generic (PLEG): container finished" podID="ae5c2d84-08cd-462f-b8d5-ba416353f365" containerID="9da03645378f4c973322bd49c9e58d79583b7fc7d26fc3b956f141b25b1c401b" exitCode=0 Dec 10 12:35:24 crc kubenswrapper[4689]: I1210 12:35:24.747215 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-7xjms" event={"ID":"ae5c2d84-08cd-462f-b8d5-ba416353f365","Type":"ContainerDied","Data":"9da03645378f4c973322bd49c9e58d79583b7fc7d26fc3b956f141b25b1c401b"} Dec 10 12:35:24 crc kubenswrapper[4689]: I1210 12:35:24.802875 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e10d322a-3fb7-451d-9a38-f2659e3d32e5-combined-ca-bundle\") pod \"e10d322a-3fb7-451d-9a38-f2659e3d32e5\" (UID: \"e10d322a-3fb7-451d-9a38-f2659e3d32e5\") " Dec 10 12:35:24 crc kubenswrapper[4689]: I1210 12:35:24.802923 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e10d322a-3fb7-451d-9a38-f2659e3d32e5-logs\") pod \"e10d322a-3fb7-451d-9a38-f2659e3d32e5\" (UID: \"e10d322a-3fb7-451d-9a38-f2659e3d32e5\") " Dec 10 12:35:24 crc kubenswrapper[4689]: I1210 12:35:24.803001 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e10d322a-3fb7-451d-9a38-f2659e3d32e5-scripts\") pod \"e10d322a-3fb7-451d-9a38-f2659e3d32e5\" (UID: \"e10d322a-3fb7-451d-9a38-f2659e3d32e5\") " Dec 10 12:35:24 crc kubenswrapper[4689]: I1210 12:35:24.803089 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e10d322a-3fb7-451d-9a38-f2659e3d32e5-config-data\") pod \"e10d322a-3fb7-451d-9a38-f2659e3d32e5\" (UID: \"e10d322a-3fb7-451d-9a38-f2659e3d32e5\") " Dec 10 12:35:24 crc kubenswrapper[4689]: I1210 12:35:24.803118 4689 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-qk2g9\" (UniqueName: \"kubernetes.io/projected/e10d322a-3fb7-451d-9a38-f2659e3d32e5-kube-api-access-qk2g9\") pod \"e10d322a-3fb7-451d-9a38-f2659e3d32e5\" (UID: \"e10d322a-3fb7-451d-9a38-f2659e3d32e5\") " Dec 10 12:35:24 crc kubenswrapper[4689]: I1210 12:35:24.820422 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e10d322a-3fb7-451d-9a38-f2659e3d32e5-kube-api-access-qk2g9" (OuterVolumeSpecName: "kube-api-access-qk2g9") pod "e10d322a-3fb7-451d-9a38-f2659e3d32e5" (UID: "e10d322a-3fb7-451d-9a38-f2659e3d32e5"). InnerVolumeSpecName "kube-api-access-qk2g9". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 12:35:24 crc kubenswrapper[4689]: I1210 12:35:24.824224 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e10d322a-3fb7-451d-9a38-f2659e3d32e5-logs" (OuterVolumeSpecName: "logs") pod "e10d322a-3fb7-451d-9a38-f2659e3d32e5" (UID: "e10d322a-3fb7-451d-9a38-f2659e3d32e5"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 10 12:35:24 crc kubenswrapper[4689]: I1210 12:35:24.830205 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e10d322a-3fb7-451d-9a38-f2659e3d32e5-scripts" (OuterVolumeSpecName: "scripts") pod "e10d322a-3fb7-451d-9a38-f2659e3d32e5" (UID: "e10d322a-3fb7-451d-9a38-f2659e3d32e5"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 12:35:24 crc kubenswrapper[4689]: I1210 12:35:24.859406 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e10d322a-3fb7-451d-9a38-f2659e3d32e5-config-data" (OuterVolumeSpecName: "config-data") pod "e10d322a-3fb7-451d-9a38-f2659e3d32e5" (UID: "e10d322a-3fb7-451d-9a38-f2659e3d32e5"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 12:35:24 crc kubenswrapper[4689]: I1210 12:35:24.870797 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e10d322a-3fb7-451d-9a38-f2659e3d32e5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e10d322a-3fb7-451d-9a38-f2659e3d32e5" (UID: "e10d322a-3fb7-451d-9a38-f2659e3d32e5"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 12:35:24 crc kubenswrapper[4689]: I1210 12:35:24.905218 4689 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e10d322a-3fb7-451d-9a38-f2659e3d32e5-config-data\") on node \"crc\" DevicePath \"\"" Dec 10 12:35:24 crc kubenswrapper[4689]: I1210 12:35:24.905254 4689 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qk2g9\" (UniqueName: \"kubernetes.io/projected/e10d322a-3fb7-451d-9a38-f2659e3d32e5-kube-api-access-qk2g9\") on node \"crc\" DevicePath \"\"" Dec 10 12:35:24 crc kubenswrapper[4689]: I1210 12:35:24.905266 4689 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e10d322a-3fb7-451d-9a38-f2659e3d32e5-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 10 12:35:24 crc kubenswrapper[4689]: I1210 12:35:24.905277 4689 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e10d322a-3fb7-451d-9a38-f2659e3d32e5-logs\") on node \"crc\" DevicePath \"\"" Dec 10 12:35:24 crc kubenswrapper[4689]: I1210 12:35:24.905287 4689 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e10d322a-3fb7-451d-9a38-f2659e3d32e5-scripts\") on node \"crc\" DevicePath \"\"" Dec 10 12:35:25 crc kubenswrapper[4689]: I1210 12:35:25.144279 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-85bc68c5bb-jxqf7"] Dec 10 12:35:25 crc kubenswrapper[4689]: W1210 12:35:25.145561 4689 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5ab8f9bd_1d66_4142_afe8_1cfce8e5f736.slice/crio-9c70071f3cdf7891d2a282cce312fee9a7f478365c30cfde0c789f027fd49e35 WatchSource:0}: Error finding container 9c70071f3cdf7891d2a282cce312fee9a7f478365c30cfde0c789f027fd49e35: Status 404 returned error can't find the container with id 9c70071f3cdf7891d2a282cce312fee9a7f478365c30cfde0c789f027fd49e35 Dec 10 12:35:25 crc kubenswrapper[4689]: I1210 12:35:25.757178 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1dfb9cf4-7f75-40c2-ade2-81a91be5ad12","Type":"ContainerStarted","Data":"5ac0afc4326b546594b5c3642b4fd2287a1df9e69c249162c16348cdd2e33149"} Dec 10 12:35:25 crc kubenswrapper[4689]: I1210 12:35:25.759147 4689 generic.go:334] "Generic (PLEG): container finished" podID="574d5244-06f3-49f9-b8b8-93bd57d4fc35" containerID="028865beb2e867ba8d6245b6fad600deef923033b87a0d8894f947d386a2b11f" exitCode=0 Dec 10 12:35:25 crc kubenswrapper[4689]: I1210 12:35:25.759214 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-db-sync-jz2g4" event={"ID":"574d5244-06f3-49f9-b8b8-93bd57d4fc35","Type":"ContainerDied","Data":"028865beb2e867ba8d6245b6fad600deef923033b87a0d8894f947d386a2b11f"} Dec 10 12:35:25 crc kubenswrapper[4689]: I1210 12:35:25.760865 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-85bc68c5bb-jxqf7" event={"ID":"5ab8f9bd-1d66-4142-afe8-1cfce8e5f736","Type":"ContainerStarted","Data":"195f70fa333d02ce40423bc05f01adf0b55ce08737fda93dfe52c63864a2da94"} Dec 10 12:35:25 crc kubenswrapper[4689]: I1210 12:35:25.760905 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-85bc68c5bb-jxqf7" event={"ID":"5ab8f9bd-1d66-4142-afe8-1cfce8e5f736","Type":"ContainerStarted","Data":"9c70071f3cdf7891d2a282cce312fee9a7f478365c30cfde0c789f027fd49e35"} Dec 
Dec 10 12:35:25 crc kubenswrapper[4689]: I1210 12:35:25.761275 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/keystone-85bc68c5bb-jxqf7"
Dec 10 12:35:25 crc kubenswrapper[4689]: I1210 12:35:25.827143 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-85bc68c5bb-jxqf7" podStartSLOduration=2.8271178040000002 podStartE2EDuration="2.827117804s" podCreationTimestamp="2025-12-10 12:35:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 12:35:25.807060747 +0000 UTC m=+1193.595141905" watchObservedRunningTime="2025-12-10 12:35:25.827117804 +0000 UTC m=+1193.615198962"
Dec 10 12:35:25 crc kubenswrapper[4689]: I1210 12:35:25.861379 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-b85ffb7d4-zq54p"]
Dec 10 12:35:25 crc kubenswrapper[4689]: E1210 12:35:25.861803 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e10d322a-3fb7-451d-9a38-f2659e3d32e5" containerName="placement-db-sync"
Dec 10 12:35:25 crc kubenswrapper[4689]: I1210 12:35:25.861826 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="e10d322a-3fb7-451d-9a38-f2659e3d32e5" containerName="placement-db-sync"
Dec 10 12:35:25 crc kubenswrapper[4689]: I1210 12:35:25.862104 4689 memory_manager.go:354] "RemoveStaleState removing state" podUID="e10d322a-3fb7-451d-9a38-f2659e3d32e5" containerName="placement-db-sync"
Dec 10 12:35:25 crc kubenswrapper[4689]: I1210 12:35:25.863040 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-b85ffb7d4-zq54p"
Dec 10 12:35:25 crc kubenswrapper[4689]: I1210 12:35:25.864554 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-ck5kx"
Dec 10 12:35:25 crc kubenswrapper[4689]: I1210 12:35:25.865865 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-public-svc"
Dec 10 12:35:25 crc kubenswrapper[4689]: I1210 12:35:25.865885 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-internal-svc"
Dec 10 12:35:25 crc kubenswrapper[4689]: I1210 12:35:25.866054 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts"
Dec 10 12:35:25 crc kubenswrapper[4689]: I1210 12:35:25.866098 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data"
Dec 10 12:35:25 crc kubenswrapper[4689]: I1210 12:35:25.873793 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-b85ffb7d4-zq54p"]
Dec 10 12:35:25 crc kubenswrapper[4689]: I1210 12:35:25.929834 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6708e3ec-a080-4f8d-a9c5-2821ea678717-internal-tls-certs\") pod \"placement-b85ffb7d4-zq54p\" (UID: \"6708e3ec-a080-4f8d-a9c5-2821ea678717\") " pod="openstack/placement-b85ffb7d4-zq54p"
Dec 10 12:35:25 crc kubenswrapper[4689]: I1210 12:35:25.930090 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6708e3ec-a080-4f8d-a9c5-2821ea678717-public-tls-certs\") pod \"placement-b85ffb7d4-zq54p\" (UID: \"6708e3ec-a080-4f8d-a9c5-2821ea678717\") " pod="openstack/placement-b85ffb7d4-zq54p"
Dec 10 12:35:25 crc kubenswrapper[4689]:
I1210 12:35:25.930149 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6708e3ec-a080-4f8d-a9c5-2821ea678717-combined-ca-bundle\") pod \"placement-b85ffb7d4-zq54p\" (UID: \"6708e3ec-a080-4f8d-a9c5-2821ea678717\") " pod="openstack/placement-b85ffb7d4-zq54p" Dec 10 12:35:25 crc kubenswrapper[4689]: I1210 12:35:25.930201 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6708e3ec-a080-4f8d-a9c5-2821ea678717-config-data\") pod \"placement-b85ffb7d4-zq54p\" (UID: \"6708e3ec-a080-4f8d-a9c5-2821ea678717\") " pod="openstack/placement-b85ffb7d4-zq54p" Dec 10 12:35:25 crc kubenswrapper[4689]: I1210 12:35:25.930220 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6708e3ec-a080-4f8d-a9c5-2821ea678717-logs\") pod \"placement-b85ffb7d4-zq54p\" (UID: \"6708e3ec-a080-4f8d-a9c5-2821ea678717\") " pod="openstack/placement-b85ffb7d4-zq54p" Dec 10 12:35:25 crc kubenswrapper[4689]: I1210 12:35:25.930240 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6708e3ec-a080-4f8d-a9c5-2821ea678717-scripts\") pod \"placement-b85ffb7d4-zq54p\" (UID: \"6708e3ec-a080-4f8d-a9c5-2821ea678717\") " pod="openstack/placement-b85ffb7d4-zq54p" Dec 10 12:35:25 crc kubenswrapper[4689]: I1210 12:35:25.930267 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sf8kh\" (UniqueName: \"kubernetes.io/projected/6708e3ec-a080-4f8d-a9c5-2821ea678717-kube-api-access-sf8kh\") pod \"placement-b85ffb7d4-zq54p\" (UID: \"6708e3ec-a080-4f8d-a9c5-2821ea678717\") " pod="openstack/placement-b85ffb7d4-zq54p" Dec 10 12:35:26 crc kubenswrapper[4689]: I1210 12:35:26.031884 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sf8kh\" (UniqueName: \"kubernetes.io/projected/6708e3ec-a080-4f8d-a9c5-2821ea678717-kube-api-access-sf8kh\") pod \"placement-b85ffb7d4-zq54p\" (UID: \"6708e3ec-a080-4f8d-a9c5-2821ea678717\") " pod="openstack/placement-b85ffb7d4-zq54p" Dec 10 12:35:26 crc kubenswrapper[4689]: I1210 12:35:26.031987 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6708e3ec-a080-4f8d-a9c5-2821ea678717-internal-tls-certs\") pod \"placement-b85ffb7d4-zq54p\" (UID: \"6708e3ec-a080-4f8d-a9c5-2821ea678717\") " pod="openstack/placement-b85ffb7d4-zq54p" Dec 10 12:35:26 crc kubenswrapper[4689]: I1210 12:35:26.032010 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6708e3ec-a080-4f8d-a9c5-2821ea678717-public-tls-certs\") pod \"placement-b85ffb7d4-zq54p\" (UID: \"6708e3ec-a080-4f8d-a9c5-2821ea678717\") " pod="openstack/placement-b85ffb7d4-zq54p" Dec 10 12:35:26 crc kubenswrapper[4689]: I1210 12:35:26.032054 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6708e3ec-a080-4f8d-a9c5-2821ea678717-combined-ca-bundle\") pod \"placement-b85ffb7d4-zq54p\" (UID: \"6708e3ec-a080-4f8d-a9c5-2821ea678717\") " pod="openstack/placement-b85ffb7d4-zq54p" Dec 10 12:35:26 crc kubenswrapper[4689]: I1210 
12:35:26.032112 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6708e3ec-a080-4f8d-a9c5-2821ea678717-config-data\") pod \"placement-b85ffb7d4-zq54p\" (UID: \"6708e3ec-a080-4f8d-a9c5-2821ea678717\") " pod="openstack/placement-b85ffb7d4-zq54p" Dec 10 12:35:26 crc kubenswrapper[4689]: I1210 12:35:26.032143 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6708e3ec-a080-4f8d-a9c5-2821ea678717-logs\") pod \"placement-b85ffb7d4-zq54p\" (UID: \"6708e3ec-a080-4f8d-a9c5-2821ea678717\") " pod="openstack/placement-b85ffb7d4-zq54p" Dec 10 12:35:26 crc kubenswrapper[4689]: I1210 12:35:26.032164 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6708e3ec-a080-4f8d-a9c5-2821ea678717-scripts\") pod \"placement-b85ffb7d4-zq54p\" (UID: \"6708e3ec-a080-4f8d-a9c5-2821ea678717\") " pod="openstack/placement-b85ffb7d4-zq54p" Dec 10 12:35:26 crc kubenswrapper[4689]: I1210 12:35:26.034200 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6708e3ec-a080-4f8d-a9c5-2821ea678717-logs\") pod \"placement-b85ffb7d4-zq54p\" (UID: \"6708e3ec-a080-4f8d-a9c5-2821ea678717\") " pod="openstack/placement-b85ffb7d4-zq54p" Dec 10 12:35:26 crc kubenswrapper[4689]: I1210 12:35:26.035937 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6708e3ec-a080-4f8d-a9c5-2821ea678717-scripts\") pod \"placement-b85ffb7d4-zq54p\" (UID: \"6708e3ec-a080-4f8d-a9c5-2821ea678717\") " pod="openstack/placement-b85ffb7d4-zq54p" Dec 10 12:35:26 crc kubenswrapper[4689]: I1210 12:35:26.036115 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6708e3ec-a080-4f8d-a9c5-2821ea678717-public-tls-certs\") pod \"placement-b85ffb7d4-zq54p\" (UID: \"6708e3ec-a080-4f8d-a9c5-2821ea678717\") " pod="openstack/placement-b85ffb7d4-zq54p" Dec 10 12:35:26 crc kubenswrapper[4689]: I1210 12:35:26.037852 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6708e3ec-a080-4f8d-a9c5-2821ea678717-config-data\") pod \"placement-b85ffb7d4-zq54p\" (UID: \"6708e3ec-a080-4f8d-a9c5-2821ea678717\") " pod="openstack/placement-b85ffb7d4-zq54p" Dec 10 12:35:26 crc kubenswrapper[4689]: I1210 12:35:26.038250 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6708e3ec-a080-4f8d-a9c5-2821ea678717-combined-ca-bundle\") pod \"placement-b85ffb7d4-zq54p\" (UID: \"6708e3ec-a080-4f8d-a9c5-2821ea678717\") " pod="openstack/placement-b85ffb7d4-zq54p" Dec 10 12:35:26 crc kubenswrapper[4689]: I1210 12:35:26.038405 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6708e3ec-a080-4f8d-a9c5-2821ea678717-internal-tls-certs\") pod \"placement-b85ffb7d4-zq54p\" (UID: \"6708e3ec-a080-4f8d-a9c5-2821ea678717\") " pod="openstack/placement-b85ffb7d4-zq54p" Dec 10 12:35:26 crc kubenswrapper[4689]: I1210 12:35:26.051430 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sf8kh\" (UniqueName: \"kubernetes.io/projected/6708e3ec-a080-4f8d-a9c5-2821ea678717-kube-api-access-sf8kh\") pod 
\"placement-b85ffb7d4-zq54p\" (UID: \"6708e3ec-a080-4f8d-a9c5-2821ea678717\") " pod="openstack/placement-b85ffb7d4-zq54p" Dec 10 12:35:26 crc kubenswrapper[4689]: I1210 12:35:26.258104 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-b85ffb7d4-zq54p" Dec 10 12:35:26 crc kubenswrapper[4689]: I1210 12:35:26.356221 4689 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-7xjms" Dec 10 12:35:26 crc kubenswrapper[4689]: I1210 12:35:26.370862 4689 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-kb47t" Dec 10 12:35:26 crc kubenswrapper[4689]: I1210 12:35:26.539192 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/79d420d1-6ba7-4cf2-9e13-b046a65d378c-config\") pod \"79d420d1-6ba7-4cf2-9e13-b046a65d378c\" (UID: \"79d420d1-6ba7-4cf2-9e13-b046a65d378c\") " Dec 10 12:35:26 crc kubenswrapper[4689]: I1210 12:35:26.539253 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/ae5c2d84-08cd-462f-b8d5-ba416353f365-db-sync-config-data\") pod \"ae5c2d84-08cd-462f-b8d5-ba416353f365\" (UID: \"ae5c2d84-08cd-462f-b8d5-ba416353f365\") " Dec 10 12:35:26 crc kubenswrapper[4689]: I1210 12:35:26.539332 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae5c2d84-08cd-462f-b8d5-ba416353f365-combined-ca-bundle\") pod \"ae5c2d84-08cd-462f-b8d5-ba416353f365\" (UID: \"ae5c2d84-08cd-462f-b8d5-ba416353f365\") " Dec 10 12:35:26 crc kubenswrapper[4689]: I1210 12:35:26.539377 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6t2p7\" (UniqueName: \"kubernetes.io/projected/ae5c2d84-08cd-462f-b8d5-ba416353f365-kube-api-access-6t2p7\") pod \"ae5c2d84-08cd-462f-b8d5-ba416353f365\" (UID: \"ae5c2d84-08cd-462f-b8d5-ba416353f365\") " Dec 10 12:35:26 crc kubenswrapper[4689]: I1210 12:35:26.539422 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/79d420d1-6ba7-4cf2-9e13-b046a65d378c-combined-ca-bundle\") pod \"79d420d1-6ba7-4cf2-9e13-b046a65d378c\" (UID: \"79d420d1-6ba7-4cf2-9e13-b046a65d378c\") " Dec 10 12:35:26 crc kubenswrapper[4689]: I1210 12:35:26.539444 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hrqxh\" (UniqueName: \"kubernetes.io/projected/79d420d1-6ba7-4cf2-9e13-b046a65d378c-kube-api-access-hrqxh\") pod \"79d420d1-6ba7-4cf2-9e13-b046a65d378c\" (UID: \"79d420d1-6ba7-4cf2-9e13-b046a65d378c\") " Dec 10 12:35:26 crc kubenswrapper[4689]: I1210 12:35:26.545755 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ae5c2d84-08cd-462f-b8d5-ba416353f365-kube-api-access-6t2p7" (OuterVolumeSpecName: "kube-api-access-6t2p7") pod "ae5c2d84-08cd-462f-b8d5-ba416353f365" (UID: "ae5c2d84-08cd-462f-b8d5-ba416353f365"). InnerVolumeSpecName "kube-api-access-6t2p7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 12:35:26 crc kubenswrapper[4689]: I1210 12:35:26.546087 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ae5c2d84-08cd-462f-b8d5-ba416353f365-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "ae5c2d84-08cd-462f-b8d5-ba416353f365" (UID: "ae5c2d84-08cd-462f-b8d5-ba416353f365"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 12:35:26 crc kubenswrapper[4689]: I1210 12:35:26.555317 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/79d420d1-6ba7-4cf2-9e13-b046a65d378c-kube-api-access-hrqxh" (OuterVolumeSpecName: "kube-api-access-hrqxh") pod "79d420d1-6ba7-4cf2-9e13-b046a65d378c" (UID: "79d420d1-6ba7-4cf2-9e13-b046a65d378c"). InnerVolumeSpecName "kube-api-access-hrqxh". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 12:35:26 crc kubenswrapper[4689]: I1210 12:35:26.567214 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/79d420d1-6ba7-4cf2-9e13-b046a65d378c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "79d420d1-6ba7-4cf2-9e13-b046a65d378c" (UID: "79d420d1-6ba7-4cf2-9e13-b046a65d378c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 12:35:26 crc kubenswrapper[4689]: I1210 12:35:26.572359 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ae5c2d84-08cd-462f-b8d5-ba416353f365-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ae5c2d84-08cd-462f-b8d5-ba416353f365" (UID: "ae5c2d84-08cd-462f-b8d5-ba416353f365"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 12:35:26 crc kubenswrapper[4689]: I1210 12:35:26.580275 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/79d420d1-6ba7-4cf2-9e13-b046a65d378c-config" (OuterVolumeSpecName: "config") pod "79d420d1-6ba7-4cf2-9e13-b046a65d378c" (UID: "79d420d1-6ba7-4cf2-9e13-b046a65d378c"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 12:35:26 crc kubenswrapper[4689]: I1210 12:35:26.641508 4689 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/79d420d1-6ba7-4cf2-9e13-b046a65d378c-config\") on node \"crc\" DevicePath \"\"" Dec 10 12:35:26 crc kubenswrapper[4689]: I1210 12:35:26.641556 4689 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/ae5c2d84-08cd-462f-b8d5-ba416353f365-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Dec 10 12:35:26 crc kubenswrapper[4689]: I1210 12:35:26.641612 4689 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae5c2d84-08cd-462f-b8d5-ba416353f365-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 10 12:35:26 crc kubenswrapper[4689]: I1210 12:35:26.641626 4689 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6t2p7\" (UniqueName: \"kubernetes.io/projected/ae5c2d84-08cd-462f-b8d5-ba416353f365-kube-api-access-6t2p7\") on node \"crc\" DevicePath \"\"" Dec 10 12:35:26 crc kubenswrapper[4689]: I1210 12:35:26.641639 4689 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/79d420d1-6ba7-4cf2-9e13-b046a65d378c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 10 12:35:26 crc kubenswrapper[4689]: I1210 12:35:26.641650 4689 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hrqxh\" (UniqueName: \"kubernetes.io/projected/79d420d1-6ba7-4cf2-9e13-b046a65d378c-kube-api-access-hrqxh\") on node \"crc\" DevicePath \"\"" Dec 10 12:35:26 crc kubenswrapper[4689]: I1210 12:35:26.705946 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-b85ffb7d4-zq54p"] Dec 10 12:35:26 crc kubenswrapper[4689]: W1210 12:35:26.717074 4689 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6708e3ec_a080_4f8d_a9c5_2821ea678717.slice/crio-0b12b9b82af68c6b9a62b026d3a1117078b1108fe994cdc000b35d24f9908178 WatchSource:0}: Error finding container 0b12b9b82af68c6b9a62b026d3a1117078b1108fe994cdc000b35d24f9908178: Status 404 returned error can't find the container with id 0b12b9b82af68c6b9a62b026d3a1117078b1108fe994cdc000b35d24f9908178 Dec 10 12:35:26 crc kubenswrapper[4689]: I1210 12:35:26.771555 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-7xjms" event={"ID":"ae5c2d84-08cd-462f-b8d5-ba416353f365","Type":"ContainerDied","Data":"fd0e9f96f428a8ece5e066305085dee4c12d92dd9614b5415c98d83ab2881027"} Dec 10 12:35:26 crc kubenswrapper[4689]: I1210 12:35:26.771607 4689 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fd0e9f96f428a8ece5e066305085dee4c12d92dd9614b5415c98d83ab2881027" Dec 10 12:35:26 crc kubenswrapper[4689]: I1210 12:35:26.771566 4689 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-7xjms" Dec 10 12:35:26 crc kubenswrapper[4689]: I1210 12:35:26.773456 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-kb47t" event={"ID":"79d420d1-6ba7-4cf2-9e13-b046a65d378c","Type":"ContainerDied","Data":"8c7205ffcef8e458ff85def13cfeea6941176a2fab0ddae26d4242fc39552cc9"} Dec 10 12:35:26 crc kubenswrapper[4689]: I1210 12:35:26.773506 4689 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8c7205ffcef8e458ff85def13cfeea6941176a2fab0ddae26d4242fc39552cc9" Dec 10 12:35:26 crc kubenswrapper[4689]: I1210 12:35:26.773574 4689 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-kb47t" Dec 10 12:35:26 crc kubenswrapper[4689]: I1210 12:35:26.779551 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-b85ffb7d4-zq54p" event={"ID":"6708e3ec-a080-4f8d-a9c5-2821ea678717","Type":"ContainerStarted","Data":"0b12b9b82af68c6b9a62b026d3a1117078b1108fe994cdc000b35d24f9908178"} Dec 10 12:35:26 crc kubenswrapper[4689]: I1210 12:35:26.782895 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-db-sync-jz2g4" event={"ID":"574d5244-06f3-49f9-b8b8-93bd57d4fc35","Type":"ContainerStarted","Data":"11599a88a6ff52bec2ccbea1860cf6463f5692cad44368c00d008c96f2b97990"} Dec 10 12:35:26 crc kubenswrapper[4689]: I1210 12:35:26.830605 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ironic-db-sync-jz2g4" podStartSLOduration=18.876883091 podStartE2EDuration="25.830580505s" podCreationTimestamp="2025-12-10 12:35:01 +0000 UTC" firstStartedPulling="2025-12-10 12:35:17.722779086 +0000 UTC m=+1185.510860224" lastFinishedPulling="2025-12-10 12:35:24.6764765 +0000 UTC m=+1192.464557638" observedRunningTime="2025-12-10 12:35:26.820468195 +0000 UTC m=+1194.608549353" watchObservedRunningTime="2025-12-10 12:35:26.830580505 +0000 UTC m=+1194.618661653" Dec 10 12:35:26 crc kubenswrapper[4689]: I1210 12:35:26.972238 4689 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Dec 10 12:35:26 crc kubenswrapper[4689]: I1210 12:35:26.972763 4689 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Dec 10 12:35:27 crc kubenswrapper[4689]: I1210 12:35:27.023248 4689 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Dec 10 12:35:27 crc kubenswrapper[4689]: I1210 12:35:27.031557 4689 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Dec 10 12:35:27 crc kubenswrapper[4689]: I1210 12:35:27.129475 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-84b966f6c9-vbprm"] Dec 10 12:35:27 crc kubenswrapper[4689]: E1210 12:35:27.129988 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ae5c2d84-08cd-462f-b8d5-ba416353f365" containerName="barbican-db-sync" Dec 10 12:35:27 crc kubenswrapper[4689]: I1210 12:35:27.130002 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="ae5c2d84-08cd-462f-b8d5-ba416353f365" containerName="barbican-db-sync" Dec 10 12:35:27 crc kubenswrapper[4689]: E1210 12:35:27.130025 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="79d420d1-6ba7-4cf2-9e13-b046a65d378c" containerName="neutron-db-sync" Dec 10 12:35:27 crc kubenswrapper[4689]: I1210 
Dec 10 12:35:27 crc kubenswrapper[4689]: I1210 12:35:27.130031 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="79d420d1-6ba7-4cf2-9e13-b046a65d378c" containerName="neutron-db-sync"
Dec 10 12:35:27 crc kubenswrapper[4689]: I1210 12:35:27.130197 4689 memory_manager.go:354] "RemoveStaleState removing state" podUID="ae5c2d84-08cd-462f-b8d5-ba416353f365" containerName="barbican-db-sync"
Dec 10 12:35:27 crc kubenswrapper[4689]: I1210 12:35:27.130208 4689 memory_manager.go:354] "RemoveStaleState removing state" podUID="79d420d1-6ba7-4cf2-9e13-b046a65d378c" containerName="neutron-db-sync"
Dec 10 12:35:27 crc kubenswrapper[4689]: I1210 12:35:27.131152 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-84b966f6c9-vbprm"
Dec 10 12:35:27 crc kubenswrapper[4689]: I1210 12:35:27.201460 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-worker-6b8cb45d75-89dpw"]
Dec 10 12:35:27 crc kubenswrapper[4689]: I1210 12:35:27.203092 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-6b8cb45d75-89dpw"
Dec 10 12:35:27 crc kubenswrapper[4689]: I1210 12:35:27.224459 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-worker-config-data"
Dec 10 12:35:27 crc kubenswrapper[4689]: I1210 12:35:27.224679 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data"
Dec 10 12:35:27 crc kubenswrapper[4689]: I1210 12:35:27.224826 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-t4vth"
Dec 10 12:35:27 crc kubenswrapper[4689]: I1210 12:35:27.269044 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-84b966f6c9-vbprm"]
Dec 10 12:35:27 crc kubenswrapper[4689]: I1210 12:35:27.278694 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-55656d7776-js9xr"]
Dec 10 12:35:27 crc kubenswrapper[4689]: I1210 12:35:27.280467 4689 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openstack/neutron-55656d7776-js9xr" Dec 10 12:35:27 crc kubenswrapper[4689]: I1210 12:35:27.296654 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-hlp4d" Dec 10 12:35:27 crc kubenswrapper[4689]: I1210 12:35:27.296993 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-ovndbs" Dec 10 12:35:27 crc kubenswrapper[4689]: I1210 12:35:27.299701 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Dec 10 12:35:27 crc kubenswrapper[4689]: I1210 12:35:27.300056 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Dec 10 12:35:27 crc kubenswrapper[4689]: I1210 12:35:27.313141 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/476c7a2b-2994-4383-b811-5c1bd0e7999e-ovsdbserver-sb\") pod \"dnsmasq-dns-84b966f6c9-vbprm\" (UID: \"476c7a2b-2994-4383-b811-5c1bd0e7999e\") " pod="openstack/dnsmasq-dns-84b966f6c9-vbprm" Dec 10 12:35:27 crc kubenswrapper[4689]: I1210 12:35:27.314193 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/476c7a2b-2994-4383-b811-5c1bd0e7999e-config\") pod \"dnsmasq-dns-84b966f6c9-vbprm\" (UID: \"476c7a2b-2994-4383-b811-5c1bd0e7999e\") " pod="openstack/dnsmasq-dns-84b966f6c9-vbprm" Dec 10 12:35:27 crc kubenswrapper[4689]: I1210 12:35:27.314303 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/476c7a2b-2994-4383-b811-5c1bd0e7999e-dns-swift-storage-0\") pod \"dnsmasq-dns-84b966f6c9-vbprm\" (UID: \"476c7a2b-2994-4383-b811-5c1bd0e7999e\") " pod="openstack/dnsmasq-dns-84b966f6c9-vbprm" Dec 10 12:35:27 crc kubenswrapper[4689]: I1210 12:35:27.314474 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/476c7a2b-2994-4383-b811-5c1bd0e7999e-dns-svc\") pod \"dnsmasq-dns-84b966f6c9-vbprm\" (UID: \"476c7a2b-2994-4383-b811-5c1bd0e7999e\") " pod="openstack/dnsmasq-dns-84b966f6c9-vbprm" Dec 10 12:35:27 crc kubenswrapper[4689]: I1210 12:35:27.314506 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/476c7a2b-2994-4383-b811-5c1bd0e7999e-ovsdbserver-nb\") pod \"dnsmasq-dns-84b966f6c9-vbprm\" (UID: \"476c7a2b-2994-4383-b811-5c1bd0e7999e\") " pod="openstack/dnsmasq-dns-84b966f6c9-vbprm" Dec 10 12:35:27 crc kubenswrapper[4689]: I1210 12:35:27.316962 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dbp2z\" (UniqueName: \"kubernetes.io/projected/476c7a2b-2994-4383-b811-5c1bd0e7999e-kube-api-access-dbp2z\") pod \"dnsmasq-dns-84b966f6c9-vbprm\" (UID: \"476c7a2b-2994-4383-b811-5c1bd0e7999e\") " pod="openstack/dnsmasq-dns-84b966f6c9-vbprm" Dec 10 12:35:27 crc kubenswrapper[4689]: I1210 12:35:27.348298 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-6b8cb45d75-89dpw"] Dec 10 12:35:27 crc kubenswrapper[4689]: I1210 12:35:27.369886 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-keystone-listener-7869fbbf6d-5mmw9"] Dec 10 12:35:27 crc 
kubenswrapper[4689]: I1210 12:35:27.372145 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-7869fbbf6d-5mmw9" Dec 10 12:35:27 crc kubenswrapper[4689]: I1210 12:35:27.377383 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-keystone-listener-config-data" Dec 10 12:35:27 crc kubenswrapper[4689]: I1210 12:35:27.397027 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-55656d7776-js9xr"] Dec 10 12:35:27 crc kubenswrapper[4689]: I1210 12:35:27.418812 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/8fe1b3ff-4655-40d9-94cf-9d99dd1066db-config\") pod \"neutron-55656d7776-js9xr\" (UID: \"8fe1b3ff-4655-40d9-94cf-9d99dd1066db\") " pod="openstack/neutron-55656d7776-js9xr" Dec 10 12:35:27 crc kubenswrapper[4689]: I1210 12:35:27.418854 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5a9faa39-da67-436d-884a-06d93286633e-logs\") pod \"barbican-worker-6b8cb45d75-89dpw\" (UID: \"5a9faa39-da67-436d-884a-06d93286633e\") " pod="openstack/barbican-worker-6b8cb45d75-89dpw" Dec 10 12:35:27 crc kubenswrapper[4689]: I1210 12:35:27.418881 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/476c7a2b-2994-4383-b811-5c1bd0e7999e-dns-swift-storage-0\") pod \"dnsmasq-dns-84b966f6c9-vbprm\" (UID: \"476c7a2b-2994-4383-b811-5c1bd0e7999e\") " pod="openstack/dnsmasq-dns-84b966f6c9-vbprm" Dec 10 12:35:27 crc kubenswrapper[4689]: I1210 12:35:27.419071 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5a9faa39-da67-436d-884a-06d93286633e-combined-ca-bundle\") pod \"barbican-worker-6b8cb45d75-89dpw\" (UID: \"5a9faa39-da67-436d-884a-06d93286633e\") " pod="openstack/barbican-worker-6b8cb45d75-89dpw" Dec 10 12:35:27 crc kubenswrapper[4689]: I1210 12:35:27.419200 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5a9faa39-da67-436d-884a-06d93286633e-config-data-custom\") pod \"barbican-worker-6b8cb45d75-89dpw\" (UID: \"5a9faa39-da67-436d-884a-06d93286633e\") " pod="openstack/barbican-worker-6b8cb45d75-89dpw" Dec 10 12:35:27 crc kubenswrapper[4689]: I1210 12:35:27.419228 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5a9faa39-da67-436d-884a-06d93286633e-config-data\") pod \"barbican-worker-6b8cb45d75-89dpw\" (UID: \"5a9faa39-da67-436d-884a-06d93286633e\") " pod="openstack/barbican-worker-6b8cb45d75-89dpw" Dec 10 12:35:27 crc kubenswrapper[4689]: I1210 12:35:27.419246 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cg8lv\" (UniqueName: \"kubernetes.io/projected/8fe1b3ff-4655-40d9-94cf-9d99dd1066db-kube-api-access-cg8lv\") pod \"neutron-55656d7776-js9xr\" (UID: \"8fe1b3ff-4655-40d9-94cf-9d99dd1066db\") " pod="openstack/neutron-55656d7776-js9xr" Dec 10 12:35:27 crc kubenswrapper[4689]: I1210 12:35:27.419291 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/476c7a2b-2994-4383-b811-5c1bd0e7999e-dns-svc\") pod \"dnsmasq-dns-84b966f6c9-vbprm\" (UID: \"476c7a2b-2994-4383-b811-5c1bd0e7999e\") " pod="openstack/dnsmasq-dns-84b966f6c9-vbprm" Dec 10 12:35:27 crc kubenswrapper[4689]: I1210 12:35:27.419318 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/476c7a2b-2994-4383-b811-5c1bd0e7999e-ovsdbserver-nb\") pod \"dnsmasq-dns-84b966f6c9-vbprm\" (UID: \"476c7a2b-2994-4383-b811-5c1bd0e7999e\") " pod="openstack/dnsmasq-dns-84b966f6c9-vbprm" Dec 10 12:35:27 crc kubenswrapper[4689]: I1210 12:35:27.419374 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vxx8q\" (UniqueName: \"kubernetes.io/projected/5a9faa39-da67-436d-884a-06d93286633e-kube-api-access-vxx8q\") pod \"barbican-worker-6b8cb45d75-89dpw\" (UID: \"5a9faa39-da67-436d-884a-06d93286633e\") " pod="openstack/barbican-worker-6b8cb45d75-89dpw" Dec 10 12:35:27 crc kubenswrapper[4689]: I1210 12:35:27.419469 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dbp2z\" (UniqueName: \"kubernetes.io/projected/476c7a2b-2994-4383-b811-5c1bd0e7999e-kube-api-access-dbp2z\") pod \"dnsmasq-dns-84b966f6c9-vbprm\" (UID: \"476c7a2b-2994-4383-b811-5c1bd0e7999e\") " pod="openstack/dnsmasq-dns-84b966f6c9-vbprm" Dec 10 12:35:27 crc kubenswrapper[4689]: I1210 12:35:27.419494 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/8fe1b3ff-4655-40d9-94cf-9d99dd1066db-httpd-config\") pod \"neutron-55656d7776-js9xr\" (UID: \"8fe1b3ff-4655-40d9-94cf-9d99dd1066db\") " pod="openstack/neutron-55656d7776-js9xr" Dec 10 12:35:27 crc kubenswrapper[4689]: I1210 12:35:27.419534 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/476c7a2b-2994-4383-b811-5c1bd0e7999e-ovsdbserver-sb\") pod \"dnsmasq-dns-84b966f6c9-vbprm\" (UID: \"476c7a2b-2994-4383-b811-5c1bd0e7999e\") " pod="openstack/dnsmasq-dns-84b966f6c9-vbprm" Dec 10 12:35:27 crc kubenswrapper[4689]: I1210 12:35:27.419562 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/476c7a2b-2994-4383-b811-5c1bd0e7999e-config\") pod \"dnsmasq-dns-84b966f6c9-vbprm\" (UID: \"476c7a2b-2994-4383-b811-5c1bd0e7999e\") " pod="openstack/dnsmasq-dns-84b966f6c9-vbprm" Dec 10 12:35:27 crc kubenswrapper[4689]: I1210 12:35:27.419578 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8fe1b3ff-4655-40d9-94cf-9d99dd1066db-combined-ca-bundle\") pod \"neutron-55656d7776-js9xr\" (UID: \"8fe1b3ff-4655-40d9-94cf-9d99dd1066db\") " pod="openstack/neutron-55656d7776-js9xr" Dec 10 12:35:27 crc kubenswrapper[4689]: I1210 12:35:27.419620 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/8fe1b3ff-4655-40d9-94cf-9d99dd1066db-ovndb-tls-certs\") pod \"neutron-55656d7776-js9xr\" (UID: \"8fe1b3ff-4655-40d9-94cf-9d99dd1066db\") " pod="openstack/neutron-55656d7776-js9xr" Dec 10 12:35:27 crc kubenswrapper[4689]: I1210 12:35:27.419660 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" 
(UniqueName: \"kubernetes.io/configmap/476c7a2b-2994-4383-b811-5c1bd0e7999e-dns-swift-storage-0\") pod \"dnsmasq-dns-84b966f6c9-vbprm\" (UID: \"476c7a2b-2994-4383-b811-5c1bd0e7999e\") " pod="openstack/dnsmasq-dns-84b966f6c9-vbprm" Dec 10 12:35:27 crc kubenswrapper[4689]: I1210 12:35:27.420425 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/476c7a2b-2994-4383-b811-5c1bd0e7999e-ovsdbserver-nb\") pod \"dnsmasq-dns-84b966f6c9-vbprm\" (UID: \"476c7a2b-2994-4383-b811-5c1bd0e7999e\") " pod="openstack/dnsmasq-dns-84b966f6c9-vbprm" Dec 10 12:35:27 crc kubenswrapper[4689]: I1210 12:35:27.420485 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-7869fbbf6d-5mmw9"] Dec 10 12:35:27 crc kubenswrapper[4689]: I1210 12:35:27.420592 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/476c7a2b-2994-4383-b811-5c1bd0e7999e-dns-svc\") pod \"dnsmasq-dns-84b966f6c9-vbprm\" (UID: \"476c7a2b-2994-4383-b811-5c1bd0e7999e\") " pod="openstack/dnsmasq-dns-84b966f6c9-vbprm" Dec 10 12:35:27 crc kubenswrapper[4689]: I1210 12:35:27.421536 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/476c7a2b-2994-4383-b811-5c1bd0e7999e-config\") pod \"dnsmasq-dns-84b966f6c9-vbprm\" (UID: \"476c7a2b-2994-4383-b811-5c1bd0e7999e\") " pod="openstack/dnsmasq-dns-84b966f6c9-vbprm" Dec 10 12:35:27 crc kubenswrapper[4689]: I1210 12:35:27.421659 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/476c7a2b-2994-4383-b811-5c1bd0e7999e-ovsdbserver-sb\") pod \"dnsmasq-dns-84b966f6c9-vbprm\" (UID: \"476c7a2b-2994-4383-b811-5c1bd0e7999e\") " pod="openstack/dnsmasq-dns-84b966f6c9-vbprm" Dec 10 12:35:27 crc kubenswrapper[4689]: I1210 12:35:27.428009 4689 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-84b966f6c9-vbprm"] Dec 10 12:35:27 crc kubenswrapper[4689]: E1210 12:35:27.428496 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[kube-api-access-dbp2z], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openstack/dnsmasq-dns-84b966f6c9-vbprm" podUID="476c7a2b-2994-4383-b811-5c1bd0e7999e" Dec 10 12:35:27 crc kubenswrapper[4689]: I1210 12:35:27.445961 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-75c8ddd69c-x4qc2"] Dec 10 12:35:27 crc kubenswrapper[4689]: I1210 12:35:27.447558 4689 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-75c8ddd69c-x4qc2" Dec 10 12:35:27 crc kubenswrapper[4689]: I1210 12:35:27.456418 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-75c8ddd69c-x4qc2"] Dec 10 12:35:27 crc kubenswrapper[4689]: I1210 12:35:27.463578 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dbp2z\" (UniqueName: \"kubernetes.io/projected/476c7a2b-2994-4383-b811-5c1bd0e7999e-kube-api-access-dbp2z\") pod \"dnsmasq-dns-84b966f6c9-vbprm\" (UID: \"476c7a2b-2994-4383-b811-5c1bd0e7999e\") " pod="openstack/dnsmasq-dns-84b966f6c9-vbprm" Dec 10 12:35:27 crc kubenswrapper[4689]: I1210 12:35:27.480713 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-7b98b8dd66-xfv7n"] Dec 10 12:35:27 crc kubenswrapper[4689]: I1210 12:35:27.482435 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-7b98b8dd66-xfv7n" Dec 10 12:35:27 crc kubenswrapper[4689]: I1210 12:35:27.489853 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-api-config-data" Dec 10 12:35:27 crc kubenswrapper[4689]: I1210 12:35:27.496840 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-7b98b8dd66-xfv7n"] Dec 10 12:35:27 crc kubenswrapper[4689]: I1210 12:35:27.523751 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/26140423-1fa8-498b-b1da-487de5d0635f-combined-ca-bundle\") pod \"barbican-api-7b98b8dd66-xfv7n\" (UID: \"26140423-1fa8-498b-b1da-487de5d0635f\") " pod="openstack/barbican-api-7b98b8dd66-xfv7n" Dec 10 12:35:27 crc kubenswrapper[4689]: I1210 12:35:27.524242 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8fe1b3ff-4655-40d9-94cf-9d99dd1066db-combined-ca-bundle\") pod \"neutron-55656d7776-js9xr\" (UID: \"8fe1b3ff-4655-40d9-94cf-9d99dd1066db\") " pod="openstack/neutron-55656d7776-js9xr" Dec 10 12:35:27 crc kubenswrapper[4689]: I1210 12:35:27.524346 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1d2effe7-e7d4-417b-8446-6466eda6c94c-dns-svc\") pod \"dnsmasq-dns-75c8ddd69c-x4qc2\" (UID: \"1d2effe7-e7d4-417b-8446-6466eda6c94c\") " pod="openstack/dnsmasq-dns-75c8ddd69c-x4qc2" Dec 10 12:35:27 crc kubenswrapper[4689]: I1210 12:35:27.524463 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/26140423-1fa8-498b-b1da-487de5d0635f-config-data-custom\") pod \"barbican-api-7b98b8dd66-xfv7n\" (UID: \"26140423-1fa8-498b-b1da-487de5d0635f\") " pod="openstack/barbican-api-7b98b8dd66-xfv7n" Dec 10 12:35:27 crc kubenswrapper[4689]: I1210 12:35:27.524588 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/8fe1b3ff-4655-40d9-94cf-9d99dd1066db-ovndb-tls-certs\") pod \"neutron-55656d7776-js9xr\" (UID: \"8fe1b3ff-4655-40d9-94cf-9d99dd1066db\") " pod="openstack/neutron-55656d7776-js9xr" Dec 10 12:35:27 crc kubenswrapper[4689]: I1210 12:35:27.524774 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: 
\"kubernetes.io/configmap/1d2effe7-e7d4-417b-8446-6466eda6c94c-ovsdbserver-nb\") pod \"dnsmasq-dns-75c8ddd69c-x4qc2\" (UID: \"1d2effe7-e7d4-417b-8446-6466eda6c94c\") " pod="openstack/dnsmasq-dns-75c8ddd69c-x4qc2" Dec 10 12:35:27 crc kubenswrapper[4689]: I1210 12:35:27.524874 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sgf7m\" (UniqueName: \"kubernetes.io/projected/0ecdef57-40ee-46b4-a739-3f8fd2354018-kube-api-access-sgf7m\") pod \"barbican-keystone-listener-7869fbbf6d-5mmw9\" (UID: \"0ecdef57-40ee-46b4-a739-3f8fd2354018\") " pod="openstack/barbican-keystone-listener-7869fbbf6d-5mmw9" Dec 10 12:35:27 crc kubenswrapper[4689]: I1210 12:35:27.525008 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/8fe1b3ff-4655-40d9-94cf-9d99dd1066db-config\") pod \"neutron-55656d7776-js9xr\" (UID: \"8fe1b3ff-4655-40d9-94cf-9d99dd1066db\") " pod="openstack/neutron-55656d7776-js9xr" Dec 10 12:35:27 crc kubenswrapper[4689]: I1210 12:35:27.525096 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5a9faa39-da67-436d-884a-06d93286633e-logs\") pod \"barbican-worker-6b8cb45d75-89dpw\" (UID: \"5a9faa39-da67-436d-884a-06d93286633e\") " pod="openstack/barbican-worker-6b8cb45d75-89dpw" Dec 10 12:35:27 crc kubenswrapper[4689]: I1210 12:35:27.525213 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5a9faa39-da67-436d-884a-06d93286633e-combined-ca-bundle\") pod \"barbican-worker-6b8cb45d75-89dpw\" (UID: \"5a9faa39-da67-436d-884a-06d93286633e\") " pod="openstack/barbican-worker-6b8cb45d75-89dpw" Dec 10 12:35:27 crc kubenswrapper[4689]: I1210 12:35:27.525291 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/26140423-1fa8-498b-b1da-487de5d0635f-logs\") pod \"barbican-api-7b98b8dd66-xfv7n\" (UID: \"26140423-1fa8-498b-b1da-487de5d0635f\") " pod="openstack/barbican-api-7b98b8dd66-xfv7n" Dec 10 12:35:27 crc kubenswrapper[4689]: I1210 12:35:27.525372 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1d2effe7-e7d4-417b-8446-6466eda6c94c-config\") pod \"dnsmasq-dns-75c8ddd69c-x4qc2\" (UID: \"1d2effe7-e7d4-417b-8446-6466eda6c94c\") " pod="openstack/dnsmasq-dns-75c8ddd69c-x4qc2" Dec 10 12:35:27 crc kubenswrapper[4689]: I1210 12:35:27.525437 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/26140423-1fa8-498b-b1da-487de5d0635f-config-data\") pod \"barbican-api-7b98b8dd66-xfv7n\" (UID: \"26140423-1fa8-498b-b1da-487de5d0635f\") " pod="openstack/barbican-api-7b98b8dd66-xfv7n" Dec 10 12:35:27 crc kubenswrapper[4689]: I1210 12:35:27.525509 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5a9faa39-da67-436d-884a-06d93286633e-config-data\") pod \"barbican-worker-6b8cb45d75-89dpw\" (UID: \"5a9faa39-da67-436d-884a-06d93286633e\") " pod="openstack/barbican-worker-6b8cb45d75-89dpw" Dec 10 12:35:27 crc kubenswrapper[4689]: I1210 12:35:27.525649 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" 
(UniqueName: \"kubernetes.io/secret/5a9faa39-da67-436d-884a-06d93286633e-config-data-custom\") pod \"barbican-worker-6b8cb45d75-89dpw\" (UID: \"5a9faa39-da67-436d-884a-06d93286633e\") " pod="openstack/barbican-worker-6b8cb45d75-89dpw" Dec 10 12:35:27 crc kubenswrapper[4689]: I1210 12:35:27.525720 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cg8lv\" (UniqueName: \"kubernetes.io/projected/8fe1b3ff-4655-40d9-94cf-9d99dd1066db-kube-api-access-cg8lv\") pod \"neutron-55656d7776-js9xr\" (UID: \"8fe1b3ff-4655-40d9-94cf-9d99dd1066db\") " pod="openstack/neutron-55656d7776-js9xr" Dec 10 12:35:27 crc kubenswrapper[4689]: I1210 12:35:27.525785 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6mqgq\" (UniqueName: \"kubernetes.io/projected/1d2effe7-e7d4-417b-8446-6466eda6c94c-kube-api-access-6mqgq\") pod \"dnsmasq-dns-75c8ddd69c-x4qc2\" (UID: \"1d2effe7-e7d4-417b-8446-6466eda6c94c\") " pod="openstack/dnsmasq-dns-75c8ddd69c-x4qc2" Dec 10 12:35:27 crc kubenswrapper[4689]: I1210 12:35:27.525848 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0ecdef57-40ee-46b4-a739-3f8fd2354018-config-data\") pod \"barbican-keystone-listener-7869fbbf6d-5mmw9\" (UID: \"0ecdef57-40ee-46b4-a739-3f8fd2354018\") " pod="openstack/barbican-keystone-listener-7869fbbf6d-5mmw9" Dec 10 12:35:27 crc kubenswrapper[4689]: I1210 12:35:27.525909 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0ecdef57-40ee-46b4-a739-3f8fd2354018-logs\") pod \"barbican-keystone-listener-7869fbbf6d-5mmw9\" (UID: \"0ecdef57-40ee-46b4-a739-3f8fd2354018\") " pod="openstack/barbican-keystone-listener-7869fbbf6d-5mmw9" Dec 10 12:35:27 crc kubenswrapper[4689]: I1210 12:35:27.526032 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0ecdef57-40ee-46b4-a739-3f8fd2354018-config-data-custom\") pod \"barbican-keystone-listener-7869fbbf6d-5mmw9\" (UID: \"0ecdef57-40ee-46b4-a739-3f8fd2354018\") " pod="openstack/barbican-keystone-listener-7869fbbf6d-5mmw9" Dec 10 12:35:27 crc kubenswrapper[4689]: I1210 12:35:27.526104 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0ecdef57-40ee-46b4-a739-3f8fd2354018-combined-ca-bundle\") pod \"barbican-keystone-listener-7869fbbf6d-5mmw9\" (UID: \"0ecdef57-40ee-46b4-a739-3f8fd2354018\") " pod="openstack/barbican-keystone-listener-7869fbbf6d-5mmw9" Dec 10 12:35:27 crc kubenswrapper[4689]: I1210 12:35:27.526190 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bqk94\" (UniqueName: \"kubernetes.io/projected/26140423-1fa8-498b-b1da-487de5d0635f-kube-api-access-bqk94\") pod \"barbican-api-7b98b8dd66-xfv7n\" (UID: \"26140423-1fa8-498b-b1da-487de5d0635f\") " pod="openstack/barbican-api-7b98b8dd66-xfv7n" Dec 10 12:35:27 crc kubenswrapper[4689]: I1210 12:35:27.526299 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1d2effe7-e7d4-417b-8446-6466eda6c94c-ovsdbserver-sb\") pod \"dnsmasq-dns-75c8ddd69c-x4qc2\" (UID: 
\"1d2effe7-e7d4-417b-8446-6466eda6c94c\") " pod="openstack/dnsmasq-dns-75c8ddd69c-x4qc2" Dec 10 12:35:27 crc kubenswrapper[4689]: I1210 12:35:27.526395 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vxx8q\" (UniqueName: \"kubernetes.io/projected/5a9faa39-da67-436d-884a-06d93286633e-kube-api-access-vxx8q\") pod \"barbican-worker-6b8cb45d75-89dpw\" (UID: \"5a9faa39-da67-436d-884a-06d93286633e\") " pod="openstack/barbican-worker-6b8cb45d75-89dpw" Dec 10 12:35:27 crc kubenswrapper[4689]: I1210 12:35:27.526540 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/8fe1b3ff-4655-40d9-94cf-9d99dd1066db-httpd-config\") pod \"neutron-55656d7776-js9xr\" (UID: \"8fe1b3ff-4655-40d9-94cf-9d99dd1066db\") " pod="openstack/neutron-55656d7776-js9xr" Dec 10 12:35:27 crc kubenswrapper[4689]: I1210 12:35:27.526630 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1d2effe7-e7d4-417b-8446-6466eda6c94c-dns-swift-storage-0\") pod \"dnsmasq-dns-75c8ddd69c-x4qc2\" (UID: \"1d2effe7-e7d4-417b-8446-6466eda6c94c\") " pod="openstack/dnsmasq-dns-75c8ddd69c-x4qc2" Dec 10 12:35:27 crc kubenswrapper[4689]: I1210 12:35:27.528101 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5a9faa39-da67-436d-884a-06d93286633e-logs\") pod \"barbican-worker-6b8cb45d75-89dpw\" (UID: \"5a9faa39-da67-436d-884a-06d93286633e\") " pod="openstack/barbican-worker-6b8cb45d75-89dpw" Dec 10 12:35:27 crc kubenswrapper[4689]: I1210 12:35:27.532199 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/8fe1b3ff-4655-40d9-94cf-9d99dd1066db-ovndb-tls-certs\") pod \"neutron-55656d7776-js9xr\" (UID: \"8fe1b3ff-4655-40d9-94cf-9d99dd1066db\") " pod="openstack/neutron-55656d7776-js9xr" Dec 10 12:35:27 crc kubenswrapper[4689]: I1210 12:35:27.533182 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/8fe1b3ff-4655-40d9-94cf-9d99dd1066db-config\") pod \"neutron-55656d7776-js9xr\" (UID: \"8fe1b3ff-4655-40d9-94cf-9d99dd1066db\") " pod="openstack/neutron-55656d7776-js9xr" Dec 10 12:35:27 crc kubenswrapper[4689]: I1210 12:35:27.533618 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5a9faa39-da67-436d-884a-06d93286633e-config-data-custom\") pod \"barbican-worker-6b8cb45d75-89dpw\" (UID: \"5a9faa39-da67-436d-884a-06d93286633e\") " pod="openstack/barbican-worker-6b8cb45d75-89dpw" Dec 10 12:35:27 crc kubenswrapper[4689]: I1210 12:35:27.534114 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5a9faa39-da67-436d-884a-06d93286633e-combined-ca-bundle\") pod \"barbican-worker-6b8cb45d75-89dpw\" (UID: \"5a9faa39-da67-436d-884a-06d93286633e\") " pod="openstack/barbican-worker-6b8cb45d75-89dpw" Dec 10 12:35:27 crc kubenswrapper[4689]: I1210 12:35:27.551262 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8fe1b3ff-4655-40d9-94cf-9d99dd1066db-combined-ca-bundle\") pod \"neutron-55656d7776-js9xr\" (UID: \"8fe1b3ff-4655-40d9-94cf-9d99dd1066db\") " pod="openstack/neutron-55656d7776-js9xr" 
Dec 10 12:35:27 crc kubenswrapper[4689]: I1210 12:35:27.552047 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/8fe1b3ff-4655-40d9-94cf-9d99dd1066db-httpd-config\") pod \"neutron-55656d7776-js9xr\" (UID: \"8fe1b3ff-4655-40d9-94cf-9d99dd1066db\") " pod="openstack/neutron-55656d7776-js9xr" Dec 10 12:35:27 crc kubenswrapper[4689]: I1210 12:35:27.554330 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cg8lv\" (UniqueName: \"kubernetes.io/projected/8fe1b3ff-4655-40d9-94cf-9d99dd1066db-kube-api-access-cg8lv\") pod \"neutron-55656d7776-js9xr\" (UID: \"8fe1b3ff-4655-40d9-94cf-9d99dd1066db\") " pod="openstack/neutron-55656d7776-js9xr" Dec 10 12:35:27 crc kubenswrapper[4689]: I1210 12:35:27.557145 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5a9faa39-da67-436d-884a-06d93286633e-config-data\") pod \"barbican-worker-6b8cb45d75-89dpw\" (UID: \"5a9faa39-da67-436d-884a-06d93286633e\") " pod="openstack/barbican-worker-6b8cb45d75-89dpw" Dec 10 12:35:27 crc kubenswrapper[4689]: I1210 12:35:27.557534 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vxx8q\" (UniqueName: \"kubernetes.io/projected/5a9faa39-da67-436d-884a-06d93286633e-kube-api-access-vxx8q\") pod \"barbican-worker-6b8cb45d75-89dpw\" (UID: \"5a9faa39-da67-436d-884a-06d93286633e\") " pod="openstack/barbican-worker-6b8cb45d75-89dpw" Dec 10 12:35:27 crc kubenswrapper[4689]: I1210 12:35:27.632806 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/26140423-1fa8-498b-b1da-487de5d0635f-logs\") pod \"barbican-api-7b98b8dd66-xfv7n\" (UID: \"26140423-1fa8-498b-b1da-487de5d0635f\") " pod="openstack/barbican-api-7b98b8dd66-xfv7n" Dec 10 12:35:27 crc kubenswrapper[4689]: I1210 12:35:27.633207 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1d2effe7-e7d4-417b-8446-6466eda6c94c-config\") pod \"dnsmasq-dns-75c8ddd69c-x4qc2\" (UID: \"1d2effe7-e7d4-417b-8446-6466eda6c94c\") " pod="openstack/dnsmasq-dns-75c8ddd69c-x4qc2" Dec 10 12:35:27 crc kubenswrapper[4689]: I1210 12:35:27.633501 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/26140423-1fa8-498b-b1da-487de5d0635f-config-data\") pod \"barbican-api-7b98b8dd66-xfv7n\" (UID: \"26140423-1fa8-498b-b1da-487de5d0635f\") " pod="openstack/barbican-api-7b98b8dd66-xfv7n" Dec 10 12:35:27 crc kubenswrapper[4689]: I1210 12:35:27.633635 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6mqgq\" (UniqueName: \"kubernetes.io/projected/1d2effe7-e7d4-417b-8446-6466eda6c94c-kube-api-access-6mqgq\") pod \"dnsmasq-dns-75c8ddd69c-x4qc2\" (UID: \"1d2effe7-e7d4-417b-8446-6466eda6c94c\") " pod="openstack/dnsmasq-dns-75c8ddd69c-x4qc2" Dec 10 12:35:27 crc kubenswrapper[4689]: I1210 12:35:27.633734 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0ecdef57-40ee-46b4-a739-3f8fd2354018-config-data\") pod \"barbican-keystone-listener-7869fbbf6d-5mmw9\" (UID: \"0ecdef57-40ee-46b4-a739-3f8fd2354018\") " pod="openstack/barbican-keystone-listener-7869fbbf6d-5mmw9" Dec 10 12:35:27 crc kubenswrapper[4689]: I1210 12:35:27.633843 4689 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0ecdef57-40ee-46b4-a739-3f8fd2354018-logs\") pod \"barbican-keystone-listener-7869fbbf6d-5mmw9\" (UID: \"0ecdef57-40ee-46b4-a739-3f8fd2354018\") " pod="openstack/barbican-keystone-listener-7869fbbf6d-5mmw9" Dec 10 12:35:27 crc kubenswrapper[4689]: I1210 12:35:27.633954 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0ecdef57-40ee-46b4-a739-3f8fd2354018-config-data-custom\") pod \"barbican-keystone-listener-7869fbbf6d-5mmw9\" (UID: \"0ecdef57-40ee-46b4-a739-3f8fd2354018\") " pod="openstack/barbican-keystone-listener-7869fbbf6d-5mmw9" Dec 10 12:35:27 crc kubenswrapper[4689]: I1210 12:35:27.634412 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0ecdef57-40ee-46b4-a739-3f8fd2354018-combined-ca-bundle\") pod \"barbican-keystone-listener-7869fbbf6d-5mmw9\" (UID: \"0ecdef57-40ee-46b4-a739-3f8fd2354018\") " pod="openstack/barbican-keystone-listener-7869fbbf6d-5mmw9" Dec 10 12:35:27 crc kubenswrapper[4689]: I1210 12:35:27.634511 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bqk94\" (UniqueName: \"kubernetes.io/projected/26140423-1fa8-498b-b1da-487de5d0635f-kube-api-access-bqk94\") pod \"barbican-api-7b98b8dd66-xfv7n\" (UID: \"26140423-1fa8-498b-b1da-487de5d0635f\") " pod="openstack/barbican-api-7b98b8dd66-xfv7n" Dec 10 12:35:27 crc kubenswrapper[4689]: I1210 12:35:27.634610 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1d2effe7-e7d4-417b-8446-6466eda6c94c-ovsdbserver-sb\") pod \"dnsmasq-dns-75c8ddd69c-x4qc2\" (UID: \"1d2effe7-e7d4-417b-8446-6466eda6c94c\") " pod="openstack/dnsmasq-dns-75c8ddd69c-x4qc2" Dec 10 12:35:27 crc kubenswrapper[4689]: I1210 12:35:27.634791 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1d2effe7-e7d4-417b-8446-6466eda6c94c-dns-swift-storage-0\") pod \"dnsmasq-dns-75c8ddd69c-x4qc2\" (UID: \"1d2effe7-e7d4-417b-8446-6466eda6c94c\") " pod="openstack/dnsmasq-dns-75c8ddd69c-x4qc2" Dec 10 12:35:27 crc kubenswrapper[4689]: I1210 12:35:27.634904 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/26140423-1fa8-498b-b1da-487de5d0635f-logs\") pod \"barbican-api-7b98b8dd66-xfv7n\" (UID: \"26140423-1fa8-498b-b1da-487de5d0635f\") " pod="openstack/barbican-api-7b98b8dd66-xfv7n" Dec 10 12:35:27 crc kubenswrapper[4689]: I1210 12:35:27.634905 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/26140423-1fa8-498b-b1da-487de5d0635f-combined-ca-bundle\") pod \"barbican-api-7b98b8dd66-xfv7n\" (UID: \"26140423-1fa8-498b-b1da-487de5d0635f\") " pod="openstack/barbican-api-7b98b8dd66-xfv7n" Dec 10 12:35:27 crc kubenswrapper[4689]: I1210 12:35:27.634025 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1d2effe7-e7d4-417b-8446-6466eda6c94c-config\") pod \"dnsmasq-dns-75c8ddd69c-x4qc2\" (UID: \"1d2effe7-e7d4-417b-8446-6466eda6c94c\") " pod="openstack/dnsmasq-dns-75c8ddd69c-x4qc2" Dec 10 12:35:27 crc kubenswrapper[4689]: I1210 
12:35:27.635201 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1d2effe7-e7d4-417b-8446-6466eda6c94c-dns-svc\") pod \"dnsmasq-dns-75c8ddd69c-x4qc2\" (UID: \"1d2effe7-e7d4-417b-8446-6466eda6c94c\") " pod="openstack/dnsmasq-dns-75c8ddd69c-x4qc2" Dec 10 12:35:27 crc kubenswrapper[4689]: I1210 12:35:27.635313 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/26140423-1fa8-498b-b1da-487de5d0635f-config-data-custom\") pod \"barbican-api-7b98b8dd66-xfv7n\" (UID: \"26140423-1fa8-498b-b1da-487de5d0635f\") " pod="openstack/barbican-api-7b98b8dd66-xfv7n" Dec 10 12:35:27 crc kubenswrapper[4689]: I1210 12:35:27.635425 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sgf7m\" (UniqueName: \"kubernetes.io/projected/0ecdef57-40ee-46b4-a739-3f8fd2354018-kube-api-access-sgf7m\") pod \"barbican-keystone-listener-7869fbbf6d-5mmw9\" (UID: \"0ecdef57-40ee-46b4-a739-3f8fd2354018\") " pod="openstack/barbican-keystone-listener-7869fbbf6d-5mmw9" Dec 10 12:35:27 crc kubenswrapper[4689]: I1210 12:35:27.635512 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0ecdef57-40ee-46b4-a739-3f8fd2354018-logs\") pod \"barbican-keystone-listener-7869fbbf6d-5mmw9\" (UID: \"0ecdef57-40ee-46b4-a739-3f8fd2354018\") " pod="openstack/barbican-keystone-listener-7869fbbf6d-5mmw9" Dec 10 12:35:27 crc kubenswrapper[4689]: I1210 12:35:27.635519 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1d2effe7-e7d4-417b-8446-6466eda6c94c-ovsdbserver-nb\") pod \"dnsmasq-dns-75c8ddd69c-x4qc2\" (UID: \"1d2effe7-e7d4-417b-8446-6466eda6c94c\") " pod="openstack/dnsmasq-dns-75c8ddd69c-x4qc2" Dec 10 12:35:27 crc kubenswrapper[4689]: I1210 12:35:27.636572 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1d2effe7-e7d4-417b-8446-6466eda6c94c-dns-svc\") pod \"dnsmasq-dns-75c8ddd69c-x4qc2\" (UID: \"1d2effe7-e7d4-417b-8446-6466eda6c94c\") " pod="openstack/dnsmasq-dns-75c8ddd69c-x4qc2" Dec 10 12:35:27 crc kubenswrapper[4689]: I1210 12:35:27.637289 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1d2effe7-e7d4-417b-8446-6466eda6c94c-ovsdbserver-sb\") pod \"dnsmasq-dns-75c8ddd69c-x4qc2\" (UID: \"1d2effe7-e7d4-417b-8446-6466eda6c94c\") " pod="openstack/dnsmasq-dns-75c8ddd69c-x4qc2" Dec 10 12:35:27 crc kubenswrapper[4689]: I1210 12:35:27.637679 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1d2effe7-e7d4-417b-8446-6466eda6c94c-dns-swift-storage-0\") pod \"dnsmasq-dns-75c8ddd69c-x4qc2\" (UID: \"1d2effe7-e7d4-417b-8446-6466eda6c94c\") " pod="openstack/dnsmasq-dns-75c8ddd69c-x4qc2" Dec 10 12:35:27 crc kubenswrapper[4689]: I1210 12:35:27.639271 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1d2effe7-e7d4-417b-8446-6466eda6c94c-ovsdbserver-nb\") pod \"dnsmasq-dns-75c8ddd69c-x4qc2\" (UID: \"1d2effe7-e7d4-417b-8446-6466eda6c94c\") " pod="openstack/dnsmasq-dns-75c8ddd69c-x4qc2" Dec 10 12:35:27 crc kubenswrapper[4689]: I1210 12:35:27.639886 4689 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/26140423-1fa8-498b-b1da-487de5d0635f-combined-ca-bundle\") pod \"barbican-api-7b98b8dd66-xfv7n\" (UID: \"26140423-1fa8-498b-b1da-487de5d0635f\") " pod="openstack/barbican-api-7b98b8dd66-xfv7n" Dec 10 12:35:27 crc kubenswrapper[4689]: I1210 12:35:27.640073 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0ecdef57-40ee-46b4-a739-3f8fd2354018-config-data\") pod \"barbican-keystone-listener-7869fbbf6d-5mmw9\" (UID: \"0ecdef57-40ee-46b4-a739-3f8fd2354018\") " pod="openstack/barbican-keystone-listener-7869fbbf6d-5mmw9" Dec 10 12:35:27 crc kubenswrapper[4689]: I1210 12:35:27.640592 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0ecdef57-40ee-46b4-a739-3f8fd2354018-combined-ca-bundle\") pod \"barbican-keystone-listener-7869fbbf6d-5mmw9\" (UID: \"0ecdef57-40ee-46b4-a739-3f8fd2354018\") " pod="openstack/barbican-keystone-listener-7869fbbf6d-5mmw9" Dec 10 12:35:27 crc kubenswrapper[4689]: I1210 12:35:27.640883 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0ecdef57-40ee-46b4-a739-3f8fd2354018-config-data-custom\") pod \"barbican-keystone-listener-7869fbbf6d-5mmw9\" (UID: \"0ecdef57-40ee-46b4-a739-3f8fd2354018\") " pod="openstack/barbican-keystone-listener-7869fbbf6d-5mmw9" Dec 10 12:35:27 crc kubenswrapper[4689]: I1210 12:35:27.651565 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/26140423-1fa8-498b-b1da-487de5d0635f-config-data-custom\") pod \"barbican-api-7b98b8dd66-xfv7n\" (UID: \"26140423-1fa8-498b-b1da-487de5d0635f\") " pod="openstack/barbican-api-7b98b8dd66-xfv7n" Dec 10 12:35:27 crc kubenswrapper[4689]: I1210 12:35:27.653433 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/26140423-1fa8-498b-b1da-487de5d0635f-config-data\") pod \"barbican-api-7b98b8dd66-xfv7n\" (UID: \"26140423-1fa8-498b-b1da-487de5d0635f\") " pod="openstack/barbican-api-7b98b8dd66-xfv7n" Dec 10 12:35:27 crc kubenswrapper[4689]: I1210 12:35:27.653466 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sgf7m\" (UniqueName: \"kubernetes.io/projected/0ecdef57-40ee-46b4-a739-3f8fd2354018-kube-api-access-sgf7m\") pod \"barbican-keystone-listener-7869fbbf6d-5mmw9\" (UID: \"0ecdef57-40ee-46b4-a739-3f8fd2354018\") " pod="openstack/barbican-keystone-listener-7869fbbf6d-5mmw9" Dec 10 12:35:27 crc kubenswrapper[4689]: I1210 12:35:27.657188 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bqk94\" (UniqueName: \"kubernetes.io/projected/26140423-1fa8-498b-b1da-487de5d0635f-kube-api-access-bqk94\") pod \"barbican-api-7b98b8dd66-xfv7n\" (UID: \"26140423-1fa8-498b-b1da-487de5d0635f\") " pod="openstack/barbican-api-7b98b8dd66-xfv7n" Dec 10 12:35:27 crc kubenswrapper[4689]: I1210 12:35:27.662905 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6mqgq\" (UniqueName: \"kubernetes.io/projected/1d2effe7-e7d4-417b-8446-6466eda6c94c-kube-api-access-6mqgq\") pod \"dnsmasq-dns-75c8ddd69c-x4qc2\" (UID: \"1d2effe7-e7d4-417b-8446-6466eda6c94c\") " pod="openstack/dnsmasq-dns-75c8ddd69c-x4qc2" Dec 10 12:35:27 crc kubenswrapper[4689]: I1210 
12:35:27.664462 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-6b8cb45d75-89dpw" Dec 10 12:35:27 crc kubenswrapper[4689]: I1210 12:35:27.682676 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-55656d7776-js9xr" Dec 10 12:35:27 crc kubenswrapper[4689]: I1210 12:35:27.701367 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-7869fbbf6d-5mmw9" Dec 10 12:35:27 crc kubenswrapper[4689]: I1210 12:35:27.808626 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-75c8ddd69c-x4qc2" Dec 10 12:35:27 crc kubenswrapper[4689]: I1210 12:35:27.815562 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-7b98b8dd66-xfv7n" Dec 10 12:35:27 crc kubenswrapper[4689]: I1210 12:35:27.818402 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-84b966f6c9-vbprm" Dec 10 12:35:27 crc kubenswrapper[4689]: I1210 12:35:27.818541 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-b85ffb7d4-zq54p" event={"ID":"6708e3ec-a080-4f8d-a9c5-2821ea678717","Type":"ContainerStarted","Data":"97712029765faab9c00bd930bacba2572e84e43c8486703a0558c9d560320c8f"} Dec 10 12:35:27 crc kubenswrapper[4689]: I1210 12:35:27.818587 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Dec 10 12:35:27 crc kubenswrapper[4689]: I1210 12:35:27.818601 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-b85ffb7d4-zq54p" event={"ID":"6708e3ec-a080-4f8d-a9c5-2821ea678717","Type":"ContainerStarted","Data":"cfc75ac33bc81b6e8f2dadc28f9597becf4192c3731813cc6b8d2c01319b893e"} Dec 10 12:35:27 crc kubenswrapper[4689]: I1210 12:35:27.819551 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Dec 10 12:35:27 crc kubenswrapper[4689]: I1210 12:35:27.819565 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-b85ffb7d4-zq54p" Dec 10 12:35:27 crc kubenswrapper[4689]: I1210 12:35:27.819575 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-b85ffb7d4-zq54p" Dec 10 12:35:27 crc kubenswrapper[4689]: I1210 12:35:27.838124 4689 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-84b966f6c9-vbprm" Dec 10 12:35:27 crc kubenswrapper[4689]: I1210 12:35:27.862698 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-b85ffb7d4-zq54p" podStartSLOduration=2.862678047 podStartE2EDuration="2.862678047s" podCreationTimestamp="2025-12-10 12:35:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 12:35:27.856376671 +0000 UTC m=+1195.644457809" watchObservedRunningTime="2025-12-10 12:35:27.862678047 +0000 UTC m=+1195.650759185" Dec 10 12:35:27 crc kubenswrapper[4689]: I1210 12:35:27.940547 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbp2z\" (UniqueName: \"kubernetes.io/projected/476c7a2b-2994-4383-b811-5c1bd0e7999e-kube-api-access-dbp2z\") pod \"476c7a2b-2994-4383-b811-5c1bd0e7999e\" (UID: \"476c7a2b-2994-4383-b811-5c1bd0e7999e\") " Dec 10 12:35:27 crc kubenswrapper[4689]: I1210 12:35:27.940602 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/476c7a2b-2994-4383-b811-5c1bd0e7999e-config\") pod \"476c7a2b-2994-4383-b811-5c1bd0e7999e\" (UID: \"476c7a2b-2994-4383-b811-5c1bd0e7999e\") " Dec 10 12:35:27 crc kubenswrapper[4689]: I1210 12:35:27.940625 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/476c7a2b-2994-4383-b811-5c1bd0e7999e-ovsdbserver-sb\") pod \"476c7a2b-2994-4383-b811-5c1bd0e7999e\" (UID: \"476c7a2b-2994-4383-b811-5c1bd0e7999e\") " Dec 10 12:35:27 crc kubenswrapper[4689]: I1210 12:35:27.940646 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/476c7a2b-2994-4383-b811-5c1bd0e7999e-ovsdbserver-nb\") pod \"476c7a2b-2994-4383-b811-5c1bd0e7999e\" (UID: \"476c7a2b-2994-4383-b811-5c1bd0e7999e\") " Dec 10 12:35:27 crc kubenswrapper[4689]: I1210 12:35:27.940707 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/476c7a2b-2994-4383-b811-5c1bd0e7999e-dns-svc\") pod \"476c7a2b-2994-4383-b811-5c1bd0e7999e\" (UID: \"476c7a2b-2994-4383-b811-5c1bd0e7999e\") " Dec 10 12:35:27 crc kubenswrapper[4689]: I1210 12:35:27.940745 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/476c7a2b-2994-4383-b811-5c1bd0e7999e-dns-swift-storage-0\") pod \"476c7a2b-2994-4383-b811-5c1bd0e7999e\" (UID: \"476c7a2b-2994-4383-b811-5c1bd0e7999e\") " Dec 10 12:35:27 crc kubenswrapper[4689]: I1210 12:35:27.943629 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/476c7a2b-2994-4383-b811-5c1bd0e7999e-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "476c7a2b-2994-4383-b811-5c1bd0e7999e" (UID: "476c7a2b-2994-4383-b811-5c1bd0e7999e"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 12:35:27 crc kubenswrapper[4689]: I1210 12:35:27.943961 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/476c7a2b-2994-4383-b811-5c1bd0e7999e-config" (OuterVolumeSpecName: "config") pod "476c7a2b-2994-4383-b811-5c1bd0e7999e" (UID: "476c7a2b-2994-4383-b811-5c1bd0e7999e"). 
InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 12:35:27 crc kubenswrapper[4689]: I1210 12:35:27.945091 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/476c7a2b-2994-4383-b811-5c1bd0e7999e-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "476c7a2b-2994-4383-b811-5c1bd0e7999e" (UID: "476c7a2b-2994-4383-b811-5c1bd0e7999e"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 12:35:27 crc kubenswrapper[4689]: I1210 12:35:27.947256 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/476c7a2b-2994-4383-b811-5c1bd0e7999e-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "476c7a2b-2994-4383-b811-5c1bd0e7999e" (UID: "476c7a2b-2994-4383-b811-5c1bd0e7999e"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 12:35:27 crc kubenswrapper[4689]: I1210 12:35:27.947331 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/476c7a2b-2994-4383-b811-5c1bd0e7999e-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "476c7a2b-2994-4383-b811-5c1bd0e7999e" (UID: "476c7a2b-2994-4383-b811-5c1bd0e7999e"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 12:35:27 crc kubenswrapper[4689]: I1210 12:35:27.947459 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/476c7a2b-2994-4383-b811-5c1bd0e7999e-kube-api-access-dbp2z" (OuterVolumeSpecName: "kube-api-access-dbp2z") pod "476c7a2b-2994-4383-b811-5c1bd0e7999e" (UID: "476c7a2b-2994-4383-b811-5c1bd0e7999e"). InnerVolumeSpecName "kube-api-access-dbp2z". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 12:35:28 crc kubenswrapper[4689]: I1210 12:35:28.042557 4689 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/476c7a2b-2994-4383-b811-5c1bd0e7999e-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 10 12:35:28 crc kubenswrapper[4689]: I1210 12:35:28.042582 4689 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/476c7a2b-2994-4383-b811-5c1bd0e7999e-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Dec 10 12:35:28 crc kubenswrapper[4689]: I1210 12:35:28.042593 4689 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbp2z\" (UniqueName: \"kubernetes.io/projected/476c7a2b-2994-4383-b811-5c1bd0e7999e-kube-api-access-dbp2z\") on node \"crc\" DevicePath \"\"" Dec 10 12:35:28 crc kubenswrapper[4689]: I1210 12:35:28.042602 4689 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/476c7a2b-2994-4383-b811-5c1bd0e7999e-config\") on node \"crc\" DevicePath \"\"" Dec 10 12:35:28 crc kubenswrapper[4689]: I1210 12:35:28.042610 4689 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/476c7a2b-2994-4383-b811-5c1bd0e7999e-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 10 12:35:28 crc kubenswrapper[4689]: I1210 12:35:28.042618 4689 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/476c7a2b-2994-4383-b811-5c1bd0e7999e-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 10 12:35:28 crc kubenswrapper[4689]: I1210 12:35:28.180984 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-6b8cb45d75-89dpw"] Dec 10 12:35:28 crc kubenswrapper[4689]: I1210 12:35:28.301230 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-7869fbbf6d-5mmw9"] Dec 10 12:35:28 crc kubenswrapper[4689]: I1210 12:35:28.437999 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-7b98b8dd66-xfv7n"] Dec 10 12:35:28 crc kubenswrapper[4689]: W1210 12:35:28.448288 4689 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod26140423_1fa8_498b_b1da_487de5d0635f.slice/crio-591de0c31817a291c74301bbdb13c5f35e5e789484dc752a97464434ad5080e0 WatchSource:0}: Error finding container 591de0c31817a291c74301bbdb13c5f35e5e789484dc752a97464434ad5080e0: Status 404 returned error can't find the container with id 591de0c31817a291c74301bbdb13c5f35e5e789484dc752a97464434ad5080e0 Dec 10 12:35:28 crc kubenswrapper[4689]: I1210 12:35:28.548176 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-75c8ddd69c-x4qc2"] Dec 10 12:35:28 crc kubenswrapper[4689]: W1210 12:35:28.553882 4689 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1d2effe7_e7d4_417b_8446_6466eda6c94c.slice/crio-4cc211f596daa380c75ba05dabe2184b6824a842e7899ec0a75141f9f88de2e4 WatchSource:0}: Error finding container 4cc211f596daa380c75ba05dabe2184b6824a842e7899ec0a75141f9f88de2e4: Status 404 returned error can't find the container with id 4cc211f596daa380c75ba05dabe2184b6824a842e7899ec0a75141f9f88de2e4 Dec 10 12:35:28 crc kubenswrapper[4689]: I1210 12:35:28.678215 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/neutron-55656d7776-js9xr"] Dec 10 12:35:28 crc kubenswrapper[4689]: W1210 12:35:28.684433 4689 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8fe1b3ff_4655_40d9_94cf_9d99dd1066db.slice/crio-80756df373d3fef3115f7bdd5be484651668d3837702f8d8782067d49fe78d86 WatchSource:0}: Error finding container 80756df373d3fef3115f7bdd5be484651668d3837702f8d8782067d49fe78d86: Status 404 returned error can't find the container with id 80756df373d3fef3115f7bdd5be484651668d3837702f8d8782067d49fe78d86 Dec 10 12:35:28 crc kubenswrapper[4689]: I1210 12:35:28.825554 4689 generic.go:334] "Generic (PLEG): container finished" podID="1d2effe7-e7d4-417b-8446-6466eda6c94c" containerID="eaecad246f102656b9ed24adfcd9348d2a81524c29339a6784e65b905fb4801e" exitCode=0 Dec 10 12:35:28 crc kubenswrapper[4689]: I1210 12:35:28.825616 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-75c8ddd69c-x4qc2" event={"ID":"1d2effe7-e7d4-417b-8446-6466eda6c94c","Type":"ContainerDied","Data":"eaecad246f102656b9ed24adfcd9348d2a81524c29339a6784e65b905fb4801e"} Dec 10 12:35:28 crc kubenswrapper[4689]: I1210 12:35:28.825646 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-75c8ddd69c-x4qc2" event={"ID":"1d2effe7-e7d4-417b-8446-6466eda6c94c","Type":"ContainerStarted","Data":"4cc211f596daa380c75ba05dabe2184b6824a842e7899ec0a75141f9f88de2e4"} Dec 10 12:35:28 crc kubenswrapper[4689]: I1210 12:35:28.828246 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-55656d7776-js9xr" event={"ID":"8fe1b3ff-4655-40d9-94cf-9d99dd1066db","Type":"ContainerStarted","Data":"80756df373d3fef3115f7bdd5be484651668d3837702f8d8782067d49fe78d86"} Dec 10 12:35:28 crc kubenswrapper[4689]: I1210 12:35:28.834505 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7b98b8dd66-xfv7n" event={"ID":"26140423-1fa8-498b-b1da-487de5d0635f","Type":"ContainerStarted","Data":"92bfe216c34d0444743b1a4d4ac2c63fc1c79aee442fc3fb497aac04e0c193f6"} Dec 10 12:35:28 crc kubenswrapper[4689]: I1210 12:35:28.834557 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7b98b8dd66-xfv7n" event={"ID":"26140423-1fa8-498b-b1da-487de5d0635f","Type":"ContainerStarted","Data":"591de0c31817a291c74301bbdb13c5f35e5e789484dc752a97464434ad5080e0"} Dec 10 12:35:28 crc kubenswrapper[4689]: I1210 12:35:28.836918 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-6b8cb45d75-89dpw" event={"ID":"5a9faa39-da67-436d-884a-06d93286633e","Type":"ContainerStarted","Data":"44ec69cf3a0d618662bdd3fb7fdd260d8eae068675b2701c8a51097b44f74c7d"} Dec 10 12:35:28 crc kubenswrapper[4689]: I1210 12:35:28.846925 4689 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-84b966f6c9-vbprm" Dec 10 12:35:28 crc kubenswrapper[4689]: I1210 12:35:28.847276 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-7869fbbf6d-5mmw9" event={"ID":"0ecdef57-40ee-46b4-a739-3f8fd2354018","Type":"ContainerStarted","Data":"a3e9e1a058ba0458648c0768a1a1ed489d526d177c7b9ca1369f5fa9048e9533"} Dec 10 12:35:28 crc kubenswrapper[4689]: I1210 12:35:28.899045 4689 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Dec 10 12:35:28 crc kubenswrapper[4689]: I1210 12:35:28.899620 4689 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Dec 10 12:35:28 crc kubenswrapper[4689]: I1210 12:35:28.955101 4689 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-84b966f6c9-vbprm"] Dec 10 12:35:28 crc kubenswrapper[4689]: I1210 12:35:28.955640 4689 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Dec 10 12:35:28 crc kubenswrapper[4689]: I1210 12:35:28.962517 4689 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-84b966f6c9-vbprm"] Dec 10 12:35:28 crc kubenswrapper[4689]: I1210 12:35:28.980571 4689 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Dec 10 12:35:29 crc kubenswrapper[4689]: I1210 12:35:29.571146 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-67dc569cfc-xmxfj"] Dec 10 12:35:29 crc kubenswrapper[4689]: I1210 12:35:29.573340 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-67dc569cfc-xmxfj" Dec 10 12:35:29 crc kubenswrapper[4689]: I1210 12:35:29.580392 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-internal-svc" Dec 10 12:35:29 crc kubenswrapper[4689]: I1210 12:35:29.586198 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-public-svc" Dec 10 12:35:29 crc kubenswrapper[4689]: I1210 12:35:29.597483 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-67dc569cfc-xmxfj"] Dec 10 12:35:29 crc kubenswrapper[4689]: I1210 12:35:29.703111 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8fwf7\" (UniqueName: \"kubernetes.io/projected/84537f57-e77b-4147-99e4-d22fa43780cb-kube-api-access-8fwf7\") pod \"neutron-67dc569cfc-xmxfj\" (UID: \"84537f57-e77b-4147-99e4-d22fa43780cb\") " pod="openstack/neutron-67dc569cfc-xmxfj" Dec 10 12:35:29 crc kubenswrapper[4689]: I1210 12:35:29.703159 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/84537f57-e77b-4147-99e4-d22fa43780cb-combined-ca-bundle\") pod \"neutron-67dc569cfc-xmxfj\" (UID: \"84537f57-e77b-4147-99e4-d22fa43780cb\") " pod="openstack/neutron-67dc569cfc-xmxfj" Dec 10 12:35:29 crc kubenswrapper[4689]: I1210 12:35:29.703178 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/84537f57-e77b-4147-99e4-d22fa43780cb-config\") pod \"neutron-67dc569cfc-xmxfj\" (UID: \"84537f57-e77b-4147-99e4-d22fa43780cb\") " pod="openstack/neutron-67dc569cfc-xmxfj" Dec 10 12:35:29 crc kubenswrapper[4689]: I1210 
12:35:29.703206 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/84537f57-e77b-4147-99e4-d22fa43780cb-httpd-config\") pod \"neutron-67dc569cfc-xmxfj\" (UID: \"84537f57-e77b-4147-99e4-d22fa43780cb\") " pod="openstack/neutron-67dc569cfc-xmxfj" Dec 10 12:35:29 crc kubenswrapper[4689]: I1210 12:35:29.703258 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/84537f57-e77b-4147-99e4-d22fa43780cb-internal-tls-certs\") pod \"neutron-67dc569cfc-xmxfj\" (UID: \"84537f57-e77b-4147-99e4-d22fa43780cb\") " pod="openstack/neutron-67dc569cfc-xmxfj" Dec 10 12:35:29 crc kubenswrapper[4689]: I1210 12:35:29.703286 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/84537f57-e77b-4147-99e4-d22fa43780cb-public-tls-certs\") pod \"neutron-67dc569cfc-xmxfj\" (UID: \"84537f57-e77b-4147-99e4-d22fa43780cb\") " pod="openstack/neutron-67dc569cfc-xmxfj" Dec 10 12:35:29 crc kubenswrapper[4689]: I1210 12:35:29.703328 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/84537f57-e77b-4147-99e4-d22fa43780cb-ovndb-tls-certs\") pod \"neutron-67dc569cfc-xmxfj\" (UID: \"84537f57-e77b-4147-99e4-d22fa43780cb\") " pod="openstack/neutron-67dc569cfc-xmxfj" Dec 10 12:35:29 crc kubenswrapper[4689]: I1210 12:35:29.805280 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/84537f57-e77b-4147-99e4-d22fa43780cb-internal-tls-certs\") pod \"neutron-67dc569cfc-xmxfj\" (UID: \"84537f57-e77b-4147-99e4-d22fa43780cb\") " pod="openstack/neutron-67dc569cfc-xmxfj" Dec 10 12:35:29 crc kubenswrapper[4689]: I1210 12:35:29.805348 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/84537f57-e77b-4147-99e4-d22fa43780cb-public-tls-certs\") pod \"neutron-67dc569cfc-xmxfj\" (UID: \"84537f57-e77b-4147-99e4-d22fa43780cb\") " pod="openstack/neutron-67dc569cfc-xmxfj" Dec 10 12:35:29 crc kubenswrapper[4689]: I1210 12:35:29.805401 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/84537f57-e77b-4147-99e4-d22fa43780cb-ovndb-tls-certs\") pod \"neutron-67dc569cfc-xmxfj\" (UID: \"84537f57-e77b-4147-99e4-d22fa43780cb\") " pod="openstack/neutron-67dc569cfc-xmxfj" Dec 10 12:35:29 crc kubenswrapper[4689]: I1210 12:35:29.805492 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8fwf7\" (UniqueName: \"kubernetes.io/projected/84537f57-e77b-4147-99e4-d22fa43780cb-kube-api-access-8fwf7\") pod \"neutron-67dc569cfc-xmxfj\" (UID: \"84537f57-e77b-4147-99e4-d22fa43780cb\") " pod="openstack/neutron-67dc569cfc-xmxfj" Dec 10 12:35:29 crc kubenswrapper[4689]: I1210 12:35:29.805520 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/84537f57-e77b-4147-99e4-d22fa43780cb-combined-ca-bundle\") pod \"neutron-67dc569cfc-xmxfj\" (UID: \"84537f57-e77b-4147-99e4-d22fa43780cb\") " pod="openstack/neutron-67dc569cfc-xmxfj" Dec 10 12:35:29 crc kubenswrapper[4689]: I1210 12:35:29.805541 4689 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/84537f57-e77b-4147-99e4-d22fa43780cb-config\") pod \"neutron-67dc569cfc-xmxfj\" (UID: \"84537f57-e77b-4147-99e4-d22fa43780cb\") " pod="openstack/neutron-67dc569cfc-xmxfj" Dec 10 12:35:29 crc kubenswrapper[4689]: I1210 12:35:29.805558 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/84537f57-e77b-4147-99e4-d22fa43780cb-httpd-config\") pod \"neutron-67dc569cfc-xmxfj\" (UID: \"84537f57-e77b-4147-99e4-d22fa43780cb\") " pod="openstack/neutron-67dc569cfc-xmxfj" Dec 10 12:35:29 crc kubenswrapper[4689]: I1210 12:35:29.811924 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/84537f57-e77b-4147-99e4-d22fa43780cb-public-tls-certs\") pod \"neutron-67dc569cfc-xmxfj\" (UID: \"84537f57-e77b-4147-99e4-d22fa43780cb\") " pod="openstack/neutron-67dc569cfc-xmxfj" Dec 10 12:35:29 crc kubenswrapper[4689]: I1210 12:35:29.815627 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/84537f57-e77b-4147-99e4-d22fa43780cb-internal-tls-certs\") pod \"neutron-67dc569cfc-xmxfj\" (UID: \"84537f57-e77b-4147-99e4-d22fa43780cb\") " pod="openstack/neutron-67dc569cfc-xmxfj" Dec 10 12:35:29 crc kubenswrapper[4689]: I1210 12:35:29.820690 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/84537f57-e77b-4147-99e4-d22fa43780cb-config\") pod \"neutron-67dc569cfc-xmxfj\" (UID: \"84537f57-e77b-4147-99e4-d22fa43780cb\") " pod="openstack/neutron-67dc569cfc-xmxfj" Dec 10 12:35:29 crc kubenswrapper[4689]: I1210 12:35:29.820728 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/84537f57-e77b-4147-99e4-d22fa43780cb-combined-ca-bundle\") pod \"neutron-67dc569cfc-xmxfj\" (UID: \"84537f57-e77b-4147-99e4-d22fa43780cb\") " pod="openstack/neutron-67dc569cfc-xmxfj" Dec 10 12:35:29 crc kubenswrapper[4689]: I1210 12:35:29.823475 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/84537f57-e77b-4147-99e4-d22fa43780cb-httpd-config\") pod \"neutron-67dc569cfc-xmxfj\" (UID: \"84537f57-e77b-4147-99e4-d22fa43780cb\") " pod="openstack/neutron-67dc569cfc-xmxfj" Dec 10 12:35:29 crc kubenswrapper[4689]: I1210 12:35:29.837901 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/84537f57-e77b-4147-99e4-d22fa43780cb-ovndb-tls-certs\") pod \"neutron-67dc569cfc-xmxfj\" (UID: \"84537f57-e77b-4147-99e4-d22fa43780cb\") " pod="openstack/neutron-67dc569cfc-xmxfj" Dec 10 12:35:29 crc kubenswrapper[4689]: I1210 12:35:29.849849 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8fwf7\" (UniqueName: \"kubernetes.io/projected/84537f57-e77b-4147-99e4-d22fa43780cb-kube-api-access-8fwf7\") pod \"neutron-67dc569cfc-xmxfj\" (UID: \"84537f57-e77b-4147-99e4-d22fa43780cb\") " pod="openstack/neutron-67dc569cfc-xmxfj" Dec 10 12:35:29 crc kubenswrapper[4689]: I1210 12:35:29.861352 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7b98b8dd66-xfv7n" 
event={"ID":"26140423-1fa8-498b-b1da-487de5d0635f","Type":"ContainerStarted","Data":"2b112f11d8b5b0a0943a19cfd5aaec6931880a4b031de63c920685a7efebd132"} Dec 10 12:35:29 crc kubenswrapper[4689]: I1210 12:35:29.863209 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-7b98b8dd66-xfv7n" Dec 10 12:35:29 crc kubenswrapper[4689]: I1210 12:35:29.863282 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-7b98b8dd66-xfv7n" Dec 10 12:35:29 crc kubenswrapper[4689]: I1210 12:35:29.866685 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-75c8ddd69c-x4qc2" event={"ID":"1d2effe7-e7d4-417b-8446-6466eda6c94c","Type":"ContainerStarted","Data":"696d38016f7fdcb0e0495c2becf2de963a862979fa2db40147053961d1bdb384"} Dec 10 12:35:29 crc kubenswrapper[4689]: I1210 12:35:29.866732 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-75c8ddd69c-x4qc2" Dec 10 12:35:29 crc kubenswrapper[4689]: I1210 12:35:29.874649 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-55656d7776-js9xr" event={"ID":"8fe1b3ff-4655-40d9-94cf-9d99dd1066db","Type":"ContainerStarted","Data":"c25e71722696a1c06289352693782f2edb7668385834ffd8c13cc3ba2ee912fb"} Dec 10 12:35:29 crc kubenswrapper[4689]: I1210 12:35:29.874815 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-55656d7776-js9xr" event={"ID":"8fe1b3ff-4655-40d9-94cf-9d99dd1066db","Type":"ContainerStarted","Data":"654f818c5ad0f2486c2a51555841b2054445f4dc23d95c09eaab6bd7d6551d5f"} Dec 10 12:35:29 crc kubenswrapper[4689]: I1210 12:35:29.874875 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-55656d7776-js9xr" Dec 10 12:35:29 crc kubenswrapper[4689]: I1210 12:35:29.875007 4689 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 10 12:35:29 crc kubenswrapper[4689]: I1210 12:35:29.875066 4689 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 10 12:35:29 crc kubenswrapper[4689]: I1210 12:35:29.875667 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Dec 10 12:35:29 crc kubenswrapper[4689]: I1210 12:35:29.875822 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Dec 10 12:35:29 crc kubenswrapper[4689]: I1210 12:35:29.907899 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-7b98b8dd66-xfv7n" podStartSLOduration=2.907878549 podStartE2EDuration="2.907878549s" podCreationTimestamp="2025-12-10 12:35:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 12:35:29.88413708 +0000 UTC m=+1197.672218228" watchObservedRunningTime="2025-12-10 12:35:29.907878549 +0000 UTC m=+1197.695959687" Dec 10 12:35:29 crc kubenswrapper[4689]: I1210 12:35:29.937981 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-75c8ddd69c-x4qc2" podStartSLOduration=2.937948215 podStartE2EDuration="2.937948215s" podCreationTimestamp="2025-12-10 12:35:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 12:35:29.907025478 +0000 UTC m=+1197.695106616" watchObservedRunningTime="2025-12-10 12:35:29.937948215 +0000 
UTC m=+1197.726029353" Dec 10 12:35:29 crc kubenswrapper[4689]: I1210 12:35:29.947089 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-55656d7776-js9xr" podStartSLOduration=2.947075352 podStartE2EDuration="2.947075352s" podCreationTimestamp="2025-12-10 12:35:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 12:35:29.925212649 +0000 UTC m=+1197.713293807" watchObservedRunningTime="2025-12-10 12:35:29.947075352 +0000 UTC m=+1197.735156490" Dec 10 12:35:29 crc kubenswrapper[4689]: I1210 12:35:29.953410 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-67dc569cfc-xmxfj" Dec 10 12:35:30 crc kubenswrapper[4689]: I1210 12:35:30.402952 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Dec 10 12:35:30 crc kubenswrapper[4689]: I1210 12:35:30.412748 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Dec 10 12:35:30 crc kubenswrapper[4689]: I1210 12:35:30.513813 4689 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="476c7a2b-2994-4383-b811-5c1bd0e7999e" path="/var/lib/kubelet/pods/476c7a2b-2994-4383-b811-5c1bd0e7999e/volumes" Dec 10 12:35:30 crc kubenswrapper[4689]: I1210 12:35:30.884215 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-57ffb" event={"ID":"eb62506f-d5a5-44b0-8da3-125128211e10","Type":"ContainerStarted","Data":"c7ca18573d3037aceaba7ed840f7aed25cb41aefc67fd11389af7aa62e606c8b"} Dec 10 12:35:30 crc kubenswrapper[4689]: I1210 12:35:30.906503 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-sync-57ffb" podStartSLOduration=6.402105847 podStartE2EDuration="39.906484939s" podCreationTimestamp="2025-12-10 12:34:51 +0000 UTC" firstStartedPulling="2025-12-10 12:34:55.513882422 +0000 UTC m=+1163.301963550" lastFinishedPulling="2025-12-10 12:35:29.018261504 +0000 UTC m=+1196.806342642" observedRunningTime="2025-12-10 12:35:30.899802953 +0000 UTC m=+1198.687884091" watchObservedRunningTime="2025-12-10 12:35:30.906484939 +0000 UTC m=+1198.694566077" Dec 10 12:35:31 crc kubenswrapper[4689]: I1210 12:35:31.622293 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-67dc569cfc-xmxfj"] Dec 10 12:35:31 crc kubenswrapper[4689]: I1210 12:35:31.907341 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-6b8cb45d75-89dpw" event={"ID":"5a9faa39-da67-436d-884a-06d93286633e","Type":"ContainerStarted","Data":"b001da9b7ceca585d94a8972647783f4c1a40e5eee7bd73dc949b73b8c45f8b1"} Dec 10 12:35:31 crc kubenswrapper[4689]: I1210 12:35:31.920257 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-7869fbbf6d-5mmw9" event={"ID":"0ecdef57-40ee-46b4-a739-3f8fd2354018","Type":"ContainerStarted","Data":"b42f5fffaafe12d0c1587491560281fdc5e5ed2cc035fa0fb34d902659437e49"} Dec 10 12:35:31 crc kubenswrapper[4689]: I1210 12:35:31.923835 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-67dc569cfc-xmxfj" event={"ID":"84537f57-e77b-4147-99e4-d22fa43780cb","Type":"ContainerStarted","Data":"677fdeb95a6721e8b689aa232412a6241bcd5de60166bc616bb084af08c9efa9"} Dec 10 12:35:32 crc kubenswrapper[4689]: I1210 12:35:32.342692 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="ready" pod="openstack/glance-default-internal-api-0" Dec 10 12:35:32 crc kubenswrapper[4689]: I1210 12:35:32.342825 4689 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 10 12:35:32 crc kubenswrapper[4689]: I1210 12:35:32.545905 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Dec 10 12:35:32 crc kubenswrapper[4689]: I1210 12:35:32.968264 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-6b8cb45d75-89dpw" event={"ID":"5a9faa39-da67-436d-884a-06d93286633e","Type":"ContainerStarted","Data":"90b5a53979334e16f406e97cff8be4e2e9beb7a11765e9250db08275dc18e0f2"} Dec 10 12:35:32 crc kubenswrapper[4689]: I1210 12:35:32.983420 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-7869fbbf6d-5mmw9" event={"ID":"0ecdef57-40ee-46b4-a739-3f8fd2354018","Type":"ContainerStarted","Data":"b345025df70341eb865d9008754076f65f7bd762db260e2f2db013941f36e4e3"} Dec 10 12:35:33 crc kubenswrapper[4689]: I1210 12:35:33.010030 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-67dc569cfc-xmxfj" event={"ID":"84537f57-e77b-4147-99e4-d22fa43780cb","Type":"ContainerStarted","Data":"b848f40df1a89bd9f96ee5d3cf701b18ba698cdd571523bcb1ace7b8ee103c3f"} Dec 10 12:35:33 crc kubenswrapper[4689]: I1210 12:35:33.010104 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-67dc569cfc-xmxfj" event={"ID":"84537f57-e77b-4147-99e4-d22fa43780cb","Type":"ContainerStarted","Data":"bb6e12051f45dcb2a3b0d27a7606ee7ab9acb517b15570a2ed9b1d958de35bd5"} Dec 10 12:35:33 crc kubenswrapper[4689]: I1210 12:35:33.026704 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-worker-6b8cb45d75-89dpw" podStartSLOduration=3.237479832 podStartE2EDuration="6.026686523s" podCreationTimestamp="2025-12-10 12:35:27 +0000 UTC" firstStartedPulling="2025-12-10 12:35:28.185152492 +0000 UTC m=+1195.973233630" lastFinishedPulling="2025-12-10 12:35:30.974359183 +0000 UTC m=+1198.762440321" observedRunningTime="2025-12-10 12:35:33.000441303 +0000 UTC m=+1200.788522451" watchObservedRunningTime="2025-12-10 12:35:33.026686523 +0000 UTC m=+1200.814767661" Dec 10 12:35:33 crc kubenswrapper[4689]: I1210 12:35:33.030453 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-keystone-listener-7869fbbf6d-5mmw9" podStartSLOduration=3.377087668 podStartE2EDuration="6.030442247s" podCreationTimestamp="2025-12-10 12:35:27 +0000 UTC" firstStartedPulling="2025-12-10 12:35:28.328789188 +0000 UTC m=+1196.116870326" lastFinishedPulling="2025-12-10 12:35:30.982143767 +0000 UTC m=+1198.770224905" observedRunningTime="2025-12-10 12:35:33.019038874 +0000 UTC m=+1200.807120012" watchObservedRunningTime="2025-12-10 12:35:33.030442247 +0000 UTC m=+1200.818523385" Dec 10 12:35:33 crc kubenswrapper[4689]: I1210 12:35:33.046490 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-67dc569cfc-xmxfj" podStartSLOduration=4.046473054 podStartE2EDuration="4.046473054s" podCreationTimestamp="2025-12-10 12:35:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 12:35:33.04142164 +0000 UTC m=+1200.829502778" watchObservedRunningTime="2025-12-10 12:35:33.046473054 +0000 UTC m=+1200.834554192" Dec 10 12:35:33 crc kubenswrapper[4689]: I1210 
12:35:33.592258 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-74c576f5cb-ljzfq"] Dec 10 12:35:33 crc kubenswrapper[4689]: I1210 12:35:33.593694 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-74c576f5cb-ljzfq" Dec 10 12:35:33 crc kubenswrapper[4689]: I1210 12:35:33.596862 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-internal-svc" Dec 10 12:35:33 crc kubenswrapper[4689]: I1210 12:35:33.597643 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-public-svc" Dec 10 12:35:33 crc kubenswrapper[4689]: I1210 12:35:33.618429 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-74c576f5cb-ljzfq"] Dec 10 12:35:33 crc kubenswrapper[4689]: I1210 12:35:33.780302 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/159bb08c-9220-4d78-9b24-4b8293139a23-config-data\") pod \"barbican-api-74c576f5cb-ljzfq\" (UID: \"159bb08c-9220-4d78-9b24-4b8293139a23\") " pod="openstack/barbican-api-74c576f5cb-ljzfq" Dec 10 12:35:33 crc kubenswrapper[4689]: I1210 12:35:33.780489 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/159bb08c-9220-4d78-9b24-4b8293139a23-logs\") pod \"barbican-api-74c576f5cb-ljzfq\" (UID: \"159bb08c-9220-4d78-9b24-4b8293139a23\") " pod="openstack/barbican-api-74c576f5cb-ljzfq" Dec 10 12:35:33 crc kubenswrapper[4689]: I1210 12:35:33.780650 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/159bb08c-9220-4d78-9b24-4b8293139a23-config-data-custom\") pod \"barbican-api-74c576f5cb-ljzfq\" (UID: \"159bb08c-9220-4d78-9b24-4b8293139a23\") " pod="openstack/barbican-api-74c576f5cb-ljzfq" Dec 10 12:35:33 crc kubenswrapper[4689]: I1210 12:35:33.780809 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g2hxs\" (UniqueName: \"kubernetes.io/projected/159bb08c-9220-4d78-9b24-4b8293139a23-kube-api-access-g2hxs\") pod \"barbican-api-74c576f5cb-ljzfq\" (UID: \"159bb08c-9220-4d78-9b24-4b8293139a23\") " pod="openstack/barbican-api-74c576f5cb-ljzfq" Dec 10 12:35:33 crc kubenswrapper[4689]: I1210 12:35:33.780843 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/159bb08c-9220-4d78-9b24-4b8293139a23-combined-ca-bundle\") pod \"barbican-api-74c576f5cb-ljzfq\" (UID: \"159bb08c-9220-4d78-9b24-4b8293139a23\") " pod="openstack/barbican-api-74c576f5cb-ljzfq" Dec 10 12:35:33 crc kubenswrapper[4689]: I1210 12:35:33.780934 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/159bb08c-9220-4d78-9b24-4b8293139a23-internal-tls-certs\") pod \"barbican-api-74c576f5cb-ljzfq\" (UID: \"159bb08c-9220-4d78-9b24-4b8293139a23\") " pod="openstack/barbican-api-74c576f5cb-ljzfq" Dec 10 12:35:33 crc kubenswrapper[4689]: I1210 12:35:33.781076 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/159bb08c-9220-4d78-9b24-4b8293139a23-public-tls-certs\") pod 
\"barbican-api-74c576f5cb-ljzfq\" (UID: \"159bb08c-9220-4d78-9b24-4b8293139a23\") " pod="openstack/barbican-api-74c576f5cb-ljzfq" Dec 10 12:35:33 crc kubenswrapper[4689]: I1210 12:35:33.882261 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/159bb08c-9220-4d78-9b24-4b8293139a23-logs\") pod \"barbican-api-74c576f5cb-ljzfq\" (UID: \"159bb08c-9220-4d78-9b24-4b8293139a23\") " pod="openstack/barbican-api-74c576f5cb-ljzfq" Dec 10 12:35:33 crc kubenswrapper[4689]: I1210 12:35:33.882600 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/159bb08c-9220-4d78-9b24-4b8293139a23-config-data-custom\") pod \"barbican-api-74c576f5cb-ljzfq\" (UID: \"159bb08c-9220-4d78-9b24-4b8293139a23\") " pod="openstack/barbican-api-74c576f5cb-ljzfq" Dec 10 12:35:33 crc kubenswrapper[4689]: I1210 12:35:33.882667 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g2hxs\" (UniqueName: \"kubernetes.io/projected/159bb08c-9220-4d78-9b24-4b8293139a23-kube-api-access-g2hxs\") pod \"barbican-api-74c576f5cb-ljzfq\" (UID: \"159bb08c-9220-4d78-9b24-4b8293139a23\") " pod="openstack/barbican-api-74c576f5cb-ljzfq" Dec 10 12:35:33 crc kubenswrapper[4689]: I1210 12:35:33.882693 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/159bb08c-9220-4d78-9b24-4b8293139a23-combined-ca-bundle\") pod \"barbican-api-74c576f5cb-ljzfq\" (UID: \"159bb08c-9220-4d78-9b24-4b8293139a23\") " pod="openstack/barbican-api-74c576f5cb-ljzfq" Dec 10 12:35:33 crc kubenswrapper[4689]: I1210 12:35:33.882688 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/159bb08c-9220-4d78-9b24-4b8293139a23-logs\") pod \"barbican-api-74c576f5cb-ljzfq\" (UID: \"159bb08c-9220-4d78-9b24-4b8293139a23\") " pod="openstack/barbican-api-74c576f5cb-ljzfq" Dec 10 12:35:33 crc kubenswrapper[4689]: I1210 12:35:33.882791 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/159bb08c-9220-4d78-9b24-4b8293139a23-internal-tls-certs\") pod \"barbican-api-74c576f5cb-ljzfq\" (UID: \"159bb08c-9220-4d78-9b24-4b8293139a23\") " pod="openstack/barbican-api-74c576f5cb-ljzfq" Dec 10 12:35:33 crc kubenswrapper[4689]: I1210 12:35:33.882830 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/159bb08c-9220-4d78-9b24-4b8293139a23-public-tls-certs\") pod \"barbican-api-74c576f5cb-ljzfq\" (UID: \"159bb08c-9220-4d78-9b24-4b8293139a23\") " pod="openstack/barbican-api-74c576f5cb-ljzfq" Dec 10 12:35:33 crc kubenswrapper[4689]: I1210 12:35:33.882871 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/159bb08c-9220-4d78-9b24-4b8293139a23-config-data\") pod \"barbican-api-74c576f5cb-ljzfq\" (UID: \"159bb08c-9220-4d78-9b24-4b8293139a23\") " pod="openstack/barbican-api-74c576f5cb-ljzfq" Dec 10 12:35:33 crc kubenswrapper[4689]: I1210 12:35:33.889416 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/159bb08c-9220-4d78-9b24-4b8293139a23-combined-ca-bundle\") pod \"barbican-api-74c576f5cb-ljzfq\" (UID: 
\"159bb08c-9220-4d78-9b24-4b8293139a23\") " pod="openstack/barbican-api-74c576f5cb-ljzfq" Dec 10 12:35:33 crc kubenswrapper[4689]: I1210 12:35:33.889790 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/159bb08c-9220-4d78-9b24-4b8293139a23-config-data-custom\") pod \"barbican-api-74c576f5cb-ljzfq\" (UID: \"159bb08c-9220-4d78-9b24-4b8293139a23\") " pod="openstack/barbican-api-74c576f5cb-ljzfq" Dec 10 12:35:33 crc kubenswrapper[4689]: I1210 12:35:33.891229 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/159bb08c-9220-4d78-9b24-4b8293139a23-internal-tls-certs\") pod \"barbican-api-74c576f5cb-ljzfq\" (UID: \"159bb08c-9220-4d78-9b24-4b8293139a23\") " pod="openstack/barbican-api-74c576f5cb-ljzfq" Dec 10 12:35:33 crc kubenswrapper[4689]: I1210 12:35:33.909598 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/159bb08c-9220-4d78-9b24-4b8293139a23-config-data\") pod \"barbican-api-74c576f5cb-ljzfq\" (UID: \"159bb08c-9220-4d78-9b24-4b8293139a23\") " pod="openstack/barbican-api-74c576f5cb-ljzfq" Dec 10 12:35:33 crc kubenswrapper[4689]: I1210 12:35:33.910403 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/159bb08c-9220-4d78-9b24-4b8293139a23-public-tls-certs\") pod \"barbican-api-74c576f5cb-ljzfq\" (UID: \"159bb08c-9220-4d78-9b24-4b8293139a23\") " pod="openstack/barbican-api-74c576f5cb-ljzfq" Dec 10 12:35:33 crc kubenswrapper[4689]: I1210 12:35:33.920408 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g2hxs\" (UniqueName: \"kubernetes.io/projected/159bb08c-9220-4d78-9b24-4b8293139a23-kube-api-access-g2hxs\") pod \"barbican-api-74c576f5cb-ljzfq\" (UID: \"159bb08c-9220-4d78-9b24-4b8293139a23\") " pod="openstack/barbican-api-74c576f5cb-ljzfq" Dec 10 12:35:33 crc kubenswrapper[4689]: I1210 12:35:33.946428 4689 util.go:30] "No sandbox for pod can be found. 
Dec 10 12:35:33 crc kubenswrapper[4689]: I1210 12:35:33.946428 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-74c576f5cb-ljzfq"
Dec 10 12:35:34 crc kubenswrapper[4689]: I1210 12:35:34.022088 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-67dc569cfc-xmxfj"
Dec 10 12:35:36 crc kubenswrapper[4689]: I1210 12:35:36.045744 4689 generic.go:334] "Generic (PLEG): container finished" podID="eb62506f-d5a5-44b0-8da3-125128211e10" containerID="c7ca18573d3037aceaba7ed840f7aed25cb41aefc67fd11389af7aa62e606c8b" exitCode=0
Dec 10 12:35:36 crc kubenswrapper[4689]: I1210 12:35:36.045887 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-57ffb" event={"ID":"eb62506f-d5a5-44b0-8da3-125128211e10","Type":"ContainerDied","Data":"c7ca18573d3037aceaba7ed840f7aed25cb41aefc67fd11389af7aa62e606c8b"}
Dec 10 12:35:37 crc kubenswrapper[4689]: I1210 12:35:37.810591 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-75c8ddd69c-x4qc2"
Dec 10 12:35:37 crc kubenswrapper[4689]: I1210 12:35:37.879277 4689 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8b5c85b87-w7sjh"]
Dec 10 12:35:37 crc kubenswrapper[4689]: I1210 12:35:37.879709 4689 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-8b5c85b87-w7sjh" podUID="edfb6012-77c8-4a57-b217-19089c4a9d17" containerName="dnsmasq-dns" containerID="cri-o://7b3ca4d4490995ce3c4cdd048536970eb37033cd4e5cd5010e2ca0532d1d16ca" gracePeriod=10
Dec 10 12:35:38 crc kubenswrapper[4689]: I1210 12:35:38.073138 4689 generic.go:334] "Generic (PLEG): container finished" podID="574d5244-06f3-49f9-b8b8-93bd57d4fc35" containerID="11599a88a6ff52bec2ccbea1860cf6463f5692cad44368c00d008c96f2b97990" exitCode=0
Dec 10 12:35:38 crc kubenswrapper[4689]: I1210 12:35:38.073174 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-db-sync-jz2g4" event={"ID":"574d5244-06f3-49f9-b8b8-93bd57d4fc35","Type":"ContainerDied","Data":"11599a88a6ff52bec2ccbea1860cf6463f5692cad44368c00d008c96f2b97990"}
Dec 10 12:35:38 crc kubenswrapper[4689]: I1210 12:35:38.426459 4689 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-57ffb"
Dec 10 12:35:38 crc kubenswrapper[4689]: I1210 12:35:38.491129 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lt4h9\" (UniqueName: \"kubernetes.io/projected/eb62506f-d5a5-44b0-8da3-125128211e10-kube-api-access-lt4h9\") pod \"eb62506f-d5a5-44b0-8da3-125128211e10\" (UID: \"eb62506f-d5a5-44b0-8da3-125128211e10\") "
Dec 10 12:35:38 crc kubenswrapper[4689]: I1210 12:35:38.491200 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/eb62506f-d5a5-44b0-8da3-125128211e10-scripts\") pod \"eb62506f-d5a5-44b0-8da3-125128211e10\" (UID: \"eb62506f-d5a5-44b0-8da3-125128211e10\") "
Dec 10 12:35:38 crc kubenswrapper[4689]: I1210 12:35:38.491252 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eb62506f-d5a5-44b0-8da3-125128211e10-combined-ca-bundle\") pod \"eb62506f-d5a5-44b0-8da3-125128211e10\" (UID: \"eb62506f-d5a5-44b0-8da3-125128211e10\") "
Dec 10 12:35:38 crc kubenswrapper[4689]: I1210 12:35:38.491323 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eb62506f-d5a5-44b0-8da3-125128211e10-config-data\") pod \"eb62506f-d5a5-44b0-8da3-125128211e10\" (UID: \"eb62506f-d5a5-44b0-8da3-125128211e10\") "
Dec 10 12:35:38 crc kubenswrapper[4689]: I1210 12:35:38.491343 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/eb62506f-d5a5-44b0-8da3-125128211e10-db-sync-config-data\") pod \"eb62506f-d5a5-44b0-8da3-125128211e10\" (UID: \"eb62506f-d5a5-44b0-8da3-125128211e10\") "
Dec 10 12:35:38 crc kubenswrapper[4689]: I1210 12:35:38.491358 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/eb62506f-d5a5-44b0-8da3-125128211e10-etc-machine-id\") pod \"eb62506f-d5a5-44b0-8da3-125128211e10\" (UID: \"eb62506f-d5a5-44b0-8da3-125128211e10\") "
Dec 10 12:35:38 crc kubenswrapper[4689]: I1210 12:35:38.491730 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/eb62506f-d5a5-44b0-8da3-125128211e10-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "eb62506f-d5a5-44b0-8da3-125128211e10" (UID: "eb62506f-d5a5-44b0-8da3-125128211e10"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 10 12:35:38 crc kubenswrapper[4689]: I1210 12:35:38.498028 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eb62506f-d5a5-44b0-8da3-125128211e10-kube-api-access-lt4h9" (OuterVolumeSpecName: "kube-api-access-lt4h9") pod "eb62506f-d5a5-44b0-8da3-125128211e10" (UID: "eb62506f-d5a5-44b0-8da3-125128211e10"). InnerVolumeSpecName "kube-api-access-lt4h9". PluginName "kubernetes.io/projected", VolumeGidValue ""
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 12:35:38 crc kubenswrapper[4689]: I1210 12:35:38.512470 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eb62506f-d5a5-44b0-8da3-125128211e10-scripts" (OuterVolumeSpecName: "scripts") pod "eb62506f-d5a5-44b0-8da3-125128211e10" (UID: "eb62506f-d5a5-44b0-8da3-125128211e10"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 12:35:38 crc kubenswrapper[4689]: I1210 12:35:38.536112 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eb62506f-d5a5-44b0-8da3-125128211e10-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "eb62506f-d5a5-44b0-8da3-125128211e10" (UID: "eb62506f-d5a5-44b0-8da3-125128211e10"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 12:35:38 crc kubenswrapper[4689]: I1210 12:35:38.562560 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eb62506f-d5a5-44b0-8da3-125128211e10-config-data" (OuterVolumeSpecName: "config-data") pod "eb62506f-d5a5-44b0-8da3-125128211e10" (UID: "eb62506f-d5a5-44b0-8da3-125128211e10"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 12:35:38 crc kubenswrapper[4689]: I1210 12:35:38.596661 4689 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lt4h9\" (UniqueName: \"kubernetes.io/projected/eb62506f-d5a5-44b0-8da3-125128211e10-kube-api-access-lt4h9\") on node \"crc\" DevicePath \"\"" Dec 10 12:35:38 crc kubenswrapper[4689]: I1210 12:35:38.596988 4689 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/eb62506f-d5a5-44b0-8da3-125128211e10-scripts\") on node \"crc\" DevicePath \"\"" Dec 10 12:35:38 crc kubenswrapper[4689]: I1210 12:35:38.596998 4689 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eb62506f-d5a5-44b0-8da3-125128211e10-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 10 12:35:38 crc kubenswrapper[4689]: I1210 12:35:38.597008 4689 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eb62506f-d5a5-44b0-8da3-125128211e10-config-data\") on node \"crc\" DevicePath \"\"" Dec 10 12:35:38 crc kubenswrapper[4689]: I1210 12:35:38.597020 4689 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/eb62506f-d5a5-44b0-8da3-125128211e10-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Dec 10 12:35:38 crc kubenswrapper[4689]: I1210 12:35:38.597031 4689 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/eb62506f-d5a5-44b0-8da3-125128211e10-etc-machine-id\") on node \"crc\" DevicePath \"\"" Dec 10 12:35:38 crc kubenswrapper[4689]: I1210 12:35:38.620861 4689 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-8b5c85b87-w7sjh" Dec 10 12:35:38 crc kubenswrapper[4689]: I1210 12:35:38.698357 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-smfpg\" (UniqueName: \"kubernetes.io/projected/edfb6012-77c8-4a57-b217-19089c4a9d17-kube-api-access-smfpg\") pod \"edfb6012-77c8-4a57-b217-19089c4a9d17\" (UID: \"edfb6012-77c8-4a57-b217-19089c4a9d17\") " Dec 10 12:35:38 crc kubenswrapper[4689]: I1210 12:35:38.698407 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/edfb6012-77c8-4a57-b217-19089c4a9d17-config\") pod \"edfb6012-77c8-4a57-b217-19089c4a9d17\" (UID: \"edfb6012-77c8-4a57-b217-19089c4a9d17\") " Dec 10 12:35:38 crc kubenswrapper[4689]: I1210 12:35:38.698435 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/edfb6012-77c8-4a57-b217-19089c4a9d17-dns-svc\") pod \"edfb6012-77c8-4a57-b217-19089c4a9d17\" (UID: \"edfb6012-77c8-4a57-b217-19089c4a9d17\") " Dec 10 12:35:38 crc kubenswrapper[4689]: I1210 12:35:38.698510 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/edfb6012-77c8-4a57-b217-19089c4a9d17-dns-swift-storage-0\") pod \"edfb6012-77c8-4a57-b217-19089c4a9d17\" (UID: \"edfb6012-77c8-4a57-b217-19089c4a9d17\") " Dec 10 12:35:38 crc kubenswrapper[4689]: I1210 12:35:38.698541 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/edfb6012-77c8-4a57-b217-19089c4a9d17-ovsdbserver-nb\") pod \"edfb6012-77c8-4a57-b217-19089c4a9d17\" (UID: \"edfb6012-77c8-4a57-b217-19089c4a9d17\") " Dec 10 12:35:38 crc kubenswrapper[4689]: I1210 12:35:38.698595 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/edfb6012-77c8-4a57-b217-19089c4a9d17-ovsdbserver-sb\") pod \"edfb6012-77c8-4a57-b217-19089c4a9d17\" (UID: \"edfb6012-77c8-4a57-b217-19089c4a9d17\") " Dec 10 12:35:38 crc kubenswrapper[4689]: I1210 12:35:38.703791 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/edfb6012-77c8-4a57-b217-19089c4a9d17-kube-api-access-smfpg" (OuterVolumeSpecName: "kube-api-access-smfpg") pod "edfb6012-77c8-4a57-b217-19089c4a9d17" (UID: "edfb6012-77c8-4a57-b217-19089c4a9d17"). InnerVolumeSpecName "kube-api-access-smfpg". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 12:35:38 crc kubenswrapper[4689]: I1210 12:35:38.748517 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/edfb6012-77c8-4a57-b217-19089c4a9d17-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "edfb6012-77c8-4a57-b217-19089c4a9d17" (UID: "edfb6012-77c8-4a57-b217-19089c4a9d17"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 12:35:38 crc kubenswrapper[4689]: I1210 12:35:38.759296 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/edfb6012-77c8-4a57-b217-19089c4a9d17-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "edfb6012-77c8-4a57-b217-19089c4a9d17" (UID: "edfb6012-77c8-4a57-b217-19089c4a9d17"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 12:35:38 crc kubenswrapper[4689]: I1210 12:35:38.760981 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/edfb6012-77c8-4a57-b217-19089c4a9d17-config" (OuterVolumeSpecName: "config") pod "edfb6012-77c8-4a57-b217-19089c4a9d17" (UID: "edfb6012-77c8-4a57-b217-19089c4a9d17"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 12:35:38 crc kubenswrapper[4689]: I1210 12:35:38.775720 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/edfb6012-77c8-4a57-b217-19089c4a9d17-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "edfb6012-77c8-4a57-b217-19089c4a9d17" (UID: "edfb6012-77c8-4a57-b217-19089c4a9d17"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 12:35:38 crc kubenswrapper[4689]: I1210 12:35:38.776232 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/edfb6012-77c8-4a57-b217-19089c4a9d17-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "edfb6012-77c8-4a57-b217-19089c4a9d17" (UID: "edfb6012-77c8-4a57-b217-19089c4a9d17"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 12:35:38 crc kubenswrapper[4689]: I1210 12:35:38.800762 4689 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/edfb6012-77c8-4a57-b217-19089c4a9d17-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Dec 10 12:35:38 crc kubenswrapper[4689]: I1210 12:35:38.800802 4689 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/edfb6012-77c8-4a57-b217-19089c4a9d17-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 10 12:35:38 crc kubenswrapper[4689]: I1210 12:35:38.800818 4689 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/edfb6012-77c8-4a57-b217-19089c4a9d17-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 10 12:35:38 crc kubenswrapper[4689]: I1210 12:35:38.800830 4689 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-smfpg\" (UniqueName: \"kubernetes.io/projected/edfb6012-77c8-4a57-b217-19089c4a9d17-kube-api-access-smfpg\") on node \"crc\" DevicePath \"\"" Dec 10 12:35:38 crc kubenswrapper[4689]: I1210 12:35:38.800842 4689 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/edfb6012-77c8-4a57-b217-19089c4a9d17-config\") on node \"crc\" DevicePath \"\"" Dec 10 12:35:38 crc kubenswrapper[4689]: I1210 12:35:38.800854 4689 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/edfb6012-77c8-4a57-b217-19089c4a9d17-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 10 12:35:38 crc kubenswrapper[4689]: I1210 12:35:38.953939 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-74c576f5cb-ljzfq"] Dec 10 12:35:38 crc kubenswrapper[4689]: W1210 12:35:38.955272 4689 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod159bb08c_9220_4d78_9b24_4b8293139a23.slice/crio-c641ecd531de9f76279079cda7eb0085e6444be94094792fdaf74cea69f8c944 WatchSource:0}: Error finding container c641ecd531de9f76279079cda7eb0085e6444be94094792fdaf74cea69f8c944: Status 404 
returned error can't find the container with id c641ecd531de9f76279079cda7eb0085e6444be94094792fdaf74cea69f8c944 Dec 10 12:35:39 crc kubenswrapper[4689]: I1210 12:35:39.089378 4689 generic.go:334] "Generic (PLEG): container finished" podID="edfb6012-77c8-4a57-b217-19089c4a9d17" containerID="7b3ca4d4490995ce3c4cdd048536970eb37033cd4e5cd5010e2ca0532d1d16ca" exitCode=0 Dec 10 12:35:39 crc kubenswrapper[4689]: I1210 12:35:39.089634 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8b5c85b87-w7sjh" event={"ID":"edfb6012-77c8-4a57-b217-19089c4a9d17","Type":"ContainerDied","Data":"7b3ca4d4490995ce3c4cdd048536970eb37033cd4e5cd5010e2ca0532d1d16ca"} Dec 10 12:35:39 crc kubenswrapper[4689]: I1210 12:35:39.089785 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8b5c85b87-w7sjh" event={"ID":"edfb6012-77c8-4a57-b217-19089c4a9d17","Type":"ContainerDied","Data":"e5103f1405f3349499dd1e259da3505ccba697733f4f2035f0fccb2eef42baed"} Dec 10 12:35:39 crc kubenswrapper[4689]: I1210 12:35:39.089810 4689 scope.go:117] "RemoveContainer" containerID="7b3ca4d4490995ce3c4cdd048536970eb37033cd4e5cd5010e2ca0532d1d16ca" Dec 10 12:35:39 crc kubenswrapper[4689]: I1210 12:35:39.090208 4689 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8b5c85b87-w7sjh" Dec 10 12:35:39 crc kubenswrapper[4689]: I1210 12:35:39.095911 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1dfb9cf4-7f75-40c2-ade2-81a91be5ad12","Type":"ContainerStarted","Data":"000713094db390ebb45420ebab5003c479fcbd14372fd96c24a71e5573905369"} Dec 10 12:35:39 crc kubenswrapper[4689]: I1210 12:35:39.112680 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-74c576f5cb-ljzfq" event={"ID":"159bb08c-9220-4d78-9b24-4b8293139a23","Type":"ContainerStarted","Data":"c641ecd531de9f76279079cda7eb0085e6444be94094792fdaf74cea69f8c944"} Dec 10 12:35:39 crc kubenswrapper[4689]: I1210 12:35:39.121313 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-57ffb" event={"ID":"eb62506f-d5a5-44b0-8da3-125128211e10","Type":"ContainerDied","Data":"8b096f3774721e38ee6a5876140a433187484ff3a18bc6ad775acab3fad684ef"} Dec 10 12:35:39 crc kubenswrapper[4689]: I1210 12:35:39.132185 4689 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8b096f3774721e38ee6a5876140a433187484ff3a18bc6ad775acab3fad684ef" Dec 10 12:35:39 crc kubenswrapper[4689]: I1210 12:35:39.121347 4689 util.go:48] "No ready sandbox for pod can be found. 
Dec 10 12:35:39 crc kubenswrapper[4689]: I1210 12:35:39.121347 4689 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-57ffb"
Dec 10 12:35:39 crc kubenswrapper[4689]: I1210 12:35:39.134737 4689 scope.go:117] "RemoveContainer" containerID="62247004df63262efca8e0474093d3f91d333703b1b40580b8bfbce76fa1efa8"
Dec 10 12:35:39 crc kubenswrapper[4689]: I1210 12:35:39.164174 4689 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8b5c85b87-w7sjh"]
Dec 10 12:35:39 crc kubenswrapper[4689]: I1210 12:35:39.174331 4689 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-8b5c85b87-w7sjh"]
Dec 10 12:35:39 crc kubenswrapper[4689]: I1210 12:35:39.299570 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-7b98b8dd66-xfv7n"
Dec 10 12:35:39 crc kubenswrapper[4689]: I1210 12:35:39.438583 4689 scope.go:117] "RemoveContainer" containerID="7b3ca4d4490995ce3c4cdd048536970eb37033cd4e5cd5010e2ca0532d1d16ca"
Dec 10 12:35:39 crc kubenswrapper[4689]: E1210 12:35:39.446714 4689 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7b3ca4d4490995ce3c4cdd048536970eb37033cd4e5cd5010e2ca0532d1d16ca\": container with ID starting with 7b3ca4d4490995ce3c4cdd048536970eb37033cd4e5cd5010e2ca0532d1d16ca not found: ID does not exist" containerID="7b3ca4d4490995ce3c4cdd048536970eb37033cd4e5cd5010e2ca0532d1d16ca"
Dec 10 12:35:39 crc kubenswrapper[4689]: I1210 12:35:39.446806 4689 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7b3ca4d4490995ce3c4cdd048536970eb37033cd4e5cd5010e2ca0532d1d16ca"} err="failed to get container status \"7b3ca4d4490995ce3c4cdd048536970eb37033cd4e5cd5010e2ca0532d1d16ca\": rpc error: code = NotFound desc = could not find container \"7b3ca4d4490995ce3c4cdd048536970eb37033cd4e5cd5010e2ca0532d1d16ca\": container with ID starting with 7b3ca4d4490995ce3c4cdd048536970eb37033cd4e5cd5010e2ca0532d1d16ca not found: ID does not exist"
Dec 10 12:35:39 crc kubenswrapper[4689]: I1210 12:35:39.446848 4689 scope.go:117] "RemoveContainer" containerID="62247004df63262efca8e0474093d3f91d333703b1b40580b8bfbce76fa1efa8"
Dec 10 12:35:39 crc kubenswrapper[4689]: E1210 12:35:39.457449 4689 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"62247004df63262efca8e0474093d3f91d333703b1b40580b8bfbce76fa1efa8\": container with ID starting with 62247004df63262efca8e0474093d3f91d333703b1b40580b8bfbce76fa1efa8 not found: ID does not exist" containerID="62247004df63262efca8e0474093d3f91d333703b1b40580b8bfbce76fa1efa8"
Dec 10 12:35:39 crc kubenswrapper[4689]: I1210 12:35:39.457498 4689 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"62247004df63262efca8e0474093d3f91d333703b1b40580b8bfbce76fa1efa8"} err="failed to get container status \"62247004df63262efca8e0474093d3f91d333703b1b40580b8bfbce76fa1efa8\": rpc error: code = NotFound desc = could not find container \"62247004df63262efca8e0474093d3f91d333703b1b40580b8bfbce76fa1efa8\": container with ID starting with 62247004df63262efca8e0474093d3f91d333703b1b40580b8bfbce76fa1efa8 not found: ID does not exist"
Dec 10 12:35:39 crc kubenswrapper[4689]: I1210 12:35:39.551611 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-7b98b8dd66-xfv7n"
Dec 10 12:35:39 crc kubenswrapper[4689]: I1210 12:35:39.616534 4689 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-db-sync-jz2g4"
Dec 10 12:35:39 crc kubenswrapper[4689]: I1210 12:35:39.717790 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/574d5244-06f3-49f9-b8b8-93bd57d4fc35-config-data\") pod \"574d5244-06f3-49f9-b8b8-93bd57d4fc35\" (UID: \"574d5244-06f3-49f9-b8b8-93bd57d4fc35\") "
Dec 10 12:35:39 crc kubenswrapper[4689]: I1210 12:35:39.717852 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/574d5244-06f3-49f9-b8b8-93bd57d4fc35-etc-podinfo\") pod \"574d5244-06f3-49f9-b8b8-93bd57d4fc35\" (UID: \"574d5244-06f3-49f9-b8b8-93bd57d4fc35\") "
Dec 10 12:35:39 crc kubenswrapper[4689]: I1210 12:35:39.738825 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/574d5244-06f3-49f9-b8b8-93bd57d4fc35-etc-podinfo" (OuterVolumeSpecName: "etc-podinfo") pod "574d5244-06f3-49f9-b8b8-93bd57d4fc35" (UID: "574d5244-06f3-49f9-b8b8-93bd57d4fc35"). InnerVolumeSpecName "etc-podinfo". PluginName "kubernetes.io/downward-api", VolumeGidValue ""
Dec 10 12:35:39 crc kubenswrapper[4689]: I1210 12:35:39.767922 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"]
Dec 10 12:35:39 crc kubenswrapper[4689]: E1210 12:35:39.768562 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="edfb6012-77c8-4a57-b217-19089c4a9d17" containerName="init"
Dec 10 12:35:39 crc kubenswrapper[4689]: I1210 12:35:39.768601 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="edfb6012-77c8-4a57-b217-19089c4a9d17" containerName="init"
Dec 10 12:35:39 crc kubenswrapper[4689]: E1210 12:35:39.768613 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eb62506f-d5a5-44b0-8da3-125128211e10" containerName="cinder-db-sync"
Dec 10 12:35:39 crc kubenswrapper[4689]: I1210 12:35:39.768621 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="eb62506f-d5a5-44b0-8da3-125128211e10" containerName="cinder-db-sync"
Dec 10 12:35:39 crc kubenswrapper[4689]: E1210 12:35:39.768644 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="574d5244-06f3-49f9-b8b8-93bd57d4fc35" containerName="ironic-db-sync"
Dec 10 12:35:39 crc kubenswrapper[4689]: I1210 12:35:39.768671 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="574d5244-06f3-49f9-b8b8-93bd57d4fc35" containerName="ironic-db-sync"
Dec 10 12:35:39 crc kubenswrapper[4689]: E1210 12:35:39.768682 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="574d5244-06f3-49f9-b8b8-93bd57d4fc35" containerName="init"
Dec 10 12:35:39 crc kubenswrapper[4689]: I1210 12:35:39.768687 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="574d5244-06f3-49f9-b8b8-93bd57d4fc35" containerName="init"
Dec 10 12:35:39 crc kubenswrapper[4689]: E1210 12:35:39.768701 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="edfb6012-77c8-4a57-b217-19089c4a9d17" containerName="dnsmasq-dns"
Dec 10 12:35:39 crc kubenswrapper[4689]: I1210 12:35:39.768708 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="edfb6012-77c8-4a57-b217-19089c4a9d17" containerName="dnsmasq-dns"
Dec 10 12:35:39 crc kubenswrapper[4689]: I1210 12:35:39.768939 4689 memory_manager.go:354] "RemoveStaleState removing state" podUID="edfb6012-77c8-4a57-b217-19089c4a9d17" containerName="dnsmasq-dns"
Dec 10 12:35:39 crc kubenswrapper[4689]: I1210 12:35:39.768955 4689 memory_manager.go:354] "RemoveStaleState removing state" podUID="eb62506f-d5a5-44b0-8da3-125128211e10" containerName="cinder-db-sync"
Dec 10 12:35:39 crc kubenswrapper[4689]: I1210 12:35:39.768988 4689 memory_manager.go:354] "RemoveStaleState removing state" podUID="574d5244-06f3-49f9-b8b8-93bd57d4fc35" containerName="ironic-db-sync"
Dec 10 12:35:39 crc kubenswrapper[4689]: I1210 12:35:39.775040 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0"
Dec 10 12:35:39 crc kubenswrapper[4689]: I1210 12:35:39.777726 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data"
Dec 10 12:35:39 crc kubenswrapper[4689]: I1210 12:35:39.777982 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data"
Dec 10 12:35:39 crc kubenswrapper[4689]: I1210 12:35:39.779512 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-jlpdr"
Dec 10 12:35:39 crc kubenswrapper[4689]: I1210 12:35:39.787862 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts"
Dec 10 12:35:39 crc kubenswrapper[4689]: I1210 12:35:39.788105 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/574d5244-06f3-49f9-b8b8-93bd57d4fc35-config-data" (OuterVolumeSpecName: "config-data") pod "574d5244-06f3-49f9-b8b8-93bd57d4fc35" (UID: "574d5244-06f3-49f9-b8b8-93bd57d4fc35"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 10 12:35:39 crc kubenswrapper[4689]: I1210 12:35:39.808426 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"]
Dec 10 12:35:39 crc kubenswrapper[4689]: I1210 12:35:39.822479 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/574d5244-06f3-49f9-b8b8-93bd57d4fc35-scripts\") pod \"574d5244-06f3-49f9-b8b8-93bd57d4fc35\" (UID: \"574d5244-06f3-49f9-b8b8-93bd57d4fc35\") "
Dec 10 12:35:39 crc kubenswrapper[4689]: I1210 12:35:39.822521 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xphhh\" (UniqueName: \"kubernetes.io/projected/574d5244-06f3-49f9-b8b8-93bd57d4fc35-kube-api-access-xphhh\") pod \"574d5244-06f3-49f9-b8b8-93bd57d4fc35\" (UID: \"574d5244-06f3-49f9-b8b8-93bd57d4fc35\") "
Dec 10 12:35:39 crc kubenswrapper[4689]: I1210 12:35:39.822541 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/574d5244-06f3-49f9-b8b8-93bd57d4fc35-config-data-merged\") pod \"574d5244-06f3-49f9-b8b8-93bd57d4fc35\" (UID: \"574d5244-06f3-49f9-b8b8-93bd57d4fc35\") "
Dec 10 12:35:39 crc kubenswrapper[4689]: I1210 12:35:39.822597 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/574d5244-06f3-49f9-b8b8-93bd57d4fc35-combined-ca-bundle\") pod \"574d5244-06f3-49f9-b8b8-93bd57d4fc35\" (UID: \"574d5244-06f3-49f9-b8b8-93bd57d4fc35\") "
Dec 10 12:35:39 crc kubenswrapper[4689]: I1210 12:35:39.822942 4689 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/574d5244-06f3-49f9-b8b8-93bd57d4fc35-config-data\") on node \"crc\" DevicePath \"\""
Dec 10 12:35:39 crc kubenswrapper[4689]: I1210 12:35:39.822953 4689 reconciler_common.go:293] "Volume detached for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/574d5244-06f3-49f9-b8b8-93bd57d4fc35-etc-podinfo\") on node \"crc\" DevicePath \"\""
Dec 10 12:35:39 crc kubenswrapper[4689]: I1210 12:35:39.824313 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/574d5244-06f3-49f9-b8b8-93bd57d4fc35-config-data-merged" (OuterVolumeSpecName: "config-data-merged") pod "574d5244-06f3-49f9-b8b8-93bd57d4fc35" (UID: "574d5244-06f3-49f9-b8b8-93bd57d4fc35"). InnerVolumeSpecName "config-data-merged". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 10 12:35:39 crc kubenswrapper[4689]: I1210 12:35:39.833026 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/574d5244-06f3-49f9-b8b8-93bd57d4fc35-kube-api-access-xphhh" (OuterVolumeSpecName: "kube-api-access-xphhh") pod "574d5244-06f3-49f9-b8b8-93bd57d4fc35" (UID: "574d5244-06f3-49f9-b8b8-93bd57d4fc35"). InnerVolumeSpecName "kube-api-access-xphhh". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 10 12:35:39 crc kubenswrapper[4689]: I1210 12:35:39.839447 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5784cf869f-qlcgw"]
Dec 10 12:35:39 crc kubenswrapper[4689]: I1210 12:35:39.841881 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/574d5244-06f3-49f9-b8b8-93bd57d4fc35-scripts" (OuterVolumeSpecName: "scripts") pod "574d5244-06f3-49f9-b8b8-93bd57d4fc35" (UID: "574d5244-06f3-49f9-b8b8-93bd57d4fc35"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 10 12:35:39 crc kubenswrapper[4689]: I1210 12:35:39.849501 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5784cf869f-qlcgw"
Dec 10 12:35:39 crc kubenswrapper[4689]: I1210 12:35:39.852059 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5784cf869f-qlcgw"]
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 12:35:39 crc kubenswrapper[4689]: I1210 12:35:39.925197 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/06cf7890-6267-472d-a29e-c766c8009eda-dns-svc\") pod \"dnsmasq-dns-5784cf869f-qlcgw\" (UID: \"06cf7890-6267-472d-a29e-c766c8009eda\") " pod="openstack/dnsmasq-dns-5784cf869f-qlcgw" Dec 10 12:35:39 crc kubenswrapper[4689]: I1210 12:35:39.925256 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/82df75b3-4559-47c4-bc1a-25db8a50e8bf-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"82df75b3-4559-47c4-bc1a-25db8a50e8bf\") " pod="openstack/cinder-scheduler-0" Dec 10 12:35:39 crc kubenswrapper[4689]: I1210 12:35:39.925305 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/06cf7890-6267-472d-a29e-c766c8009eda-ovsdbserver-sb\") pod \"dnsmasq-dns-5784cf869f-qlcgw\" (UID: \"06cf7890-6267-472d-a29e-c766c8009eda\") " pod="openstack/dnsmasq-dns-5784cf869f-qlcgw" Dec 10 12:35:39 crc kubenswrapper[4689]: I1210 12:35:39.925338 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/06cf7890-6267-472d-a29e-c766c8009eda-config\") pod \"dnsmasq-dns-5784cf869f-qlcgw\" (UID: \"06cf7890-6267-472d-a29e-c766c8009eda\") " pod="openstack/dnsmasq-dns-5784cf869f-qlcgw" Dec 10 12:35:39 crc kubenswrapper[4689]: I1210 12:35:39.925367 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/06cf7890-6267-472d-a29e-c766c8009eda-dns-swift-storage-0\") pod \"dnsmasq-dns-5784cf869f-qlcgw\" (UID: \"06cf7890-6267-472d-a29e-c766c8009eda\") " pod="openstack/dnsmasq-dns-5784cf869f-qlcgw" Dec 10 12:35:39 crc kubenswrapper[4689]: I1210 12:35:39.925405 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/82df75b3-4559-47c4-bc1a-25db8a50e8bf-config-data\") pod \"cinder-scheduler-0\" (UID: \"82df75b3-4559-47c4-bc1a-25db8a50e8bf\") " pod="openstack/cinder-scheduler-0" Dec 10 12:35:39 crc kubenswrapper[4689]: I1210 12:35:39.925436 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-526qm\" (UniqueName: \"kubernetes.io/projected/06cf7890-6267-472d-a29e-c766c8009eda-kube-api-access-526qm\") pod \"dnsmasq-dns-5784cf869f-qlcgw\" (UID: \"06cf7890-6267-472d-a29e-c766c8009eda\") " pod="openstack/dnsmasq-dns-5784cf869f-qlcgw" Dec 10 12:35:39 crc kubenswrapper[4689]: I1210 12:35:39.925457 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/06cf7890-6267-472d-a29e-c766c8009eda-ovsdbserver-nb\") pod \"dnsmasq-dns-5784cf869f-qlcgw\" (UID: \"06cf7890-6267-472d-a29e-c766c8009eda\") " pod="openstack/dnsmasq-dns-5784cf869f-qlcgw" Dec 10 12:35:39 crc kubenswrapper[4689]: I1210 12:35:39.925511 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/82df75b3-4559-47c4-bc1a-25db8a50e8bf-etc-machine-id\") pod 
\"cinder-scheduler-0\" (UID: \"82df75b3-4559-47c4-bc1a-25db8a50e8bf\") " pod="openstack/cinder-scheduler-0" Dec 10 12:35:39 crc kubenswrapper[4689]: I1210 12:35:39.925546 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bcwj5\" (UniqueName: \"kubernetes.io/projected/82df75b3-4559-47c4-bc1a-25db8a50e8bf-kube-api-access-bcwj5\") pod \"cinder-scheduler-0\" (UID: \"82df75b3-4559-47c4-bc1a-25db8a50e8bf\") " pod="openstack/cinder-scheduler-0" Dec 10 12:35:39 crc kubenswrapper[4689]: I1210 12:35:39.925578 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/82df75b3-4559-47c4-bc1a-25db8a50e8bf-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"82df75b3-4559-47c4-bc1a-25db8a50e8bf\") " pod="openstack/cinder-scheduler-0" Dec 10 12:35:39 crc kubenswrapper[4689]: I1210 12:35:39.925601 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/82df75b3-4559-47c4-bc1a-25db8a50e8bf-scripts\") pod \"cinder-scheduler-0\" (UID: \"82df75b3-4559-47c4-bc1a-25db8a50e8bf\") " pod="openstack/cinder-scheduler-0" Dec 10 12:35:39 crc kubenswrapper[4689]: I1210 12:35:39.925668 4689 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/574d5244-06f3-49f9-b8b8-93bd57d4fc35-scripts\") on node \"crc\" DevicePath \"\"" Dec 10 12:35:39 crc kubenswrapper[4689]: I1210 12:35:39.925684 4689 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xphhh\" (UniqueName: \"kubernetes.io/projected/574d5244-06f3-49f9-b8b8-93bd57d4fc35-kube-api-access-xphhh\") on node \"crc\" DevicePath \"\"" Dec 10 12:35:39 crc kubenswrapper[4689]: I1210 12:35:39.925698 4689 reconciler_common.go:293] "Volume detached for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/574d5244-06f3-49f9-b8b8-93bd57d4fc35-config-data-merged\") on node \"crc\" DevicePath \"\"" Dec 10 12:35:39 crc kubenswrapper[4689]: I1210 12:35:39.925710 4689 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/574d5244-06f3-49f9-b8b8-93bd57d4fc35-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 10 12:35:40 crc kubenswrapper[4689]: I1210 12:35:40.027381 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Dec 10 12:35:40 crc kubenswrapper[4689]: I1210 12:35:40.028013 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/82df75b3-4559-47c4-bc1a-25db8a50e8bf-config-data\") pod \"cinder-scheduler-0\" (UID: \"82df75b3-4559-47c4-bc1a-25db8a50e8bf\") " pod="openstack/cinder-scheduler-0" Dec 10 12:35:40 crc kubenswrapper[4689]: I1210 12:35:40.028057 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-526qm\" (UniqueName: \"kubernetes.io/projected/06cf7890-6267-472d-a29e-c766c8009eda-kube-api-access-526qm\") pod \"dnsmasq-dns-5784cf869f-qlcgw\" (UID: \"06cf7890-6267-472d-a29e-c766c8009eda\") " pod="openstack/dnsmasq-dns-5784cf869f-qlcgw" Dec 10 12:35:40 crc kubenswrapper[4689]: I1210 12:35:40.028077 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/06cf7890-6267-472d-a29e-c766c8009eda-ovsdbserver-nb\") pod 
\"dnsmasq-dns-5784cf869f-qlcgw\" (UID: \"06cf7890-6267-472d-a29e-c766c8009eda\") " pod="openstack/dnsmasq-dns-5784cf869f-qlcgw" Dec 10 12:35:40 crc kubenswrapper[4689]: I1210 12:35:40.028117 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/82df75b3-4559-47c4-bc1a-25db8a50e8bf-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"82df75b3-4559-47c4-bc1a-25db8a50e8bf\") " pod="openstack/cinder-scheduler-0" Dec 10 12:35:40 crc kubenswrapper[4689]: I1210 12:35:40.028142 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bcwj5\" (UniqueName: \"kubernetes.io/projected/82df75b3-4559-47c4-bc1a-25db8a50e8bf-kube-api-access-bcwj5\") pod \"cinder-scheduler-0\" (UID: \"82df75b3-4559-47c4-bc1a-25db8a50e8bf\") " pod="openstack/cinder-scheduler-0" Dec 10 12:35:40 crc kubenswrapper[4689]: I1210 12:35:40.028164 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/82df75b3-4559-47c4-bc1a-25db8a50e8bf-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"82df75b3-4559-47c4-bc1a-25db8a50e8bf\") " pod="openstack/cinder-scheduler-0" Dec 10 12:35:40 crc kubenswrapper[4689]: I1210 12:35:40.028180 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/82df75b3-4559-47c4-bc1a-25db8a50e8bf-scripts\") pod \"cinder-scheduler-0\" (UID: \"82df75b3-4559-47c4-bc1a-25db8a50e8bf\") " pod="openstack/cinder-scheduler-0" Dec 10 12:35:40 crc kubenswrapper[4689]: I1210 12:35:40.028211 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/06cf7890-6267-472d-a29e-c766c8009eda-dns-svc\") pod \"dnsmasq-dns-5784cf869f-qlcgw\" (UID: \"06cf7890-6267-472d-a29e-c766c8009eda\") " pod="openstack/dnsmasq-dns-5784cf869f-qlcgw" Dec 10 12:35:40 crc kubenswrapper[4689]: I1210 12:35:40.028234 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/82df75b3-4559-47c4-bc1a-25db8a50e8bf-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"82df75b3-4559-47c4-bc1a-25db8a50e8bf\") " pod="openstack/cinder-scheduler-0" Dec 10 12:35:40 crc kubenswrapper[4689]: I1210 12:35:40.028261 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/06cf7890-6267-472d-a29e-c766c8009eda-ovsdbserver-sb\") pod \"dnsmasq-dns-5784cf869f-qlcgw\" (UID: \"06cf7890-6267-472d-a29e-c766c8009eda\") " pod="openstack/dnsmasq-dns-5784cf869f-qlcgw" Dec 10 12:35:40 crc kubenswrapper[4689]: I1210 12:35:40.028280 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/06cf7890-6267-472d-a29e-c766c8009eda-config\") pod \"dnsmasq-dns-5784cf869f-qlcgw\" (UID: \"06cf7890-6267-472d-a29e-c766c8009eda\") " pod="openstack/dnsmasq-dns-5784cf869f-qlcgw" Dec 10 12:35:40 crc kubenswrapper[4689]: I1210 12:35:40.028298 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/06cf7890-6267-472d-a29e-c766c8009eda-dns-swift-storage-0\") pod \"dnsmasq-dns-5784cf869f-qlcgw\" (UID: \"06cf7890-6267-472d-a29e-c766c8009eda\") " pod="openstack/dnsmasq-dns-5784cf869f-qlcgw" Dec 10 12:35:40 crc 
kubenswrapper[4689]: I1210 12:35:40.029048 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/06cf7890-6267-472d-a29e-c766c8009eda-dns-swift-storage-0\") pod \"dnsmasq-dns-5784cf869f-qlcgw\" (UID: \"06cf7890-6267-472d-a29e-c766c8009eda\") " pod="openstack/dnsmasq-dns-5784cf869f-qlcgw" Dec 10 12:35:40 crc kubenswrapper[4689]: I1210 12:35:40.029783 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/06cf7890-6267-472d-a29e-c766c8009eda-ovsdbserver-nb\") pod \"dnsmasq-dns-5784cf869f-qlcgw\" (UID: \"06cf7890-6267-472d-a29e-c766c8009eda\") " pod="openstack/dnsmasq-dns-5784cf869f-qlcgw" Dec 10 12:35:40 crc kubenswrapper[4689]: I1210 12:35:40.029824 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/82df75b3-4559-47c4-bc1a-25db8a50e8bf-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"82df75b3-4559-47c4-bc1a-25db8a50e8bf\") " pod="openstack/cinder-scheduler-0" Dec 10 12:35:40 crc kubenswrapper[4689]: I1210 12:35:40.029960 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Dec 10 12:35:40 crc kubenswrapper[4689]: I1210 12:35:40.033493 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/82df75b3-4559-47c4-bc1a-25db8a50e8bf-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"82df75b3-4559-47c4-bc1a-25db8a50e8bf\") " pod="openstack/cinder-scheduler-0" Dec 10 12:35:40 crc kubenswrapper[4689]: I1210 12:35:40.034136 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/06cf7890-6267-472d-a29e-c766c8009eda-ovsdbserver-sb\") pod \"dnsmasq-dns-5784cf869f-qlcgw\" (UID: \"06cf7890-6267-472d-a29e-c766c8009eda\") " pod="openstack/dnsmasq-dns-5784cf869f-qlcgw" Dec 10 12:35:40 crc kubenswrapper[4689]: I1210 12:35:40.034252 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/06cf7890-6267-472d-a29e-c766c8009eda-dns-svc\") pod \"dnsmasq-dns-5784cf869f-qlcgw\" (UID: \"06cf7890-6267-472d-a29e-c766c8009eda\") " pod="openstack/dnsmasq-dns-5784cf869f-qlcgw" Dec 10 12:35:40 crc kubenswrapper[4689]: I1210 12:35:40.034641 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/06cf7890-6267-472d-a29e-c766c8009eda-config\") pod \"dnsmasq-dns-5784cf869f-qlcgw\" (UID: \"06cf7890-6267-472d-a29e-c766c8009eda\") " pod="openstack/dnsmasq-dns-5784cf869f-qlcgw" Dec 10 12:35:40 crc kubenswrapper[4689]: I1210 12:35:40.038460 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/82df75b3-4559-47c4-bc1a-25db8a50e8bf-scripts\") pod \"cinder-scheduler-0\" (UID: \"82df75b3-4559-47c4-bc1a-25db8a50e8bf\") " pod="openstack/cinder-scheduler-0" Dec 10 12:35:40 crc kubenswrapper[4689]: I1210 12:35:40.039748 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Dec 10 12:35:40 crc kubenswrapper[4689]: I1210 12:35:40.044011 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/82df75b3-4559-47c4-bc1a-25db8a50e8bf-config-data\") pod \"cinder-scheduler-0\" (UID: 
\"82df75b3-4559-47c4-bc1a-25db8a50e8bf\") " pod="openstack/cinder-scheduler-0" Dec 10 12:35:40 crc kubenswrapper[4689]: I1210 12:35:40.051398 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Dec 10 12:35:40 crc kubenswrapper[4689]: I1210 12:35:40.054376 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-526qm\" (UniqueName: \"kubernetes.io/projected/06cf7890-6267-472d-a29e-c766c8009eda-kube-api-access-526qm\") pod \"dnsmasq-dns-5784cf869f-qlcgw\" (UID: \"06cf7890-6267-472d-a29e-c766c8009eda\") " pod="openstack/dnsmasq-dns-5784cf869f-qlcgw" Dec 10 12:35:40 crc kubenswrapper[4689]: I1210 12:35:40.055692 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bcwj5\" (UniqueName: \"kubernetes.io/projected/82df75b3-4559-47c4-bc1a-25db8a50e8bf-kube-api-access-bcwj5\") pod \"cinder-scheduler-0\" (UID: \"82df75b3-4559-47c4-bc1a-25db8a50e8bf\") " pod="openstack/cinder-scheduler-0" Dec 10 12:35:40 crc kubenswrapper[4689]: I1210 12:35:40.085904 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/82df75b3-4559-47c4-bc1a-25db8a50e8bf-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"82df75b3-4559-47c4-bc1a-25db8a50e8bf\") " pod="openstack/cinder-scheduler-0" Dec 10 12:35:40 crc kubenswrapper[4689]: I1210 12:35:40.149375 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-db-sync-jz2g4" event={"ID":"574d5244-06f3-49f9-b8b8-93bd57d4fc35","Type":"ContainerDied","Data":"f13285d3307bd7bd46a281f37cb8a619c1b3c5f14efa71162b8cb468e43ae83e"} Dec 10 12:35:40 crc kubenswrapper[4689]: I1210 12:35:40.149501 4689 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f13285d3307bd7bd46a281f37cb8a619c1b3c5f14efa71162b8cb468e43ae83e" Dec 10 12:35:40 crc kubenswrapper[4689]: I1210 12:35:40.149639 4689 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-db-sync-jz2g4" Dec 10 12:35:40 crc kubenswrapper[4689]: I1210 12:35:40.167456 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Dec 10 12:35:40 crc kubenswrapper[4689]: I1210 12:35:40.184063 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-74c576f5cb-ljzfq" event={"ID":"159bb08c-9220-4d78-9b24-4b8293139a23","Type":"ContainerStarted","Data":"3280a2838e24dba476df121ca6e43e889591573fef3fed8a04e2354cb4b30037"} Dec 10 12:35:40 crc kubenswrapper[4689]: I1210 12:35:40.184103 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-74c576f5cb-ljzfq" Dec 10 12:35:40 crc kubenswrapper[4689]: I1210 12:35:40.184114 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-74c576f5cb-ljzfq" Dec 10 12:35:40 crc kubenswrapper[4689]: I1210 12:35:40.184123 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-74c576f5cb-ljzfq" event={"ID":"159bb08c-9220-4d78-9b24-4b8293139a23","Type":"ContainerStarted","Data":"3c9a831e5187c93963c6a8b44755e5faacdc03040a7d7f672208f1a873cbadad"} Dec 10 12:35:40 crc kubenswrapper[4689]: I1210 12:35:40.184472 4689 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5784cf869f-qlcgw" Dec 10 12:35:40 crc kubenswrapper[4689]: I1210 12:35:40.209443 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-74c576f5cb-ljzfq" podStartSLOduration=7.209425734 podStartE2EDuration="7.209425734s" podCreationTimestamp="2025-12-10 12:35:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 12:35:40.208089061 +0000 UTC m=+1207.996170199" watchObservedRunningTime="2025-12-10 12:35:40.209425734 +0000 UTC m=+1207.997506872" Dec 10 12:35:40 crc kubenswrapper[4689]: I1210 12:35:40.235155 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/537ef4c8-28a0-48c0-8392-d7d6252670f2-config-data\") pod \"cinder-api-0\" (UID: \"537ef4c8-28a0-48c0-8392-d7d6252670f2\") " pod="openstack/cinder-api-0" Dec 10 12:35:40 crc kubenswrapper[4689]: I1210 12:35:40.235219 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/537ef4c8-28a0-48c0-8392-d7d6252670f2-scripts\") pod \"cinder-api-0\" (UID: \"537ef4c8-28a0-48c0-8392-d7d6252670f2\") " pod="openstack/cinder-api-0" Dec 10 12:35:40 crc kubenswrapper[4689]: I1210 12:35:40.235280 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/537ef4c8-28a0-48c0-8392-d7d6252670f2-etc-machine-id\") pod \"cinder-api-0\" (UID: \"537ef4c8-28a0-48c0-8392-d7d6252670f2\") " pod="openstack/cinder-api-0" Dec 10 12:35:40 crc kubenswrapper[4689]: I1210 12:35:40.235324 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g8qbg\" (UniqueName: \"kubernetes.io/projected/537ef4c8-28a0-48c0-8392-d7d6252670f2-kube-api-access-g8qbg\") pod \"cinder-api-0\" (UID: \"537ef4c8-28a0-48c0-8392-d7d6252670f2\") " pod="openstack/cinder-api-0" Dec 10 12:35:40 crc kubenswrapper[4689]: I1210 12:35:40.235353 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/537ef4c8-28a0-48c0-8392-d7d6252670f2-logs\") pod \"cinder-api-0\" (UID: \"537ef4c8-28a0-48c0-8392-d7d6252670f2\") " pod="openstack/cinder-api-0" Dec 10 12:35:40 crc kubenswrapper[4689]: I1210 12:35:40.236116 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/537ef4c8-28a0-48c0-8392-d7d6252670f2-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"537ef4c8-28a0-48c0-8392-d7d6252670f2\") " pod="openstack/cinder-api-0" Dec 10 12:35:40 crc kubenswrapper[4689]: I1210 12:35:40.236212 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/537ef4c8-28a0-48c0-8392-d7d6252670f2-config-data-custom\") pod \"cinder-api-0\" (UID: \"537ef4c8-28a0-48c0-8392-d7d6252670f2\") " pod="openstack/cinder-api-0" Dec 10 12:35:40 crc kubenswrapper[4689]: I1210 12:35:40.338835 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/537ef4c8-28a0-48c0-8392-d7d6252670f2-logs\") pod \"cinder-api-0\" (UID: 
\"537ef4c8-28a0-48c0-8392-d7d6252670f2\") " pod="openstack/cinder-api-0" Dec 10 12:35:40 crc kubenswrapper[4689]: I1210 12:35:40.339142 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/537ef4c8-28a0-48c0-8392-d7d6252670f2-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"537ef4c8-28a0-48c0-8392-d7d6252670f2\") " pod="openstack/cinder-api-0" Dec 10 12:35:40 crc kubenswrapper[4689]: I1210 12:35:40.339169 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/537ef4c8-28a0-48c0-8392-d7d6252670f2-config-data-custom\") pod \"cinder-api-0\" (UID: \"537ef4c8-28a0-48c0-8392-d7d6252670f2\") " pod="openstack/cinder-api-0" Dec 10 12:35:40 crc kubenswrapper[4689]: I1210 12:35:40.339204 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/537ef4c8-28a0-48c0-8392-d7d6252670f2-config-data\") pod \"cinder-api-0\" (UID: \"537ef4c8-28a0-48c0-8392-d7d6252670f2\") " pod="openstack/cinder-api-0" Dec 10 12:35:40 crc kubenswrapper[4689]: I1210 12:35:40.339244 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/537ef4c8-28a0-48c0-8392-d7d6252670f2-scripts\") pod \"cinder-api-0\" (UID: \"537ef4c8-28a0-48c0-8392-d7d6252670f2\") " pod="openstack/cinder-api-0" Dec 10 12:35:40 crc kubenswrapper[4689]: I1210 12:35:40.339324 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/537ef4c8-28a0-48c0-8392-d7d6252670f2-etc-machine-id\") pod \"cinder-api-0\" (UID: \"537ef4c8-28a0-48c0-8392-d7d6252670f2\") " pod="openstack/cinder-api-0" Dec 10 12:35:40 crc kubenswrapper[4689]: I1210 12:35:40.339382 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g8qbg\" (UniqueName: \"kubernetes.io/projected/537ef4c8-28a0-48c0-8392-d7d6252670f2-kube-api-access-g8qbg\") pod \"cinder-api-0\" (UID: \"537ef4c8-28a0-48c0-8392-d7d6252670f2\") " pod="openstack/cinder-api-0" Dec 10 12:35:40 crc kubenswrapper[4689]: I1210 12:35:40.340250 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/537ef4c8-28a0-48c0-8392-d7d6252670f2-logs\") pod \"cinder-api-0\" (UID: \"537ef4c8-28a0-48c0-8392-d7d6252670f2\") " pod="openstack/cinder-api-0" Dec 10 12:35:40 crc kubenswrapper[4689]: I1210 12:35:40.344394 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/537ef4c8-28a0-48c0-8392-d7d6252670f2-etc-machine-id\") pod \"cinder-api-0\" (UID: \"537ef4c8-28a0-48c0-8392-d7d6252670f2\") " pod="openstack/cinder-api-0" Dec 10 12:35:40 crc kubenswrapper[4689]: I1210 12:35:40.349144 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/537ef4c8-28a0-48c0-8392-d7d6252670f2-scripts\") pod \"cinder-api-0\" (UID: \"537ef4c8-28a0-48c0-8392-d7d6252670f2\") " pod="openstack/cinder-api-0" Dec 10 12:35:40 crc kubenswrapper[4689]: I1210 12:35:40.349532 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/537ef4c8-28a0-48c0-8392-d7d6252670f2-config-data-custom\") pod \"cinder-api-0\" (UID: \"537ef4c8-28a0-48c0-8392-d7d6252670f2\") " 
pod="openstack/cinder-api-0" Dec 10 12:35:40 crc kubenswrapper[4689]: I1210 12:35:40.355996 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/537ef4c8-28a0-48c0-8392-d7d6252670f2-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"537ef4c8-28a0-48c0-8392-d7d6252670f2\") " pod="openstack/cinder-api-0" Dec 10 12:35:40 crc kubenswrapper[4689]: I1210 12:35:40.414792 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/537ef4c8-28a0-48c0-8392-d7d6252670f2-config-data\") pod \"cinder-api-0\" (UID: \"537ef4c8-28a0-48c0-8392-d7d6252670f2\") " pod="openstack/cinder-api-0" Dec 10 12:35:40 crc kubenswrapper[4689]: I1210 12:35:40.434587 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g8qbg\" (UniqueName: \"kubernetes.io/projected/537ef4c8-28a0-48c0-8392-d7d6252670f2-kube-api-access-g8qbg\") pod \"cinder-api-0\" (UID: \"537ef4c8-28a0-48c0-8392-d7d6252670f2\") " pod="openstack/cinder-api-0" Dec 10 12:35:40 crc kubenswrapper[4689]: I1210 12:35:40.537632 4689 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="edfb6012-77c8-4a57-b217-19089c4a9d17" path="/var/lib/kubelet/pods/edfb6012-77c8-4a57-b217-19089c4a9d17/volumes" Dec 10 12:35:40 crc kubenswrapper[4689]: I1210 12:35:40.538392 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ironic-inspector-db-create-zrpxl"] Dec 10 12:35:40 crc kubenswrapper[4689]: I1210 12:35:40.539683 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-inspector-db-create-zrpxl" Dec 10 12:35:40 crc kubenswrapper[4689]: I1210 12:35:40.553173 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ironic-neutron-agent-6f4566d7bf-hkx2g"] Dec 10 12:35:40 crc kubenswrapper[4689]: I1210 12:35:40.554650 4689 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ironic-neutron-agent-6f4566d7bf-hkx2g" Dec 10 12:35:40 crc kubenswrapper[4689]: I1210 12:35:40.560009 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-inspector-db-create-zrpxl"] Dec 10 12:35:40 crc kubenswrapper[4689]: I1210 12:35:40.560819 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ironic-ironic-neutron-agent-config-data" Dec 10 12:35:40 crc kubenswrapper[4689]: I1210 12:35:40.562328 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/af27799e-026b-4c74-83f1-1336db02850f-operator-scripts\") pod \"ironic-inspector-db-create-zrpxl\" (UID: \"af27799e-026b-4c74-83f1-1336db02850f\") " pod="openstack/ironic-inspector-db-create-zrpxl" Dec 10 12:35:40 crc kubenswrapper[4689]: I1210 12:35:40.562464 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fngts\" (UniqueName: \"kubernetes.io/projected/af27799e-026b-4c74-83f1-1336db02850f-kube-api-access-fngts\") pod \"ironic-inspector-db-create-zrpxl\" (UID: \"af27799e-026b-4c74-83f1-1336db02850f\") " pod="openstack/ironic-inspector-db-create-zrpxl" Dec 10 12:35:40 crc kubenswrapper[4689]: I1210 12:35:40.571406 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-neutron-agent-6f4566d7bf-hkx2g"] Dec 10 12:35:40 crc kubenswrapper[4689]: I1210 12:35:40.572375 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ironic-ironic-dockercfg-2wctm" Dec 10 12:35:40 crc kubenswrapper[4689]: I1210 12:35:40.641623 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ironic-inspector-c6a0-account-create-update-sfzxs"] Dec 10 12:35:40 crc kubenswrapper[4689]: I1210 12:35:40.643416 4689 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ironic-inspector-c6a0-account-create-update-sfzxs" Dec 10 12:35:40 crc kubenswrapper[4689]: I1210 12:35:40.645472 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ironic-inspector-db-secret" Dec 10 12:35:40 crc kubenswrapper[4689]: I1210 12:35:40.665286 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-inspector-c6a0-account-create-update-sfzxs"] Dec 10 12:35:40 crc kubenswrapper[4689]: I1210 12:35:40.673880 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/234f8267-1974-4f9e-9d13-8a239ff2660c-combined-ca-bundle\") pod \"ironic-neutron-agent-6f4566d7bf-hkx2g\" (UID: \"234f8267-1974-4f9e-9d13-8a239ff2660c\") " pod="openstack/ironic-neutron-agent-6f4566d7bf-hkx2g" Dec 10 12:35:40 crc kubenswrapper[4689]: I1210 12:35:40.674009 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d1ba82e4-cee0-4da9-a138-f860c8f1e274-operator-scripts\") pod \"ironic-inspector-c6a0-account-create-update-sfzxs\" (UID: \"d1ba82e4-cee0-4da9-a138-f860c8f1e274\") " pod="openstack/ironic-inspector-c6a0-account-create-update-sfzxs" Dec 10 12:35:40 crc kubenswrapper[4689]: I1210 12:35:40.674042 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fngts\" (UniqueName: \"kubernetes.io/projected/af27799e-026b-4c74-83f1-1336db02850f-kube-api-access-fngts\") pod \"ironic-inspector-db-create-zrpxl\" (UID: \"af27799e-026b-4c74-83f1-1336db02850f\") " pod="openstack/ironic-inspector-db-create-zrpxl" Dec 10 12:35:40 crc kubenswrapper[4689]: I1210 12:35:40.674072 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k558z\" (UniqueName: \"kubernetes.io/projected/d1ba82e4-cee0-4da9-a138-f860c8f1e274-kube-api-access-k558z\") pod \"ironic-inspector-c6a0-account-create-update-sfzxs\" (UID: \"d1ba82e4-cee0-4da9-a138-f860c8f1e274\") " pod="openstack/ironic-inspector-c6a0-account-create-update-sfzxs" Dec 10 12:35:40 crc kubenswrapper[4689]: I1210 12:35:40.674218 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/af27799e-026b-4c74-83f1-1336db02850f-operator-scripts\") pod \"ironic-inspector-db-create-zrpxl\" (UID: \"af27799e-026b-4c74-83f1-1336db02850f\") " pod="openstack/ironic-inspector-db-create-zrpxl" Dec 10 12:35:40 crc kubenswrapper[4689]: I1210 12:35:40.674266 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vg2v9\" (UniqueName: \"kubernetes.io/projected/234f8267-1974-4f9e-9d13-8a239ff2660c-kube-api-access-vg2v9\") pod \"ironic-neutron-agent-6f4566d7bf-hkx2g\" (UID: \"234f8267-1974-4f9e-9d13-8a239ff2660c\") " pod="openstack/ironic-neutron-agent-6f4566d7bf-hkx2g" Dec 10 12:35:40 crc kubenswrapper[4689]: I1210 12:35:40.674285 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/234f8267-1974-4f9e-9d13-8a239ff2660c-config\") pod \"ironic-neutron-agent-6f4566d7bf-hkx2g\" (UID: \"234f8267-1974-4f9e-9d13-8a239ff2660c\") " pod="openstack/ironic-neutron-agent-6f4566d7bf-hkx2g" Dec 10 12:35:40 crc kubenswrapper[4689]: I1210 12:35:40.674967 4689 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/af27799e-026b-4c74-83f1-1336db02850f-operator-scripts\") pod \"ironic-inspector-db-create-zrpxl\" (UID: \"af27799e-026b-4c74-83f1-1336db02850f\") " pod="openstack/ironic-inspector-db-create-zrpxl" Dec 10 12:35:40 crc kubenswrapper[4689]: I1210 12:35:40.723678 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fngts\" (UniqueName: \"kubernetes.io/projected/af27799e-026b-4c74-83f1-1336db02850f-kube-api-access-fngts\") pod \"ironic-inspector-db-create-zrpxl\" (UID: \"af27799e-026b-4c74-83f1-1336db02850f\") " pod="openstack/ironic-inspector-db-create-zrpxl" Dec 10 12:35:40 crc kubenswrapper[4689]: I1210 12:35:40.723798 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Dec 10 12:35:40 crc kubenswrapper[4689]: I1210 12:35:40.776520 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vg2v9\" (UniqueName: \"kubernetes.io/projected/234f8267-1974-4f9e-9d13-8a239ff2660c-kube-api-access-vg2v9\") pod \"ironic-neutron-agent-6f4566d7bf-hkx2g\" (UID: \"234f8267-1974-4f9e-9d13-8a239ff2660c\") " pod="openstack/ironic-neutron-agent-6f4566d7bf-hkx2g" Dec 10 12:35:40 crc kubenswrapper[4689]: I1210 12:35:40.776572 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/234f8267-1974-4f9e-9d13-8a239ff2660c-config\") pod \"ironic-neutron-agent-6f4566d7bf-hkx2g\" (UID: \"234f8267-1974-4f9e-9d13-8a239ff2660c\") " pod="openstack/ironic-neutron-agent-6f4566d7bf-hkx2g" Dec 10 12:35:40 crc kubenswrapper[4689]: I1210 12:35:40.776620 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/234f8267-1974-4f9e-9d13-8a239ff2660c-combined-ca-bundle\") pod \"ironic-neutron-agent-6f4566d7bf-hkx2g\" (UID: \"234f8267-1974-4f9e-9d13-8a239ff2660c\") " pod="openstack/ironic-neutron-agent-6f4566d7bf-hkx2g" Dec 10 12:35:40 crc kubenswrapper[4689]: I1210 12:35:40.776680 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d1ba82e4-cee0-4da9-a138-f860c8f1e274-operator-scripts\") pod \"ironic-inspector-c6a0-account-create-update-sfzxs\" (UID: \"d1ba82e4-cee0-4da9-a138-f860c8f1e274\") " pod="openstack/ironic-inspector-c6a0-account-create-update-sfzxs" Dec 10 12:35:40 crc kubenswrapper[4689]: I1210 12:35:40.776725 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k558z\" (UniqueName: \"kubernetes.io/projected/d1ba82e4-cee0-4da9-a138-f860c8f1e274-kube-api-access-k558z\") pod \"ironic-inspector-c6a0-account-create-update-sfzxs\" (UID: \"d1ba82e4-cee0-4da9-a138-f860c8f1e274\") " pod="openstack/ironic-inspector-c6a0-account-create-update-sfzxs" Dec 10 12:35:40 crc kubenswrapper[4689]: I1210 12:35:40.777839 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d1ba82e4-cee0-4da9-a138-f860c8f1e274-operator-scripts\") pod \"ironic-inspector-c6a0-account-create-update-sfzxs\" (UID: \"d1ba82e4-cee0-4da9-a138-f860c8f1e274\") " pod="openstack/ironic-inspector-c6a0-account-create-update-sfzxs" Dec 10 12:35:40 crc kubenswrapper[4689]: I1210 12:35:40.783319 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/234f8267-1974-4f9e-9d13-8a239ff2660c-combined-ca-bundle\") pod \"ironic-neutron-agent-6f4566d7bf-hkx2g\" (UID: \"234f8267-1974-4f9e-9d13-8a239ff2660c\") " pod="openstack/ironic-neutron-agent-6f4566d7bf-hkx2g" Dec 10 12:35:40 crc kubenswrapper[4689]: I1210 12:35:40.796508 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/234f8267-1974-4f9e-9d13-8a239ff2660c-config\") pod \"ironic-neutron-agent-6f4566d7bf-hkx2g\" (UID: \"234f8267-1974-4f9e-9d13-8a239ff2660c\") " pod="openstack/ironic-neutron-agent-6f4566d7bf-hkx2g" Dec 10 12:35:40 crc kubenswrapper[4689]: I1210 12:35:40.816011 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vg2v9\" (UniqueName: \"kubernetes.io/projected/234f8267-1974-4f9e-9d13-8a239ff2660c-kube-api-access-vg2v9\") pod \"ironic-neutron-agent-6f4566d7bf-hkx2g\" (UID: \"234f8267-1974-4f9e-9d13-8a239ff2660c\") " pod="openstack/ironic-neutron-agent-6f4566d7bf-hkx2g" Dec 10 12:35:40 crc kubenswrapper[4689]: I1210 12:35:40.819680 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k558z\" (UniqueName: \"kubernetes.io/projected/d1ba82e4-cee0-4da9-a138-f860c8f1e274-kube-api-access-k558z\") pod \"ironic-inspector-c6a0-account-create-update-sfzxs\" (UID: \"d1ba82e4-cee0-4da9-a138-f860c8f1e274\") " pod="openstack/ironic-inspector-c6a0-account-create-update-sfzxs" Dec 10 12:35:40 crc kubenswrapper[4689]: I1210 12:35:40.821494 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ironic-6d98945548-q82m5"] Dec 10 12:35:40 crc kubenswrapper[4689]: I1210 12:35:40.841684 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-6d98945548-q82m5" Dec 10 12:35:40 crc kubenswrapper[4689]: I1210 12:35:40.849590 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Dec 10 12:35:40 crc kubenswrapper[4689]: I1210 12:35:40.849780 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ironic-api-config-data" Dec 10 12:35:40 crc kubenswrapper[4689]: I1210 12:35:40.851245 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ironic-api-scripts" Dec 10 12:35:40 crc kubenswrapper[4689]: I1210 12:35:40.851369 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ironic-config-data" Dec 10 12:35:40 crc kubenswrapper[4689]: I1210 12:35:40.878374 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2d1a4d1f-bb78-41c4-8f64-46f7f8f03df3-scripts\") pod \"ironic-6d98945548-q82m5\" (UID: \"2d1a4d1f-bb78-41c4-8f64-46f7f8f03df3\") " pod="openstack/ironic-6d98945548-q82m5" Dec 10 12:35:40 crc kubenswrapper[4689]: I1210 12:35:40.878447 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/2d1a4d1f-bb78-41c4-8f64-46f7f8f03df3-etc-podinfo\") pod \"ironic-6d98945548-q82m5\" (UID: \"2d1a4d1f-bb78-41c4-8f64-46f7f8f03df3\") " pod="openstack/ironic-6d98945548-q82m5" Dec 10 12:35:40 crc kubenswrapper[4689]: I1210 12:35:40.878482 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2d1a4d1f-bb78-41c4-8f64-46f7f8f03df3-config-data-custom\") pod \"ironic-6d98945548-q82m5\" (UID: 
\"2d1a4d1f-bb78-41c4-8f64-46f7f8f03df3\") " pod="openstack/ironic-6d98945548-q82m5" Dec 10 12:35:40 crc kubenswrapper[4689]: I1210 12:35:40.878531 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c52zm\" (UniqueName: \"kubernetes.io/projected/2d1a4d1f-bb78-41c4-8f64-46f7f8f03df3-kube-api-access-c52zm\") pod \"ironic-6d98945548-q82m5\" (UID: \"2d1a4d1f-bb78-41c4-8f64-46f7f8f03df3\") " pod="openstack/ironic-6d98945548-q82m5" Dec 10 12:35:40 crc kubenswrapper[4689]: I1210 12:35:40.878553 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2d1a4d1f-bb78-41c4-8f64-46f7f8f03df3-config-data\") pod \"ironic-6d98945548-q82m5\" (UID: \"2d1a4d1f-bb78-41c4-8f64-46f7f8f03df3\") " pod="openstack/ironic-6d98945548-q82m5" Dec 10 12:35:40 crc kubenswrapper[4689]: I1210 12:35:40.878573 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d1a4d1f-bb78-41c4-8f64-46f7f8f03df3-combined-ca-bundle\") pod \"ironic-6d98945548-q82m5\" (UID: \"2d1a4d1f-bb78-41c4-8f64-46f7f8f03df3\") " pod="openstack/ironic-6d98945548-q82m5" Dec 10 12:35:40 crc kubenswrapper[4689]: I1210 12:35:40.883188 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-inspector-db-create-zrpxl" Dec 10 12:35:40 crc kubenswrapper[4689]: I1210 12:35:40.887209 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2d1a4d1f-bb78-41c4-8f64-46f7f8f03df3-logs\") pod \"ironic-6d98945548-q82m5\" (UID: \"2d1a4d1f-bb78-41c4-8f64-46f7f8f03df3\") " pod="openstack/ironic-6d98945548-q82m5" Dec 10 12:35:40 crc kubenswrapper[4689]: I1210 12:35:40.887444 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/2d1a4d1f-bb78-41c4-8f64-46f7f8f03df3-config-data-merged\") pod \"ironic-6d98945548-q82m5\" (UID: \"2d1a4d1f-bb78-41c4-8f64-46f7f8f03df3\") " pod="openstack/ironic-6d98945548-q82m5" Dec 10 12:35:40 crc kubenswrapper[4689]: I1210 12:35:40.890143 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-6d98945548-q82m5"] Dec 10 12:35:40 crc kubenswrapper[4689]: I1210 12:35:40.913067 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Dec 10 12:35:40 crc kubenswrapper[4689]: I1210 12:35:40.916667 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-neutron-agent-6f4566d7bf-hkx2g" Dec 10 12:35:40 crc kubenswrapper[4689]: I1210 12:35:40.989450 4689 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ironic-inspector-c6a0-account-create-update-sfzxs" Dec 10 12:35:40 crc kubenswrapper[4689]: I1210 12:35:40.991997 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c52zm\" (UniqueName: \"kubernetes.io/projected/2d1a4d1f-bb78-41c4-8f64-46f7f8f03df3-kube-api-access-c52zm\") pod \"ironic-6d98945548-q82m5\" (UID: \"2d1a4d1f-bb78-41c4-8f64-46f7f8f03df3\") " pod="openstack/ironic-6d98945548-q82m5" Dec 10 12:35:40 crc kubenswrapper[4689]: I1210 12:35:40.992051 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2d1a4d1f-bb78-41c4-8f64-46f7f8f03df3-config-data\") pod \"ironic-6d98945548-q82m5\" (UID: \"2d1a4d1f-bb78-41c4-8f64-46f7f8f03df3\") " pod="openstack/ironic-6d98945548-q82m5" Dec 10 12:35:40 crc kubenswrapper[4689]: I1210 12:35:40.992075 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d1a4d1f-bb78-41c4-8f64-46f7f8f03df3-combined-ca-bundle\") pod \"ironic-6d98945548-q82m5\" (UID: \"2d1a4d1f-bb78-41c4-8f64-46f7f8f03df3\") " pod="openstack/ironic-6d98945548-q82m5" Dec 10 12:35:40 crc kubenswrapper[4689]: I1210 12:35:40.992094 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2d1a4d1f-bb78-41c4-8f64-46f7f8f03df3-logs\") pod \"ironic-6d98945548-q82m5\" (UID: \"2d1a4d1f-bb78-41c4-8f64-46f7f8f03df3\") " pod="openstack/ironic-6d98945548-q82m5" Dec 10 12:35:40 crc kubenswrapper[4689]: I1210 12:35:40.992118 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/2d1a4d1f-bb78-41c4-8f64-46f7f8f03df3-config-data-merged\") pod \"ironic-6d98945548-q82m5\" (UID: \"2d1a4d1f-bb78-41c4-8f64-46f7f8f03df3\") " pod="openstack/ironic-6d98945548-q82m5" Dec 10 12:35:40 crc kubenswrapper[4689]: I1210 12:35:40.992200 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2d1a4d1f-bb78-41c4-8f64-46f7f8f03df3-scripts\") pod \"ironic-6d98945548-q82m5\" (UID: \"2d1a4d1f-bb78-41c4-8f64-46f7f8f03df3\") " pod="openstack/ironic-6d98945548-q82m5" Dec 10 12:35:40 crc kubenswrapper[4689]: I1210 12:35:40.992243 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/2d1a4d1f-bb78-41c4-8f64-46f7f8f03df3-etc-podinfo\") pod \"ironic-6d98945548-q82m5\" (UID: \"2d1a4d1f-bb78-41c4-8f64-46f7f8f03df3\") " pod="openstack/ironic-6d98945548-q82m5" Dec 10 12:35:40 crc kubenswrapper[4689]: I1210 12:35:40.992260 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2d1a4d1f-bb78-41c4-8f64-46f7f8f03df3-config-data-custom\") pod \"ironic-6d98945548-q82m5\" (UID: \"2d1a4d1f-bb78-41c4-8f64-46f7f8f03df3\") " pod="openstack/ironic-6d98945548-q82m5" Dec 10 12:35:40 crc kubenswrapper[4689]: I1210 12:35:40.993464 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2d1a4d1f-bb78-41c4-8f64-46f7f8f03df3-logs\") pod \"ironic-6d98945548-q82m5\" (UID: \"2d1a4d1f-bb78-41c4-8f64-46f7f8f03df3\") " pod="openstack/ironic-6d98945548-q82m5" Dec 10 12:35:41 crc kubenswrapper[4689]: I1210 12:35:41.002815 4689 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2d1a4d1f-bb78-41c4-8f64-46f7f8f03df3-config-data\") pod \"ironic-6d98945548-q82m5\" (UID: \"2d1a4d1f-bb78-41c4-8f64-46f7f8f03df3\") " pod="openstack/ironic-6d98945548-q82m5" Dec 10 12:35:41 crc kubenswrapper[4689]: I1210 12:35:41.005543 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/2d1a4d1f-bb78-41c4-8f64-46f7f8f03df3-config-data-merged\") pod \"ironic-6d98945548-q82m5\" (UID: \"2d1a4d1f-bb78-41c4-8f64-46f7f8f03df3\") " pod="openstack/ironic-6d98945548-q82m5" Dec 10 12:35:41 crc kubenswrapper[4689]: I1210 12:35:41.008045 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2d1a4d1f-bb78-41c4-8f64-46f7f8f03df3-scripts\") pod \"ironic-6d98945548-q82m5\" (UID: \"2d1a4d1f-bb78-41c4-8f64-46f7f8f03df3\") " pod="openstack/ironic-6d98945548-q82m5" Dec 10 12:35:41 crc kubenswrapper[4689]: I1210 12:35:41.010863 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d1a4d1f-bb78-41c4-8f64-46f7f8f03df3-combined-ca-bundle\") pod \"ironic-6d98945548-q82m5\" (UID: \"2d1a4d1f-bb78-41c4-8f64-46f7f8f03df3\") " pod="openstack/ironic-6d98945548-q82m5" Dec 10 12:35:41 crc kubenswrapper[4689]: I1210 12:35:41.012226 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c52zm\" (UniqueName: \"kubernetes.io/projected/2d1a4d1f-bb78-41c4-8f64-46f7f8f03df3-kube-api-access-c52zm\") pod \"ironic-6d98945548-q82m5\" (UID: \"2d1a4d1f-bb78-41c4-8f64-46f7f8f03df3\") " pod="openstack/ironic-6d98945548-q82m5" Dec 10 12:35:41 crc kubenswrapper[4689]: I1210 12:35:41.014859 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2d1a4d1f-bb78-41c4-8f64-46f7f8f03df3-config-data-custom\") pod \"ironic-6d98945548-q82m5\" (UID: \"2d1a4d1f-bb78-41c4-8f64-46f7f8f03df3\") " pod="openstack/ironic-6d98945548-q82m5" Dec 10 12:35:41 crc kubenswrapper[4689]: I1210 12:35:41.022362 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/2d1a4d1f-bb78-41c4-8f64-46f7f8f03df3-etc-podinfo\") pod \"ironic-6d98945548-q82m5\" (UID: \"2d1a4d1f-bb78-41c4-8f64-46f7f8f03df3\") " pod="openstack/ironic-6d98945548-q82m5" Dec 10 12:35:41 crc kubenswrapper[4689]: I1210 12:35:41.086813 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5784cf869f-qlcgw"] Dec 10 12:35:41 crc kubenswrapper[4689]: I1210 12:35:41.184105 4689 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ironic-6d98945548-q82m5" Dec 10 12:35:41 crc kubenswrapper[4689]: I1210 12:35:41.199752 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5784cf869f-qlcgw" event={"ID":"06cf7890-6267-472d-a29e-c766c8009eda","Type":"ContainerStarted","Data":"0506722aa532b03763435ce21c8e92c1b7b188fca24690fc5e6df2cec0d51c7b"} Dec 10 12:35:41 crc kubenswrapper[4689]: I1210 12:35:41.201487 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"82df75b3-4559-47c4-bc1a-25db8a50e8bf","Type":"ContainerStarted","Data":"e68260f12d21de1d6d95efa31bffbfc36f2e8cc911fc1bcdba7a7a673816ef8a"} Dec 10 12:35:41 crc kubenswrapper[4689]: I1210 12:35:41.295280 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Dec 10 12:35:41 crc kubenswrapper[4689]: I1210 12:35:41.583530 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ironic-conductor-0"] Dec 10 12:35:41 crc kubenswrapper[4689]: I1210 12:35:41.587726 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-conductor-0" Dec 10 12:35:41 crc kubenswrapper[4689]: I1210 12:35:41.598873 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ironic-conductor-scripts" Dec 10 12:35:41 crc kubenswrapper[4689]: I1210 12:35:41.599259 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ironic-conductor-config-data" Dec 10 12:35:41 crc kubenswrapper[4689]: I1210 12:35:41.602194 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/23e46f1d-5919-4baa-aeef-1364104b63fb-scripts\") pod \"ironic-conductor-0\" (UID: \"23e46f1d-5919-4baa-aeef-1364104b63fb\") " pod="openstack/ironic-conductor-0" Dec 10 12:35:41 crc kubenswrapper[4689]: I1210 12:35:41.602279 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/23e46f1d-5919-4baa-aeef-1364104b63fb-combined-ca-bundle\") pod \"ironic-conductor-0\" (UID: \"23e46f1d-5919-4baa-aeef-1364104b63fb\") " pod="openstack/ironic-conductor-0" Dec 10 12:35:41 crc kubenswrapper[4689]: I1210 12:35:41.602305 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/23e46f1d-5919-4baa-aeef-1364104b63fb-config-data-merged\") pod \"ironic-conductor-0\" (UID: \"23e46f1d-5919-4baa-aeef-1364104b63fb\") " pod="openstack/ironic-conductor-0" Dec 10 12:35:41 crc kubenswrapper[4689]: I1210 12:35:41.602328 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/23e46f1d-5919-4baa-aeef-1364104b63fb-config-data-custom\") pod \"ironic-conductor-0\" (UID: \"23e46f1d-5919-4baa-aeef-1364104b63fb\") " pod="openstack/ironic-conductor-0" Dec 10 12:35:41 crc kubenswrapper[4689]: I1210 12:35:41.602349 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5h4mz\" (UniqueName: \"kubernetes.io/projected/23e46f1d-5919-4baa-aeef-1364104b63fb-kube-api-access-5h4mz\") pod \"ironic-conductor-0\" (UID: \"23e46f1d-5919-4baa-aeef-1364104b63fb\") " pod="openstack/ironic-conductor-0" Dec 10 12:35:41 crc kubenswrapper[4689]: I1210 12:35:41.602393 4689 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/23e46f1d-5919-4baa-aeef-1364104b63fb-etc-podinfo\") pod \"ironic-conductor-0\" (UID: \"23e46f1d-5919-4baa-aeef-1364104b63fb\") " pod="openstack/ironic-conductor-0" Dec 10 12:35:41 crc kubenswrapper[4689]: I1210 12:35:41.602416 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/23e46f1d-5919-4baa-aeef-1364104b63fb-config-data\") pod \"ironic-conductor-0\" (UID: \"23e46f1d-5919-4baa-aeef-1364104b63fb\") " pod="openstack/ironic-conductor-0" Dec 10 12:35:41 crc kubenswrapper[4689]: I1210 12:35:41.602500 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"ironic-conductor-0\" (UID: \"23e46f1d-5919-4baa-aeef-1364104b63fb\") " pod="openstack/ironic-conductor-0" Dec 10 12:35:41 crc kubenswrapper[4689]: I1210 12:35:41.610588 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-conductor-0"] Dec 10 12:35:41 crc kubenswrapper[4689]: I1210 12:35:41.658060 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-neutron-agent-6f4566d7bf-hkx2g"] Dec 10 12:35:41 crc kubenswrapper[4689]: I1210 12:35:41.659984 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-inspector-db-create-zrpxl"] Dec 10 12:35:41 crc kubenswrapper[4689]: I1210 12:35:41.704726 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/23e46f1d-5919-4baa-aeef-1364104b63fb-etc-podinfo\") pod \"ironic-conductor-0\" (UID: \"23e46f1d-5919-4baa-aeef-1364104b63fb\") " pod="openstack/ironic-conductor-0" Dec 10 12:35:41 crc kubenswrapper[4689]: I1210 12:35:41.704763 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/23e46f1d-5919-4baa-aeef-1364104b63fb-config-data\") pod \"ironic-conductor-0\" (UID: \"23e46f1d-5919-4baa-aeef-1364104b63fb\") " pod="openstack/ironic-conductor-0" Dec 10 12:35:41 crc kubenswrapper[4689]: I1210 12:35:41.704848 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"ironic-conductor-0\" (UID: \"23e46f1d-5919-4baa-aeef-1364104b63fb\") " pod="openstack/ironic-conductor-0" Dec 10 12:35:41 crc kubenswrapper[4689]: I1210 12:35:41.704886 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/23e46f1d-5919-4baa-aeef-1364104b63fb-scripts\") pod \"ironic-conductor-0\" (UID: \"23e46f1d-5919-4baa-aeef-1364104b63fb\") " pod="openstack/ironic-conductor-0" Dec 10 12:35:41 crc kubenswrapper[4689]: I1210 12:35:41.704925 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/23e46f1d-5919-4baa-aeef-1364104b63fb-combined-ca-bundle\") pod \"ironic-conductor-0\" (UID: \"23e46f1d-5919-4baa-aeef-1364104b63fb\") " pod="openstack/ironic-conductor-0" Dec 10 12:35:41 crc kubenswrapper[4689]: I1210 12:35:41.704943 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-merged\" (UniqueName: 
\"kubernetes.io/empty-dir/23e46f1d-5919-4baa-aeef-1364104b63fb-config-data-merged\") pod \"ironic-conductor-0\" (UID: \"23e46f1d-5919-4baa-aeef-1364104b63fb\") " pod="openstack/ironic-conductor-0" Dec 10 12:35:41 crc kubenswrapper[4689]: I1210 12:35:41.704965 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/23e46f1d-5919-4baa-aeef-1364104b63fb-config-data-custom\") pod \"ironic-conductor-0\" (UID: \"23e46f1d-5919-4baa-aeef-1364104b63fb\") " pod="openstack/ironic-conductor-0" Dec 10 12:35:41 crc kubenswrapper[4689]: I1210 12:35:41.704998 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5h4mz\" (UniqueName: \"kubernetes.io/projected/23e46f1d-5919-4baa-aeef-1364104b63fb-kube-api-access-5h4mz\") pod \"ironic-conductor-0\" (UID: \"23e46f1d-5919-4baa-aeef-1364104b63fb\") " pod="openstack/ironic-conductor-0" Dec 10 12:35:41 crc kubenswrapper[4689]: I1210 12:35:41.706205 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/23e46f1d-5919-4baa-aeef-1364104b63fb-config-data-merged\") pod \"ironic-conductor-0\" (UID: \"23e46f1d-5919-4baa-aeef-1364104b63fb\") " pod="openstack/ironic-conductor-0" Dec 10 12:35:41 crc kubenswrapper[4689]: I1210 12:35:41.707813 4689 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"ironic-conductor-0\" (UID: \"23e46f1d-5919-4baa-aeef-1364104b63fb\") device mount path \"/mnt/openstack/pv07\"" pod="openstack/ironic-conductor-0" Dec 10 12:35:41 crc kubenswrapper[4689]: I1210 12:35:41.713792 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/23e46f1d-5919-4baa-aeef-1364104b63fb-etc-podinfo\") pod \"ironic-conductor-0\" (UID: \"23e46f1d-5919-4baa-aeef-1364104b63fb\") " pod="openstack/ironic-conductor-0" Dec 10 12:35:41 crc kubenswrapper[4689]: I1210 12:35:41.715764 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/23e46f1d-5919-4baa-aeef-1364104b63fb-config-data\") pod \"ironic-conductor-0\" (UID: \"23e46f1d-5919-4baa-aeef-1364104b63fb\") " pod="openstack/ironic-conductor-0" Dec 10 12:35:41 crc kubenswrapper[4689]: I1210 12:35:41.717509 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/23e46f1d-5919-4baa-aeef-1364104b63fb-scripts\") pod \"ironic-conductor-0\" (UID: \"23e46f1d-5919-4baa-aeef-1364104b63fb\") " pod="openstack/ironic-conductor-0" Dec 10 12:35:41 crc kubenswrapper[4689]: I1210 12:35:41.721493 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/23e46f1d-5919-4baa-aeef-1364104b63fb-config-data-custom\") pod \"ironic-conductor-0\" (UID: \"23e46f1d-5919-4baa-aeef-1364104b63fb\") " pod="openstack/ironic-conductor-0" Dec 10 12:35:41 crc kubenswrapper[4689]: I1210 12:35:41.724187 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5h4mz\" (UniqueName: \"kubernetes.io/projected/23e46f1d-5919-4baa-aeef-1364104b63fb-kube-api-access-5h4mz\") pod \"ironic-conductor-0\" (UID: \"23e46f1d-5919-4baa-aeef-1364104b63fb\") " pod="openstack/ironic-conductor-0" Dec 10 12:35:41 crc kubenswrapper[4689]: I1210 
Dec 10 12:35:41 crc kubenswrapper[4689]: I1210 12:35:41.781058 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"ironic-conductor-0\" (UID: \"23e46f1d-5919-4baa-aeef-1364104b63fb\") " pod="openstack/ironic-conductor-0"
Dec 10 12:35:41 crc kubenswrapper[4689]: I1210 12:35:41.804893 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-inspector-c6a0-account-create-update-sfzxs"]
Dec 10 12:35:41 crc kubenswrapper[4689]: W1210 12:35:41.884871 4689 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podaf27799e_026b_4c74_83f1_1336db02850f.slice/crio-c0e47d93381afd7972622ecbfa4e4500b5e2967309362dc329ff4fc283b40146 WatchSource:0}: Error finding container c0e47d93381afd7972622ecbfa4e4500b5e2967309362dc329ff4fc283b40146: Status 404 returned error can't find the container with id c0e47d93381afd7972622ecbfa4e4500b5e2967309362dc329ff4fc283b40146
Dec 10 12:35:41 crc kubenswrapper[4689]: W1210 12:35:41.885368 4689 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd1ba82e4_cee0_4da9_a138_f860c8f1e274.slice/crio-cc5d338b5a03d6e97105ca1d1ad92ca433103036964e24c38b7cabb9bbb8e360 WatchSource:0}: Error finding container cc5d338b5a03d6e97105ca1d1ad92ca433103036964e24c38b7cabb9bbb8e360: Status 404 returned error can't find the container with id cc5d338b5a03d6e97105ca1d1ad92ca433103036964e24c38b7cabb9bbb8e360
Dec 10 12:35:41 crc kubenswrapper[4689]: I1210 12:35:41.907631 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-6d98945548-q82m5"]
Dec 10 12:35:41 crc kubenswrapper[4689]: I1210 12:35:41.913927 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-conductor-0"
Dec 10 12:35:42 crc kubenswrapper[4689]: I1210 12:35:42.222129 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-c6a0-account-create-update-sfzxs" event={"ID":"d1ba82e4-cee0-4da9-a138-f860c8f1e274","Type":"ContainerStarted","Data":"cc5d338b5a03d6e97105ca1d1ad92ca433103036964e24c38b7cabb9bbb8e360"}
Dec 10 12:35:42 crc kubenswrapper[4689]: I1210 12:35:42.226894 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-6d98945548-q82m5" event={"ID":"2d1a4d1f-bb78-41c4-8f64-46f7f8f03df3","Type":"ContainerStarted","Data":"47021a7e07c2d775080d77f7324cbf146810931f0d082bc250f7f5d65e16bb2e"}
Dec 10 12:35:42 crc kubenswrapper[4689]: I1210 12:35:42.232668 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"537ef4c8-28a0-48c0-8392-d7d6252670f2","Type":"ContainerStarted","Data":"2f65197d460bb93c7a7c189d8e788425480d588db8c12131ca7aa6ac26fab2cd"}
Dec 10 12:35:42 crc kubenswrapper[4689]: I1210 12:35:42.235593 4689 generic.go:334] "Generic (PLEG): container finished" podID="06cf7890-6267-472d-a29e-c766c8009eda" containerID="267eb284ad06bf70942d1e41a1b7390db2de83ab535dd573be01e0fed5a24c22" exitCode=0
Dec 10 12:35:42 crc kubenswrapper[4689]: I1210 12:35:42.235654 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5784cf869f-qlcgw" event={"ID":"06cf7890-6267-472d-a29e-c766c8009eda","Type":"ContainerDied","Data":"267eb284ad06bf70942d1e41a1b7390db2de83ab535dd573be01e0fed5a24c22"}
Dec 10 12:35:42 crc kubenswrapper[4689]: I1210 12:35:42.242913 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-db-create-zrpxl" event={"ID":"af27799e-026b-4c74-83f1-1336db02850f","Type":"ContainerStarted","Data":"c0e47d93381afd7972622ecbfa4e4500b5e2967309362dc329ff4fc283b40146"}
Dec 10 12:35:42 crc kubenswrapper[4689]: I1210 12:35:42.249114 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-neutron-agent-6f4566d7bf-hkx2g" event={"ID":"234f8267-1974-4f9e-9d13-8a239ff2660c","Type":"ContainerStarted","Data":"759c15972f5d182d99e64fe209616f6ac823c0cffe947271b5431191f1141457"}
Dec 10 12:35:42 crc kubenswrapper[4689]: I1210 12:35:42.279409 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ironic-inspector-db-create-zrpxl" podStartSLOduration=2.279387711 podStartE2EDuration="2.279387711s" podCreationTimestamp="2025-12-10 12:35:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 12:35:42.267462935 +0000 UTC m=+1210.055544093" watchObservedRunningTime="2025-12-10 12:35:42.279387711 +0000 UTC m=+1210.067468849"
Dec 10 12:35:42 crc kubenswrapper[4689]: I1210 12:35:42.430695 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-conductor-0"]
Dec 10 12:35:43 crc kubenswrapper[4689]: I1210 12:35:43.056673 4689 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"]
Dec 10 12:35:43 crc kubenswrapper[4689]: I1210 12:35:43.299166 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"537ef4c8-28a0-48c0-8392-d7d6252670f2","Type":"ContainerStarted","Data":"de24447dad7585acc958c5c2b10db70f639afdd61c754d185222ca81b34aa3c3"}
Dec 10 12:35:43 crc kubenswrapper[4689]: I1210 12:35:43.316388 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5784cf869f-qlcgw" event={"ID":"06cf7890-6267-472d-a29e-c766c8009eda","Type":"ContainerStarted","Data":"bf7f61dc7e1f601c004bf9c012e400ab23d27e96d229fb76bb6a067a5b5ea13c"}
event={"ID":"06cf7890-6267-472d-a29e-c766c8009eda","Type":"ContainerStarted","Data":"bf7f61dc7e1f601c004bf9c012e400ab23d27e96d229fb76bb6a067a5b5ea13c"} Dec 10 12:35:43 crc kubenswrapper[4689]: I1210 12:35:43.317079 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5784cf869f-qlcgw" Dec 10 12:35:43 crc kubenswrapper[4689]: I1210 12:35:43.339701 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-db-create-zrpxl" event={"ID":"af27799e-026b-4c74-83f1-1336db02850f","Type":"ContainerStarted","Data":"18b19fd36e5055305ff21b5ff5fefa1763ca3ef37008582f7691b33935a74f10"} Dec 10 12:35:43 crc kubenswrapper[4689]: I1210 12:35:43.351717 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5784cf869f-qlcgw" podStartSLOduration=4.351701531 podStartE2EDuration="4.351701531s" podCreationTimestamp="2025-12-10 12:35:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 12:35:43.351048425 +0000 UTC m=+1211.139129563" watchObservedRunningTime="2025-12-10 12:35:43.351701531 +0000 UTC m=+1211.139782669" Dec 10 12:35:43 crc kubenswrapper[4689]: I1210 12:35:43.357640 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-conductor-0" event={"ID":"23e46f1d-5919-4baa-aeef-1364104b63fb","Type":"ContainerStarted","Data":"5dcf9aed871a43c6ad34bd1e013e48b0a1d005bd5854a834567a0162f19b0c81"} Dec 10 12:35:43 crc kubenswrapper[4689]: I1210 12:35:43.375136 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-c6a0-account-create-update-sfzxs" event={"ID":"d1ba82e4-cee0-4da9-a138-f860c8f1e274","Type":"ContainerStarted","Data":"87150d3b6da66440a345aee3c33cceae7adceb98fd9caa4d4dece11f23a91401"} Dec 10 12:35:43 crc kubenswrapper[4689]: I1210 12:35:43.393884 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ironic-inspector-c6a0-account-create-update-sfzxs" podStartSLOduration=3.393866708 podStartE2EDuration="3.393866708s" podCreationTimestamp="2025-12-10 12:35:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 12:35:43.393735805 +0000 UTC m=+1211.181816943" watchObservedRunningTime="2025-12-10 12:35:43.393866708 +0000 UTC m=+1211.181947846" Dec 10 12:35:43 crc kubenswrapper[4689]: E1210 12:35:43.396301 4689 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podaf27799e_026b_4c74_83f1_1336db02850f.slice/crio-18b19fd36e5055305ff21b5ff5fefa1763ca3ef37008582f7691b33935a74f10.scope\": RecentStats: unable to find data in memory cache]" Dec 10 12:35:44 crc kubenswrapper[4689]: I1210 12:35:44.389834 4689 generic.go:334] "Generic (PLEG): container finished" podID="d1ba82e4-cee0-4da9-a138-f860c8f1e274" containerID="87150d3b6da66440a345aee3c33cceae7adceb98fd9caa4d4dece11f23a91401" exitCode=0 Dec 10 12:35:44 crc kubenswrapper[4689]: I1210 12:35:44.389948 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-c6a0-account-create-update-sfzxs" event={"ID":"d1ba82e4-cee0-4da9-a138-f860c8f1e274","Type":"ContainerDied","Data":"87150d3b6da66440a345aee3c33cceae7adceb98fd9caa4d4dece11f23a91401"} Dec 10 12:35:44 crc kubenswrapper[4689]: I1210 12:35:44.394455 4689 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"537ef4c8-28a0-48c0-8392-d7d6252670f2","Type":"ContainerStarted","Data":"702633101c435ac4df049fd07176f7b2010bb13c8f7d44eebdf6d8fcac7fcf20"} Dec 10 12:35:44 crc kubenswrapper[4689]: I1210 12:35:44.394581 4689 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="537ef4c8-28a0-48c0-8392-d7d6252670f2" containerName="cinder-api-log" containerID="cri-o://de24447dad7585acc958c5c2b10db70f639afdd61c754d185222ca81b34aa3c3" gracePeriod=30 Dec 10 12:35:44 crc kubenswrapper[4689]: I1210 12:35:44.394819 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Dec 10 12:35:44 crc kubenswrapper[4689]: I1210 12:35:44.394858 4689 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="537ef4c8-28a0-48c0-8392-d7d6252670f2" containerName="cinder-api" containerID="cri-o://702633101c435ac4df049fd07176f7b2010bb13c8f7d44eebdf6d8fcac7fcf20" gracePeriod=30 Dec 10 12:35:44 crc kubenswrapper[4689]: I1210 12:35:44.396903 4689 generic.go:334] "Generic (PLEG): container finished" podID="af27799e-026b-4c74-83f1-1336db02850f" containerID="18b19fd36e5055305ff21b5ff5fefa1763ca3ef37008582f7691b33935a74f10" exitCode=0 Dec 10 12:35:44 crc kubenswrapper[4689]: I1210 12:35:44.396965 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-db-create-zrpxl" event={"ID":"af27799e-026b-4c74-83f1-1336db02850f","Type":"ContainerDied","Data":"18b19fd36e5055305ff21b5ff5fefa1763ca3ef37008582f7691b33935a74f10"} Dec 10 12:35:44 crc kubenswrapper[4689]: I1210 12:35:44.414566 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"82df75b3-4559-47c4-bc1a-25db8a50e8bf","Type":"ContainerStarted","Data":"1daa894f6e120e11933b91a544ebd3f573cd492f883b879daf51088fcdbe7b65"} Dec 10 12:35:44 crc kubenswrapper[4689]: I1210 12:35:44.414651 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"82df75b3-4559-47c4-bc1a-25db8a50e8bf","Type":"ContainerStarted","Data":"87a779a65ac7cc1b3b571e5eb66b3944ad7307e58627bcc58c0a24808135052c"} Dec 10 12:35:44 crc kubenswrapper[4689]: I1210 12:35:44.423715 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-conductor-0" event={"ID":"23e46f1d-5919-4baa-aeef-1364104b63fb","Type":"ContainerStarted","Data":"68f60d991077d7b2b86e3545e9fb8aeb9e79e43ae76d54233f89314a1aaf1fbc"} Dec 10 12:35:44 crc kubenswrapper[4689]: I1210 12:35:44.436559 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=4.436526731 podStartE2EDuration="4.436526731s" podCreationTimestamp="2025-12-10 12:35:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 12:35:44.428782269 +0000 UTC m=+1212.216863407" watchObservedRunningTime="2025-12-10 12:35:44.436526731 +0000 UTC m=+1212.224607869" Dec 10 12:35:44 crc kubenswrapper[4689]: I1210 12:35:44.448473 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=4.374555889 podStartE2EDuration="5.448454618s" podCreationTimestamp="2025-12-10 12:35:39 +0000 UTC" firstStartedPulling="2025-12-10 12:35:40.882225117 +0000 UTC m=+1208.670306255" lastFinishedPulling="2025-12-10 12:35:41.956123846 +0000 UTC m=+1209.744204984" 
observedRunningTime="2025-12-10 12:35:44.447608076 +0000 UTC m=+1212.235689214" watchObservedRunningTime="2025-12-10 12:35:44.448454618 +0000 UTC m=+1212.236535756" Dec 10 12:35:45 crc kubenswrapper[4689]: I1210 12:35:45.168174 4689 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Dec 10 12:35:45 crc kubenswrapper[4689]: I1210 12:35:45.445629 4689 generic.go:334] "Generic (PLEG): container finished" podID="537ef4c8-28a0-48c0-8392-d7d6252670f2" containerID="702633101c435ac4df049fd07176f7b2010bb13c8f7d44eebdf6d8fcac7fcf20" exitCode=0 Dec 10 12:35:45 crc kubenswrapper[4689]: I1210 12:35:45.445662 4689 generic.go:334] "Generic (PLEG): container finished" podID="537ef4c8-28a0-48c0-8392-d7d6252670f2" containerID="de24447dad7585acc958c5c2b10db70f639afdd61c754d185222ca81b34aa3c3" exitCode=143 Dec 10 12:35:45 crc kubenswrapper[4689]: I1210 12:35:45.445704 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"537ef4c8-28a0-48c0-8392-d7d6252670f2","Type":"ContainerDied","Data":"702633101c435ac4df049fd07176f7b2010bb13c8f7d44eebdf6d8fcac7fcf20"} Dec 10 12:35:45 crc kubenswrapper[4689]: I1210 12:35:45.445728 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"537ef4c8-28a0-48c0-8392-d7d6252670f2","Type":"ContainerDied","Data":"de24447dad7585acc958c5c2b10db70f639afdd61c754d185222ca81b34aa3c3"} Dec 10 12:35:45 crc kubenswrapper[4689]: I1210 12:35:45.449061 4689 generic.go:334] "Generic (PLEG): container finished" podID="23e46f1d-5919-4baa-aeef-1364104b63fb" containerID="68f60d991077d7b2b86e3545e9fb8aeb9e79e43ae76d54233f89314a1aaf1fbc" exitCode=0 Dec 10 12:35:45 crc kubenswrapper[4689]: I1210 12:35:45.449135 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-conductor-0" event={"ID":"23e46f1d-5919-4baa-aeef-1364104b63fb","Type":"ContainerDied","Data":"68f60d991077d7b2b86e3545e9fb8aeb9e79e43ae76d54233f89314a1aaf1fbc"} Dec 10 12:35:46 crc kubenswrapper[4689]: I1210 12:35:46.976061 4689 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-inspector-db-create-zrpxl" Dec 10 12:35:46 crc kubenswrapper[4689]: I1210 12:35:46.982211 4689 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ironic-inspector-c6a0-account-create-update-sfzxs" Dec 10 12:35:47 crc kubenswrapper[4689]: I1210 12:35:47.039650 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ironic-dbcd8ff8b-j52fs"] Dec 10 12:35:47 crc kubenswrapper[4689]: E1210 12:35:47.040337 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d1ba82e4-cee0-4da9-a138-f860c8f1e274" containerName="mariadb-account-create-update" Dec 10 12:35:47 crc kubenswrapper[4689]: I1210 12:35:47.040356 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="d1ba82e4-cee0-4da9-a138-f860c8f1e274" containerName="mariadb-account-create-update" Dec 10 12:35:47 crc kubenswrapper[4689]: E1210 12:35:47.040376 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="af27799e-026b-4c74-83f1-1336db02850f" containerName="mariadb-database-create" Dec 10 12:35:47 crc kubenswrapper[4689]: I1210 12:35:47.040382 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="af27799e-026b-4c74-83f1-1336db02850f" containerName="mariadb-database-create" Dec 10 12:35:47 crc kubenswrapper[4689]: I1210 12:35:47.040576 4689 memory_manager.go:354] "RemoveStaleState removing state" podUID="af27799e-026b-4c74-83f1-1336db02850f" containerName="mariadb-database-create" Dec 10 12:35:47 crc kubenswrapper[4689]: I1210 12:35:47.040592 4689 memory_manager.go:354] "RemoveStaleState removing state" podUID="d1ba82e4-cee0-4da9-a138-f860c8f1e274" containerName="mariadb-account-create-update" Dec 10 12:35:47 crc kubenswrapper[4689]: I1210 12:35:47.041895 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-dbcd8ff8b-j52fs" Dec 10 12:35:47 crc kubenswrapper[4689]: I1210 12:35:47.044403 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ironic-public-svc" Dec 10 12:35:47 crc kubenswrapper[4689]: I1210 12:35:47.044545 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ironic-internal-svc" Dec 10 12:35:47 crc kubenswrapper[4689]: I1210 12:35:47.068500 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k558z\" (UniqueName: \"kubernetes.io/projected/d1ba82e4-cee0-4da9-a138-f860c8f1e274-kube-api-access-k558z\") pod \"d1ba82e4-cee0-4da9-a138-f860c8f1e274\" (UID: \"d1ba82e4-cee0-4da9-a138-f860c8f1e274\") " Dec 10 12:35:47 crc kubenswrapper[4689]: I1210 12:35:47.068675 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fngts\" (UniqueName: \"kubernetes.io/projected/af27799e-026b-4c74-83f1-1336db02850f-kube-api-access-fngts\") pod \"af27799e-026b-4c74-83f1-1336db02850f\" (UID: \"af27799e-026b-4c74-83f1-1336db02850f\") " Dec 10 12:35:47 crc kubenswrapper[4689]: I1210 12:35:47.068725 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/af27799e-026b-4c74-83f1-1336db02850f-operator-scripts\") pod \"af27799e-026b-4c74-83f1-1336db02850f\" (UID: \"af27799e-026b-4c74-83f1-1336db02850f\") " Dec 10 12:35:47 crc kubenswrapper[4689]: I1210 12:35:47.068763 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d1ba82e4-cee0-4da9-a138-f860c8f1e274-operator-scripts\") pod \"d1ba82e4-cee0-4da9-a138-f860c8f1e274\" (UID: \"d1ba82e4-cee0-4da9-a138-f860c8f1e274\") " Dec 10 12:35:47 crc kubenswrapper[4689]: I1210 12:35:47.068961 4689 
Dec 10 12:35:47 crc kubenswrapper[4689]: I1210 12:35:47.069025 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/8c984e40-d3ee-426e-ab51-c576bc699e11-config-data-merged\") pod \"ironic-dbcd8ff8b-j52fs\" (UID: \"8c984e40-d3ee-426e-ab51-c576bc699e11\") " pod="openstack/ironic-dbcd8ff8b-j52fs"
Dec 10 12:35:47 crc kubenswrapper[4689]: I1210 12:35:47.069042 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-dbcd8ff8b-j52fs"]
Dec 10 12:35:47 crc kubenswrapper[4689]: I1210 12:35:47.069111 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n68m4\" (UniqueName: \"kubernetes.io/projected/8c984e40-d3ee-426e-ab51-c576bc699e11-kube-api-access-n68m4\") pod \"ironic-dbcd8ff8b-j52fs\" (UID: \"8c984e40-d3ee-426e-ab51-c576bc699e11\") " pod="openstack/ironic-dbcd8ff8b-j52fs"
Dec 10 12:35:47 crc kubenswrapper[4689]: I1210 12:35:47.069161 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8c984e40-d3ee-426e-ab51-c576bc699e11-internal-tls-certs\") pod \"ironic-dbcd8ff8b-j52fs\" (UID: \"8c984e40-d3ee-426e-ab51-c576bc699e11\") " pod="openstack/ironic-dbcd8ff8b-j52fs"
Dec 10 12:35:47 crc kubenswrapper[4689]: I1210 12:35:47.069188 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/8c984e40-d3ee-426e-ab51-c576bc699e11-etc-podinfo\") pod \"ironic-dbcd8ff8b-j52fs\" (UID: \"8c984e40-d3ee-426e-ab51-c576bc699e11\") " pod="openstack/ironic-dbcd8ff8b-j52fs"
Dec 10 12:35:47 crc kubenswrapper[4689]: I1210 12:35:47.069216 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8c984e40-d3ee-426e-ab51-c576bc699e11-scripts\") pod \"ironic-dbcd8ff8b-j52fs\" (UID: \"8c984e40-d3ee-426e-ab51-c576bc699e11\") " pod="openstack/ironic-dbcd8ff8b-j52fs"
Dec 10 12:35:47 crc kubenswrapper[4689]: I1210 12:35:47.069256 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8c984e40-d3ee-426e-ab51-c576bc699e11-config-data-custom\") pod \"ironic-dbcd8ff8b-j52fs\" (UID: \"8c984e40-d3ee-426e-ab51-c576bc699e11\") " pod="openstack/ironic-dbcd8ff8b-j52fs"
Dec 10 12:35:47 crc kubenswrapper[4689]: I1210 12:35:47.069277 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8c984e40-d3ee-426e-ab51-c576bc699e11-logs\") pod \"ironic-dbcd8ff8b-j52fs\" (UID: \"8c984e40-d3ee-426e-ab51-c576bc699e11\") " pod="openstack/ironic-dbcd8ff8b-j52fs"
Dec 10 12:35:47 crc kubenswrapper[4689]: I1210 12:35:47.069293 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8c984e40-d3ee-426e-ab51-c576bc699e11-public-tls-certs\") pod \"ironic-dbcd8ff8b-j52fs\" (UID: \"8c984e40-d3ee-426e-ab51-c576bc699e11\") " pod="openstack/ironic-dbcd8ff8b-j52fs"
Dec 10 12:35:47 crc kubenswrapper[4689]: I1210 12:35:47.069446 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8c984e40-d3ee-426e-ab51-c576bc699e11-combined-ca-bundle\") pod \"ironic-dbcd8ff8b-j52fs\" (UID: \"8c984e40-d3ee-426e-ab51-c576bc699e11\") " pod="openstack/ironic-dbcd8ff8b-j52fs"
Dec 10 12:35:47 crc kubenswrapper[4689]: I1210 12:35:47.069676 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d1ba82e4-cee0-4da9-a138-f860c8f1e274-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "d1ba82e4-cee0-4da9-a138-f860c8f1e274" (UID: "d1ba82e4-cee0-4da9-a138-f860c8f1e274"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 10 12:35:47 crc kubenswrapper[4689]: I1210 12:35:47.069668 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/af27799e-026b-4c74-83f1-1336db02850f-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "af27799e-026b-4c74-83f1-1336db02850f" (UID: "af27799e-026b-4c74-83f1-1336db02850f"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 10 12:35:47 crc kubenswrapper[4689]: I1210 12:35:47.074908 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d1ba82e4-cee0-4da9-a138-f860c8f1e274-kube-api-access-k558z" (OuterVolumeSpecName: "kube-api-access-k558z") pod "d1ba82e4-cee0-4da9-a138-f860c8f1e274" (UID: "d1ba82e4-cee0-4da9-a138-f860c8f1e274"). InnerVolumeSpecName "kube-api-access-k558z". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 10 12:35:47 crc kubenswrapper[4689]: I1210 12:35:47.075158 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/af27799e-026b-4c74-83f1-1336db02850f-kube-api-access-fngts" (OuterVolumeSpecName: "kube-api-access-fngts") pod "af27799e-026b-4c74-83f1-1336db02850f" (UID: "af27799e-026b-4c74-83f1-1336db02850f"). InnerVolumeSpecName "kube-api-access-fngts". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 10 12:35:47 crc kubenswrapper[4689]: I1210 12:35:47.171349 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8c984e40-d3ee-426e-ab51-c576bc699e11-config-data\") pod \"ironic-dbcd8ff8b-j52fs\" (UID: \"8c984e40-d3ee-426e-ab51-c576bc699e11\") " pod="openstack/ironic-dbcd8ff8b-j52fs"
Dec 10 12:35:47 crc kubenswrapper[4689]: I1210 12:35:47.171421 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/8c984e40-d3ee-426e-ab51-c576bc699e11-config-data-merged\") pod \"ironic-dbcd8ff8b-j52fs\" (UID: \"8c984e40-d3ee-426e-ab51-c576bc699e11\") " pod="openstack/ironic-dbcd8ff8b-j52fs"
Dec 10 12:35:47 crc kubenswrapper[4689]: I1210 12:35:47.171468 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n68m4\" (UniqueName: \"kubernetes.io/projected/8c984e40-d3ee-426e-ab51-c576bc699e11-kube-api-access-n68m4\") pod \"ironic-dbcd8ff8b-j52fs\" (UID: \"8c984e40-d3ee-426e-ab51-c576bc699e11\") " pod="openstack/ironic-dbcd8ff8b-j52fs"
Dec 10 12:35:47 crc kubenswrapper[4689]: I1210 12:35:47.171499 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8c984e40-d3ee-426e-ab51-c576bc699e11-internal-tls-certs\") pod \"ironic-dbcd8ff8b-j52fs\" (UID: \"8c984e40-d3ee-426e-ab51-c576bc699e11\") " pod="openstack/ironic-dbcd8ff8b-j52fs"
Dec 10 12:35:47 crc kubenswrapper[4689]: I1210 12:35:47.171525 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/8c984e40-d3ee-426e-ab51-c576bc699e11-etc-podinfo\") pod \"ironic-dbcd8ff8b-j52fs\" (UID: \"8c984e40-d3ee-426e-ab51-c576bc699e11\") " pod="openstack/ironic-dbcd8ff8b-j52fs"
Dec 10 12:35:47 crc kubenswrapper[4689]: I1210 12:35:47.171550 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8c984e40-d3ee-426e-ab51-c576bc699e11-scripts\") pod \"ironic-dbcd8ff8b-j52fs\" (UID: \"8c984e40-d3ee-426e-ab51-c576bc699e11\") " pod="openstack/ironic-dbcd8ff8b-j52fs"
Dec 10 12:35:47 crc kubenswrapper[4689]: I1210 12:35:47.171587 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8c984e40-d3ee-426e-ab51-c576bc699e11-config-data-custom\") pod \"ironic-dbcd8ff8b-j52fs\" (UID: \"8c984e40-d3ee-426e-ab51-c576bc699e11\") " pod="openstack/ironic-dbcd8ff8b-j52fs"
Dec 10 12:35:47 crc kubenswrapper[4689]: I1210 12:35:47.171608 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8c984e40-d3ee-426e-ab51-c576bc699e11-logs\") pod \"ironic-dbcd8ff8b-j52fs\" (UID: \"8c984e40-d3ee-426e-ab51-c576bc699e11\") " pod="openstack/ironic-dbcd8ff8b-j52fs"
Dec 10 12:35:47 crc kubenswrapper[4689]: I1210 12:35:47.171623 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8c984e40-d3ee-426e-ab51-c576bc699e11-public-tls-certs\") pod \"ironic-dbcd8ff8b-j52fs\" (UID: \"8c984e40-d3ee-426e-ab51-c576bc699e11\") " pod="openstack/ironic-dbcd8ff8b-j52fs"
Dec 10 12:35:47 crc kubenswrapper[4689]: I1210 12:35:47.171650 4689 reconciler_common.go:218]
"operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8c984e40-d3ee-426e-ab51-c576bc699e11-combined-ca-bundle\") pod \"ironic-dbcd8ff8b-j52fs\" (UID: \"8c984e40-d3ee-426e-ab51-c576bc699e11\") " pod="openstack/ironic-dbcd8ff8b-j52fs" Dec 10 12:35:47 crc kubenswrapper[4689]: I1210 12:35:47.171702 4689 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fngts\" (UniqueName: \"kubernetes.io/projected/af27799e-026b-4c74-83f1-1336db02850f-kube-api-access-fngts\") on node \"crc\" DevicePath \"\"" Dec 10 12:35:47 crc kubenswrapper[4689]: I1210 12:35:47.171712 4689 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/af27799e-026b-4c74-83f1-1336db02850f-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 10 12:35:47 crc kubenswrapper[4689]: I1210 12:35:47.171721 4689 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d1ba82e4-cee0-4da9-a138-f860c8f1e274-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 10 12:35:47 crc kubenswrapper[4689]: I1210 12:35:47.171730 4689 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k558z\" (UniqueName: \"kubernetes.io/projected/d1ba82e4-cee0-4da9-a138-f860c8f1e274-kube-api-access-k558z\") on node \"crc\" DevicePath \"\"" Dec 10 12:35:47 crc kubenswrapper[4689]: I1210 12:35:47.171919 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/8c984e40-d3ee-426e-ab51-c576bc699e11-config-data-merged\") pod \"ironic-dbcd8ff8b-j52fs\" (UID: \"8c984e40-d3ee-426e-ab51-c576bc699e11\") " pod="openstack/ironic-dbcd8ff8b-j52fs" Dec 10 12:35:47 crc kubenswrapper[4689]: I1210 12:35:47.172296 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8c984e40-d3ee-426e-ab51-c576bc699e11-logs\") pod \"ironic-dbcd8ff8b-j52fs\" (UID: \"8c984e40-d3ee-426e-ab51-c576bc699e11\") " pod="openstack/ironic-dbcd8ff8b-j52fs" Dec 10 12:35:47 crc kubenswrapper[4689]: I1210 12:35:47.178018 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8c984e40-d3ee-426e-ab51-c576bc699e11-config-data-custom\") pod \"ironic-dbcd8ff8b-j52fs\" (UID: \"8c984e40-d3ee-426e-ab51-c576bc699e11\") " pod="openstack/ironic-dbcd8ff8b-j52fs" Dec 10 12:35:47 crc kubenswrapper[4689]: I1210 12:35:47.178439 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8c984e40-d3ee-426e-ab51-c576bc699e11-public-tls-certs\") pod \"ironic-dbcd8ff8b-j52fs\" (UID: \"8c984e40-d3ee-426e-ab51-c576bc699e11\") " pod="openstack/ironic-dbcd8ff8b-j52fs" Dec 10 12:35:47 crc kubenswrapper[4689]: I1210 12:35:47.178588 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8c984e40-d3ee-426e-ab51-c576bc699e11-combined-ca-bundle\") pod \"ironic-dbcd8ff8b-j52fs\" (UID: \"8c984e40-d3ee-426e-ab51-c576bc699e11\") " pod="openstack/ironic-dbcd8ff8b-j52fs" Dec 10 12:35:47 crc kubenswrapper[4689]: I1210 12:35:47.178805 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8c984e40-d3ee-426e-ab51-c576bc699e11-internal-tls-certs\") pod \"ironic-dbcd8ff8b-j52fs\" (UID: 
\"8c984e40-d3ee-426e-ab51-c576bc699e11\") " pod="openstack/ironic-dbcd8ff8b-j52fs" Dec 10 12:35:47 crc kubenswrapper[4689]: I1210 12:35:47.183960 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8c984e40-d3ee-426e-ab51-c576bc699e11-config-data\") pod \"ironic-dbcd8ff8b-j52fs\" (UID: \"8c984e40-d3ee-426e-ab51-c576bc699e11\") " pod="openstack/ironic-dbcd8ff8b-j52fs" Dec 10 12:35:47 crc kubenswrapper[4689]: I1210 12:35:47.189550 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8c984e40-d3ee-426e-ab51-c576bc699e11-scripts\") pod \"ironic-dbcd8ff8b-j52fs\" (UID: \"8c984e40-d3ee-426e-ab51-c576bc699e11\") " pod="openstack/ironic-dbcd8ff8b-j52fs" Dec 10 12:35:47 crc kubenswrapper[4689]: I1210 12:35:47.203441 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/8c984e40-d3ee-426e-ab51-c576bc699e11-etc-podinfo\") pod \"ironic-dbcd8ff8b-j52fs\" (UID: \"8c984e40-d3ee-426e-ab51-c576bc699e11\") " pod="openstack/ironic-dbcd8ff8b-j52fs" Dec 10 12:35:47 crc kubenswrapper[4689]: I1210 12:35:47.204370 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n68m4\" (UniqueName: \"kubernetes.io/projected/8c984e40-d3ee-426e-ab51-c576bc699e11-kube-api-access-n68m4\") pod \"ironic-dbcd8ff8b-j52fs\" (UID: \"8c984e40-d3ee-426e-ab51-c576bc699e11\") " pod="openstack/ironic-dbcd8ff8b-j52fs" Dec 10 12:35:47 crc kubenswrapper[4689]: I1210 12:35:47.360875 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-dbcd8ff8b-j52fs" Dec 10 12:35:47 crc kubenswrapper[4689]: I1210 12:35:47.490927 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-db-create-zrpxl" event={"ID":"af27799e-026b-4c74-83f1-1336db02850f","Type":"ContainerDied","Data":"c0e47d93381afd7972622ecbfa4e4500b5e2967309362dc329ff4fc283b40146"} Dec 10 12:35:47 crc kubenswrapper[4689]: I1210 12:35:47.490989 4689 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c0e47d93381afd7972622ecbfa4e4500b5e2967309362dc329ff4fc283b40146" Dec 10 12:35:47 crc kubenswrapper[4689]: I1210 12:35:47.493136 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-c6a0-account-create-update-sfzxs" event={"ID":"d1ba82e4-cee0-4da9-a138-f860c8f1e274","Type":"ContainerDied","Data":"cc5d338b5a03d6e97105ca1d1ad92ca433103036964e24c38b7cabb9bbb8e360"} Dec 10 12:35:47 crc kubenswrapper[4689]: I1210 12:35:47.493158 4689 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-inspector-db-create-zrpxl" Dec 10 12:35:47 crc kubenswrapper[4689]: I1210 12:35:47.493248 4689 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ironic-inspector-c6a0-account-create-update-sfzxs" Dec 10 12:35:47 crc kubenswrapper[4689]: I1210 12:35:47.493169 4689 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cc5d338b5a03d6e97105ca1d1ad92ca433103036964e24c38b7cabb9bbb8e360" Dec 10 12:35:50 crc kubenswrapper[4689]: I1210 12:35:50.186273 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5784cf869f-qlcgw" Dec 10 12:35:50 crc kubenswrapper[4689]: I1210 12:35:50.258469 4689 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-75c8ddd69c-x4qc2"] Dec 10 12:35:50 crc kubenswrapper[4689]: I1210 12:35:50.259017 4689 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-75c8ddd69c-x4qc2" podUID="1d2effe7-e7d4-417b-8446-6466eda6c94c" containerName="dnsmasq-dns" containerID="cri-o://696d38016f7fdcb0e0495c2becf2de963a862979fa2db40147053961d1bdb384" gracePeriod=10 Dec 10 12:35:50 crc kubenswrapper[4689]: I1210 12:35:50.419541 4689 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Dec 10 12:35:50 crc kubenswrapper[4689]: I1210 12:35:50.460418 4689 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Dec 10 12:35:50 crc kubenswrapper[4689]: I1210 12:35:50.534570 4689 generic.go:334] "Generic (PLEG): container finished" podID="1d2effe7-e7d4-417b-8446-6466eda6c94c" containerID="696d38016f7fdcb0e0495c2becf2de963a862979fa2db40147053961d1bdb384" exitCode=0 Dec 10 12:35:50 crc kubenswrapper[4689]: I1210 12:35:50.534655 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-75c8ddd69c-x4qc2" event={"ID":"1d2effe7-e7d4-417b-8446-6466eda6c94c","Type":"ContainerDied","Data":"696d38016f7fdcb0e0495c2becf2de963a862979fa2db40147053961d1bdb384"} Dec 10 12:35:50 crc kubenswrapper[4689]: I1210 12:35:50.534773 4689 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="82df75b3-4559-47c4-bc1a-25db8a50e8bf" containerName="cinder-scheduler" containerID="cri-o://87a779a65ac7cc1b3b571e5eb66b3944ad7307e58627bcc58c0a24808135052c" gracePeriod=30 Dec 10 12:35:50 crc kubenswrapper[4689]: I1210 12:35:50.534856 4689 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="82df75b3-4559-47c4-bc1a-25db8a50e8bf" containerName="probe" containerID="cri-o://1daa894f6e120e11933b91a544ebd3f573cd492f883b879daf51088fcdbe7b65" gracePeriod=30 Dec 10 12:35:50 crc kubenswrapper[4689]: I1210 12:35:50.748142 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-74c576f5cb-ljzfq" Dec 10 12:35:50 crc kubenswrapper[4689]: I1210 12:35:50.851599 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-74c576f5cb-ljzfq" Dec 10 12:35:50 crc kubenswrapper[4689]: I1210 12:35:50.909783 4689 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-7b98b8dd66-xfv7n"] Dec 10 12:35:50 crc kubenswrapper[4689]: I1210 12:35:50.910053 4689 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-7b98b8dd66-xfv7n" podUID="26140423-1fa8-498b-b1da-487de5d0635f" containerName="barbican-api-log" containerID="cri-o://92bfe216c34d0444743b1a4d4ac2c63fc1c79aee442fc3fb497aac04e0c193f6" gracePeriod=30 Dec 10 12:35:50 crc kubenswrapper[4689]: I1210 
12:35:50.910182 4689 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-7b98b8dd66-xfv7n" podUID="26140423-1fa8-498b-b1da-487de5d0635f" containerName="barbican-api" containerID="cri-o://2b112f11d8b5b0a0943a19cfd5aaec6931880a4b031de63c920685a7efebd132" gracePeriod=30 Dec 10 12:35:51 crc kubenswrapper[4689]: I1210 12:35:51.547224 4689 generic.go:334] "Generic (PLEG): container finished" podID="26140423-1fa8-498b-b1da-487de5d0635f" containerID="92bfe216c34d0444743b1a4d4ac2c63fc1c79aee442fc3fb497aac04e0c193f6" exitCode=143 Dec 10 12:35:51 crc kubenswrapper[4689]: I1210 12:35:51.547308 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7b98b8dd66-xfv7n" event={"ID":"26140423-1fa8-498b-b1da-487de5d0635f","Type":"ContainerDied","Data":"92bfe216c34d0444743b1a4d4ac2c63fc1c79aee442fc3fb497aac04e0c193f6"} Dec 10 12:35:51 crc kubenswrapper[4689]: I1210 12:35:51.550420 4689 generic.go:334] "Generic (PLEG): container finished" podID="82df75b3-4559-47c4-bc1a-25db8a50e8bf" containerID="1daa894f6e120e11933b91a544ebd3f573cd492f883b879daf51088fcdbe7b65" exitCode=0 Dec 10 12:35:51 crc kubenswrapper[4689]: I1210 12:35:51.550440 4689 generic.go:334] "Generic (PLEG): container finished" podID="82df75b3-4559-47c4-bc1a-25db8a50e8bf" containerID="87a779a65ac7cc1b3b571e5eb66b3944ad7307e58627bcc58c0a24808135052c" exitCode=0 Dec 10 12:35:51 crc kubenswrapper[4689]: I1210 12:35:51.551151 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"82df75b3-4559-47c4-bc1a-25db8a50e8bf","Type":"ContainerDied","Data":"1daa894f6e120e11933b91a544ebd3f573cd492f883b879daf51088fcdbe7b65"} Dec 10 12:35:51 crc kubenswrapper[4689]: I1210 12:35:51.551199 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"82df75b3-4559-47c4-bc1a-25db8a50e8bf","Type":"ContainerDied","Data":"87a779a65ac7cc1b3b571e5eb66b3944ad7307e58627bcc58c0a24808135052c"} Dec 10 12:35:52 crc kubenswrapper[4689]: I1210 12:35:52.096543 4689 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Dec 10 12:35:52 crc kubenswrapper[4689]: I1210 12:35:52.180474 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/537ef4c8-28a0-48c0-8392-d7d6252670f2-scripts\") pod \"537ef4c8-28a0-48c0-8392-d7d6252670f2\" (UID: \"537ef4c8-28a0-48c0-8392-d7d6252670f2\") " Dec 10 12:35:52 crc kubenswrapper[4689]: I1210 12:35:52.180589 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/537ef4c8-28a0-48c0-8392-d7d6252670f2-logs\") pod \"537ef4c8-28a0-48c0-8392-d7d6252670f2\" (UID: \"537ef4c8-28a0-48c0-8392-d7d6252670f2\") " Dec 10 12:35:52 crc kubenswrapper[4689]: I1210 12:35:52.180637 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/537ef4c8-28a0-48c0-8392-d7d6252670f2-config-data\") pod \"537ef4c8-28a0-48c0-8392-d7d6252670f2\" (UID: \"537ef4c8-28a0-48c0-8392-d7d6252670f2\") " Dec 10 12:35:52 crc kubenswrapper[4689]: I1210 12:35:52.180706 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/537ef4c8-28a0-48c0-8392-d7d6252670f2-config-data-custom\") pod \"537ef4c8-28a0-48c0-8392-d7d6252670f2\" (UID: \"537ef4c8-28a0-48c0-8392-d7d6252670f2\") " Dec 10 12:35:52 crc kubenswrapper[4689]: I1210 12:35:52.180761 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/537ef4c8-28a0-48c0-8392-d7d6252670f2-etc-machine-id\") pod \"537ef4c8-28a0-48c0-8392-d7d6252670f2\" (UID: \"537ef4c8-28a0-48c0-8392-d7d6252670f2\") " Dec 10 12:35:52 crc kubenswrapper[4689]: I1210 12:35:52.180815 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g8qbg\" (UniqueName: \"kubernetes.io/projected/537ef4c8-28a0-48c0-8392-d7d6252670f2-kube-api-access-g8qbg\") pod \"537ef4c8-28a0-48c0-8392-d7d6252670f2\" (UID: \"537ef4c8-28a0-48c0-8392-d7d6252670f2\") " Dec 10 12:35:52 crc kubenswrapper[4689]: I1210 12:35:52.180906 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/537ef4c8-28a0-48c0-8392-d7d6252670f2-combined-ca-bundle\") pod \"537ef4c8-28a0-48c0-8392-d7d6252670f2\" (UID: \"537ef4c8-28a0-48c0-8392-d7d6252670f2\") " Dec 10 12:35:52 crc kubenswrapper[4689]: I1210 12:35:52.181883 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/537ef4c8-28a0-48c0-8392-d7d6252670f2-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "537ef4c8-28a0-48c0-8392-d7d6252670f2" (UID: "537ef4c8-28a0-48c0-8392-d7d6252670f2"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 10 12:35:52 crc kubenswrapper[4689]: I1210 12:35:52.193281 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/537ef4c8-28a0-48c0-8392-d7d6252670f2-logs" (OuterVolumeSpecName: "logs") pod "537ef4c8-28a0-48c0-8392-d7d6252670f2" (UID: "537ef4c8-28a0-48c0-8392-d7d6252670f2"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 10 12:35:52 crc kubenswrapper[4689]: I1210 12:35:52.198196 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/537ef4c8-28a0-48c0-8392-d7d6252670f2-scripts" (OuterVolumeSpecName: "scripts") pod "537ef4c8-28a0-48c0-8392-d7d6252670f2" (UID: "537ef4c8-28a0-48c0-8392-d7d6252670f2"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 12:35:52 crc kubenswrapper[4689]: I1210 12:35:52.199363 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/537ef4c8-28a0-48c0-8392-d7d6252670f2-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "537ef4c8-28a0-48c0-8392-d7d6252670f2" (UID: "537ef4c8-28a0-48c0-8392-d7d6252670f2"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 12:35:52 crc kubenswrapper[4689]: I1210 12:35:52.204176 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/537ef4c8-28a0-48c0-8392-d7d6252670f2-kube-api-access-g8qbg" (OuterVolumeSpecName: "kube-api-access-g8qbg") pod "537ef4c8-28a0-48c0-8392-d7d6252670f2" (UID: "537ef4c8-28a0-48c0-8392-d7d6252670f2"). InnerVolumeSpecName "kube-api-access-g8qbg". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 12:35:52 crc kubenswrapper[4689]: I1210 12:35:52.297354 4689 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/537ef4c8-28a0-48c0-8392-d7d6252670f2-scripts\") on node \"crc\" DevicePath \"\"" Dec 10 12:35:52 crc kubenswrapper[4689]: I1210 12:35:52.301042 4689 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/537ef4c8-28a0-48c0-8392-d7d6252670f2-logs\") on node \"crc\" DevicePath \"\"" Dec 10 12:35:52 crc kubenswrapper[4689]: I1210 12:35:52.301079 4689 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/537ef4c8-28a0-48c0-8392-d7d6252670f2-config-data-custom\") on node \"crc\" DevicePath \"\"" Dec 10 12:35:52 crc kubenswrapper[4689]: I1210 12:35:52.301104 4689 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/537ef4c8-28a0-48c0-8392-d7d6252670f2-etc-machine-id\") on node \"crc\" DevicePath \"\"" Dec 10 12:35:52 crc kubenswrapper[4689]: I1210 12:35:52.301114 4689 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g8qbg\" (UniqueName: \"kubernetes.io/projected/537ef4c8-28a0-48c0-8392-d7d6252670f2-kube-api-access-g8qbg\") on node \"crc\" DevicePath \"\"" Dec 10 12:35:52 crc kubenswrapper[4689]: I1210 12:35:52.382962 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/537ef4c8-28a0-48c0-8392-d7d6252670f2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "537ef4c8-28a0-48c0-8392-d7d6252670f2" (UID: "537ef4c8-28a0-48c0-8392-d7d6252670f2"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 12:35:52 crc kubenswrapper[4689]: I1210 12:35:52.397415 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/537ef4c8-28a0-48c0-8392-d7d6252670f2-config-data" (OuterVolumeSpecName: "config-data") pod "537ef4c8-28a0-48c0-8392-d7d6252670f2" (UID: "537ef4c8-28a0-48c0-8392-d7d6252670f2"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 12:35:52 crc kubenswrapper[4689]: I1210 12:35:52.403303 4689 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/537ef4c8-28a0-48c0-8392-d7d6252670f2-config-data\") on node \"crc\" DevicePath \"\"" Dec 10 12:35:52 crc kubenswrapper[4689]: I1210 12:35:52.403343 4689 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/537ef4c8-28a0-48c0-8392-d7d6252670f2-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 10 12:35:52 crc kubenswrapper[4689]: I1210 12:35:52.575419 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"537ef4c8-28a0-48c0-8392-d7d6252670f2","Type":"ContainerDied","Data":"2f65197d460bb93c7a7c189d8e788425480d588db8c12131ca7aa6ac26fab2cd"} Dec 10 12:35:52 crc kubenswrapper[4689]: I1210 12:35:52.575473 4689 scope.go:117] "RemoveContainer" containerID="702633101c435ac4df049fd07176f7b2010bb13c8f7d44eebdf6d8fcac7fcf20" Dec 10 12:35:52 crc kubenswrapper[4689]: I1210 12:35:52.575587 4689 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Dec 10 12:35:52 crc kubenswrapper[4689]: I1210 12:35:52.611303 4689 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Dec 10 12:35:52 crc kubenswrapper[4689]: I1210 12:35:52.632504 4689 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"] Dec 10 12:35:52 crc kubenswrapper[4689]: I1210 12:35:52.641867 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Dec 10 12:35:52 crc kubenswrapper[4689]: E1210 12:35:52.642332 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="537ef4c8-28a0-48c0-8392-d7d6252670f2" containerName="cinder-api" Dec 10 12:35:52 crc kubenswrapper[4689]: I1210 12:35:52.642350 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="537ef4c8-28a0-48c0-8392-d7d6252670f2" containerName="cinder-api" Dec 10 12:35:52 crc kubenswrapper[4689]: E1210 12:35:52.642386 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="537ef4c8-28a0-48c0-8392-d7d6252670f2" containerName="cinder-api-log" Dec 10 12:35:52 crc kubenswrapper[4689]: I1210 12:35:52.642392 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="537ef4c8-28a0-48c0-8392-d7d6252670f2" containerName="cinder-api-log" Dec 10 12:35:52 crc kubenswrapper[4689]: I1210 12:35:52.642592 4689 memory_manager.go:354] "RemoveStaleState removing state" podUID="537ef4c8-28a0-48c0-8392-d7d6252670f2" containerName="cinder-api" Dec 10 12:35:52 crc kubenswrapper[4689]: I1210 12:35:52.642611 4689 memory_manager.go:354] "RemoveStaleState removing state" podUID="537ef4c8-28a0-48c0-8392-d7d6252670f2" containerName="cinder-api-log" Dec 10 12:35:52 crc kubenswrapper[4689]: I1210 12:35:52.644154 4689 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Dec 10 12:35:52 crc kubenswrapper[4689]: I1210 12:35:52.652856 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-internal-svc" Dec 10 12:35:52 crc kubenswrapper[4689]: I1210 12:35:52.653031 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-public-svc" Dec 10 12:35:52 crc kubenswrapper[4689]: I1210 12:35:52.653140 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Dec 10 12:35:52 crc kubenswrapper[4689]: I1210 12:35:52.656413 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Dec 10 12:35:52 crc kubenswrapper[4689]: I1210 12:35:52.711633 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3033a7b9-9374-47c4-89a2-188204ccd941-public-tls-certs\") pod \"cinder-api-0\" (UID: \"3033a7b9-9374-47c4-89a2-188204ccd941\") " pod="openstack/cinder-api-0" Dec 10 12:35:52 crc kubenswrapper[4689]: I1210 12:35:52.711697 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/3033a7b9-9374-47c4-89a2-188204ccd941-etc-machine-id\") pod \"cinder-api-0\" (UID: \"3033a7b9-9374-47c4-89a2-188204ccd941\") " pod="openstack/cinder-api-0" Dec 10 12:35:52 crc kubenswrapper[4689]: I1210 12:35:52.711715 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3033a7b9-9374-47c4-89a2-188204ccd941-logs\") pod \"cinder-api-0\" (UID: \"3033a7b9-9374-47c4-89a2-188204ccd941\") " pod="openstack/cinder-api-0" Dec 10 12:35:52 crc kubenswrapper[4689]: I1210 12:35:52.711779 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3033a7b9-9374-47c4-89a2-188204ccd941-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"3033a7b9-9374-47c4-89a2-188204ccd941\") " pod="openstack/cinder-api-0" Dec 10 12:35:52 crc kubenswrapper[4689]: I1210 12:35:52.711831 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kk7hh\" (UniqueName: \"kubernetes.io/projected/3033a7b9-9374-47c4-89a2-188204ccd941-kube-api-access-kk7hh\") pod \"cinder-api-0\" (UID: \"3033a7b9-9374-47c4-89a2-188204ccd941\") " pod="openstack/cinder-api-0" Dec 10 12:35:52 crc kubenswrapper[4689]: I1210 12:35:52.711877 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3033a7b9-9374-47c4-89a2-188204ccd941-scripts\") pod \"cinder-api-0\" (UID: \"3033a7b9-9374-47c4-89a2-188204ccd941\") " pod="openstack/cinder-api-0" Dec 10 12:35:52 crc kubenswrapper[4689]: I1210 12:35:52.711937 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3033a7b9-9374-47c4-89a2-188204ccd941-config-data\") pod \"cinder-api-0\" (UID: \"3033a7b9-9374-47c4-89a2-188204ccd941\") " pod="openstack/cinder-api-0" Dec 10 12:35:52 crc kubenswrapper[4689]: I1210 12:35:52.711965 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/3033a7b9-9374-47c4-89a2-188204ccd941-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"3033a7b9-9374-47c4-89a2-188204ccd941\") " pod="openstack/cinder-api-0" Dec 10 12:35:52 crc kubenswrapper[4689]: I1210 12:35:52.712010 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3033a7b9-9374-47c4-89a2-188204ccd941-config-data-custom\") pod \"cinder-api-0\" (UID: \"3033a7b9-9374-47c4-89a2-188204ccd941\") " pod="openstack/cinder-api-0" Dec 10 12:35:52 crc kubenswrapper[4689]: I1210 12:35:52.813852 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3033a7b9-9374-47c4-89a2-188204ccd941-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"3033a7b9-9374-47c4-89a2-188204ccd941\") " pod="openstack/cinder-api-0" Dec 10 12:35:52 crc kubenswrapper[4689]: I1210 12:35:52.813909 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3033a7b9-9374-47c4-89a2-188204ccd941-config-data-custom\") pod \"cinder-api-0\" (UID: \"3033a7b9-9374-47c4-89a2-188204ccd941\") " pod="openstack/cinder-api-0" Dec 10 12:35:52 crc kubenswrapper[4689]: I1210 12:35:52.813952 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3033a7b9-9374-47c4-89a2-188204ccd941-public-tls-certs\") pod \"cinder-api-0\" (UID: \"3033a7b9-9374-47c4-89a2-188204ccd941\") " pod="openstack/cinder-api-0" Dec 10 12:35:52 crc kubenswrapper[4689]: I1210 12:35:52.813993 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/3033a7b9-9374-47c4-89a2-188204ccd941-etc-machine-id\") pod \"cinder-api-0\" (UID: \"3033a7b9-9374-47c4-89a2-188204ccd941\") " pod="openstack/cinder-api-0" Dec 10 12:35:52 crc kubenswrapper[4689]: I1210 12:35:52.814016 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3033a7b9-9374-47c4-89a2-188204ccd941-logs\") pod \"cinder-api-0\" (UID: \"3033a7b9-9374-47c4-89a2-188204ccd941\") " pod="openstack/cinder-api-0" Dec 10 12:35:52 crc kubenswrapper[4689]: I1210 12:35:52.814064 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3033a7b9-9374-47c4-89a2-188204ccd941-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"3033a7b9-9374-47c4-89a2-188204ccd941\") " pod="openstack/cinder-api-0" Dec 10 12:35:52 crc kubenswrapper[4689]: I1210 12:35:52.814130 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kk7hh\" (UniqueName: \"kubernetes.io/projected/3033a7b9-9374-47c4-89a2-188204ccd941-kube-api-access-kk7hh\") pod \"cinder-api-0\" (UID: \"3033a7b9-9374-47c4-89a2-188204ccd941\") " pod="openstack/cinder-api-0" Dec 10 12:35:52 crc kubenswrapper[4689]: I1210 12:35:52.814155 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3033a7b9-9374-47c4-89a2-188204ccd941-scripts\") pod \"cinder-api-0\" (UID: \"3033a7b9-9374-47c4-89a2-188204ccd941\") " pod="openstack/cinder-api-0" Dec 10 12:35:52 crc kubenswrapper[4689]: I1210 12:35:52.814241 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/3033a7b9-9374-47c4-89a2-188204ccd941-etc-machine-id\") pod \"cinder-api-0\" (UID: \"3033a7b9-9374-47c4-89a2-188204ccd941\") " pod="openstack/cinder-api-0" Dec 10 12:35:52 crc kubenswrapper[4689]: I1210 12:35:52.815819 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3033a7b9-9374-47c4-89a2-188204ccd941-logs\") pod \"cinder-api-0\" (UID: \"3033a7b9-9374-47c4-89a2-188204ccd941\") " pod="openstack/cinder-api-0" Dec 10 12:35:52 crc kubenswrapper[4689]: I1210 12:35:52.815916 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3033a7b9-9374-47c4-89a2-188204ccd941-config-data\") pod \"cinder-api-0\" (UID: \"3033a7b9-9374-47c4-89a2-188204ccd941\") " pod="openstack/cinder-api-0" Dec 10 12:35:52 crc kubenswrapper[4689]: I1210 12:35:52.822870 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3033a7b9-9374-47c4-89a2-188204ccd941-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"3033a7b9-9374-47c4-89a2-188204ccd941\") " pod="openstack/cinder-api-0" Dec 10 12:35:52 crc kubenswrapper[4689]: I1210 12:35:52.824637 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3033a7b9-9374-47c4-89a2-188204ccd941-public-tls-certs\") pod \"cinder-api-0\" (UID: \"3033a7b9-9374-47c4-89a2-188204ccd941\") " pod="openstack/cinder-api-0" Dec 10 12:35:52 crc kubenswrapper[4689]: I1210 12:35:52.825583 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3033a7b9-9374-47c4-89a2-188204ccd941-config-data-custom\") pod \"cinder-api-0\" (UID: \"3033a7b9-9374-47c4-89a2-188204ccd941\") " pod="openstack/cinder-api-0" Dec 10 12:35:52 crc kubenswrapper[4689]: I1210 12:35:52.826140 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3033a7b9-9374-47c4-89a2-188204ccd941-config-data\") pod \"cinder-api-0\" (UID: \"3033a7b9-9374-47c4-89a2-188204ccd941\") " pod="openstack/cinder-api-0" Dec 10 12:35:52 crc kubenswrapper[4689]: I1210 12:35:52.826403 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3033a7b9-9374-47c4-89a2-188204ccd941-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"3033a7b9-9374-47c4-89a2-188204ccd941\") " pod="openstack/cinder-api-0" Dec 10 12:35:52 crc kubenswrapper[4689]: I1210 12:35:52.826493 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3033a7b9-9374-47c4-89a2-188204ccd941-scripts\") pod \"cinder-api-0\" (UID: \"3033a7b9-9374-47c4-89a2-188204ccd941\") " pod="openstack/cinder-api-0" Dec 10 12:35:52 crc kubenswrapper[4689]: I1210 12:35:52.831403 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kk7hh\" (UniqueName: \"kubernetes.io/projected/3033a7b9-9374-47c4-89a2-188204ccd941-kube-api-access-kk7hh\") pod \"cinder-api-0\" (UID: \"3033a7b9-9374-47c4-89a2-188204ccd941\") " pod="openstack/cinder-api-0" Dec 10 12:35:53 crc kubenswrapper[4689]: I1210 12:35:53.013440 4689 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Dec 10 12:35:54 crc kubenswrapper[4689]: I1210 12:35:54.065258 4689 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-7b98b8dd66-xfv7n" podUID="26140423-1fa8-498b-b1da-487de5d0635f" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.156:9311/healthcheck\": read tcp 10.217.0.2:38300->10.217.0.156:9311: read: connection reset by peer" Dec 10 12:35:54 crc kubenswrapper[4689]: I1210 12:35:54.065386 4689 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-7b98b8dd66-xfv7n" podUID="26140423-1fa8-498b-b1da-487de5d0635f" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.156:9311/healthcheck\": read tcp 10.217.0.2:38310->10.217.0.156:9311: read: connection reset by peer" Dec 10 12:35:54 crc kubenswrapper[4689]: I1210 12:35:54.519329 4689 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="537ef4c8-28a0-48c0-8392-d7d6252670f2" path="/var/lib/kubelet/pods/537ef4c8-28a0-48c0-8392-d7d6252670f2/volumes" Dec 10 12:35:54 crc kubenswrapper[4689]: I1210 12:35:54.603217 4689 generic.go:334] "Generic (PLEG): container finished" podID="26140423-1fa8-498b-b1da-487de5d0635f" containerID="2b112f11d8b5b0a0943a19cfd5aaec6931880a4b031de63c920685a7efebd132" exitCode=0 Dec 10 12:35:54 crc kubenswrapper[4689]: I1210 12:35:54.603264 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7b98b8dd66-xfv7n" event={"ID":"26140423-1fa8-498b-b1da-487de5d0635f","Type":"ContainerDied","Data":"2b112f11d8b5b0a0943a19cfd5aaec6931880a4b031de63c920685a7efebd132"} Dec 10 12:35:55 crc kubenswrapper[4689]: I1210 12:35:55.562736 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ironic-inspector-db-sync-9zldg"] Dec 10 12:35:55 crc kubenswrapper[4689]: I1210 12:35:55.564460 4689 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ironic-inspector-db-sync-9zldg" Dec 10 12:35:55 crc kubenswrapper[4689]: I1210 12:35:55.570010 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ironic-inspector-config-data" Dec 10 12:35:55 crc kubenswrapper[4689]: I1210 12:35:55.570561 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ironic-inspector-scripts" Dec 10 12:35:55 crc kubenswrapper[4689]: I1210 12:35:55.596038 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-inspector-db-sync-9zldg"] Dec 10 12:35:55 crc kubenswrapper[4689]: I1210 12:35:55.667595 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/0ebb276c-ffb4-490e-bf4b-c55c0c49aa43-config\") pod \"ironic-inspector-db-sync-9zldg\" (UID: \"0ebb276c-ffb4-490e-bf4b-c55c0c49aa43\") " pod="openstack/ironic-inspector-db-sync-9zldg" Dec 10 12:35:55 crc kubenswrapper[4689]: I1210 12:35:55.667661 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0ebb276c-ffb4-490e-bf4b-c55c0c49aa43-scripts\") pod \"ironic-inspector-db-sync-9zldg\" (UID: \"0ebb276c-ffb4-490e-bf4b-c55c0c49aa43\") " pod="openstack/ironic-inspector-db-sync-9zldg" Dec 10 12:35:55 crc kubenswrapper[4689]: I1210 12:35:55.667714 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-ironic\" (UniqueName: \"kubernetes.io/empty-dir/0ebb276c-ffb4-490e-bf4b-c55c0c49aa43-var-lib-ironic\") pod \"ironic-inspector-db-sync-9zldg\" (UID: \"0ebb276c-ffb4-490e-bf4b-c55c0c49aa43\") " pod="openstack/ironic-inspector-db-sync-9zldg" Dec 10 12:35:55 crc kubenswrapper[4689]: I1210 12:35:55.667799 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qtct5\" (UniqueName: \"kubernetes.io/projected/0ebb276c-ffb4-490e-bf4b-c55c0c49aa43-kube-api-access-qtct5\") pod \"ironic-inspector-db-sync-9zldg\" (UID: \"0ebb276c-ffb4-490e-bf4b-c55c0c49aa43\") " pod="openstack/ironic-inspector-db-sync-9zldg" Dec 10 12:35:55 crc kubenswrapper[4689]: I1210 12:35:55.667827 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-ironic-inspector-dhcp-hostsdir\" (UniqueName: \"kubernetes.io/empty-dir/0ebb276c-ffb4-490e-bf4b-c55c0c49aa43-var-lib-ironic-inspector-dhcp-hostsdir\") pod \"ironic-inspector-db-sync-9zldg\" (UID: \"0ebb276c-ffb4-490e-bf4b-c55c0c49aa43\") " pod="openstack/ironic-inspector-db-sync-9zldg" Dec 10 12:35:55 crc kubenswrapper[4689]: I1210 12:35:55.667844 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/0ebb276c-ffb4-490e-bf4b-c55c0c49aa43-etc-podinfo\") pod \"ironic-inspector-db-sync-9zldg\" (UID: \"0ebb276c-ffb4-490e-bf4b-c55c0c49aa43\") " pod="openstack/ironic-inspector-db-sync-9zldg" Dec 10 12:35:55 crc kubenswrapper[4689]: I1210 12:35:55.667878 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0ebb276c-ffb4-490e-bf4b-c55c0c49aa43-combined-ca-bundle\") pod \"ironic-inspector-db-sync-9zldg\" (UID: \"0ebb276c-ffb4-490e-bf4b-c55c0c49aa43\") " pod="openstack/ironic-inspector-db-sync-9zldg" Dec 10 12:35:55 crc kubenswrapper[4689]: I1210 12:35:55.725548 
4689 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/cinder-api-0" podUID="537ef4c8-28a0-48c0-8392-d7d6252670f2" containerName="cinder-api" probeResult="failure" output="Get \"http://10.217.0.161:8776/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 10 12:35:55 crc kubenswrapper[4689]: I1210 12:35:55.769172 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-ironic-inspector-dhcp-hostsdir\" (UniqueName: \"kubernetes.io/empty-dir/0ebb276c-ffb4-490e-bf4b-c55c0c49aa43-var-lib-ironic-inspector-dhcp-hostsdir\") pod \"ironic-inspector-db-sync-9zldg\" (UID: \"0ebb276c-ffb4-490e-bf4b-c55c0c49aa43\") " pod="openstack/ironic-inspector-db-sync-9zldg" Dec 10 12:35:55 crc kubenswrapper[4689]: I1210 12:35:55.769216 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/0ebb276c-ffb4-490e-bf4b-c55c0c49aa43-etc-podinfo\") pod \"ironic-inspector-db-sync-9zldg\" (UID: \"0ebb276c-ffb4-490e-bf4b-c55c0c49aa43\") " pod="openstack/ironic-inspector-db-sync-9zldg" Dec 10 12:35:55 crc kubenswrapper[4689]: I1210 12:35:55.769259 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0ebb276c-ffb4-490e-bf4b-c55c0c49aa43-combined-ca-bundle\") pod \"ironic-inspector-db-sync-9zldg\" (UID: \"0ebb276c-ffb4-490e-bf4b-c55c0c49aa43\") " pod="openstack/ironic-inspector-db-sync-9zldg" Dec 10 12:35:55 crc kubenswrapper[4689]: I1210 12:35:55.769303 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/0ebb276c-ffb4-490e-bf4b-c55c0c49aa43-config\") pod \"ironic-inspector-db-sync-9zldg\" (UID: \"0ebb276c-ffb4-490e-bf4b-c55c0c49aa43\") " pod="openstack/ironic-inspector-db-sync-9zldg" Dec 10 12:35:55 crc kubenswrapper[4689]: I1210 12:35:55.769343 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0ebb276c-ffb4-490e-bf4b-c55c0c49aa43-scripts\") pod \"ironic-inspector-db-sync-9zldg\" (UID: \"0ebb276c-ffb4-490e-bf4b-c55c0c49aa43\") " pod="openstack/ironic-inspector-db-sync-9zldg" Dec 10 12:35:55 crc kubenswrapper[4689]: I1210 12:35:55.769373 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-ironic\" (UniqueName: \"kubernetes.io/empty-dir/0ebb276c-ffb4-490e-bf4b-c55c0c49aa43-var-lib-ironic\") pod \"ironic-inspector-db-sync-9zldg\" (UID: \"0ebb276c-ffb4-490e-bf4b-c55c0c49aa43\") " pod="openstack/ironic-inspector-db-sync-9zldg" Dec 10 12:35:55 crc kubenswrapper[4689]: I1210 12:35:55.769437 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qtct5\" (UniqueName: \"kubernetes.io/projected/0ebb276c-ffb4-490e-bf4b-c55c0c49aa43-kube-api-access-qtct5\") pod \"ironic-inspector-db-sync-9zldg\" (UID: \"0ebb276c-ffb4-490e-bf4b-c55c0c49aa43\") " pod="openstack/ironic-inspector-db-sync-9zldg" Dec 10 12:35:55 crc kubenswrapper[4689]: I1210 12:35:55.770085 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-ironic-inspector-dhcp-hostsdir\" (UniqueName: \"kubernetes.io/empty-dir/0ebb276c-ffb4-490e-bf4b-c55c0c49aa43-var-lib-ironic-inspector-dhcp-hostsdir\") pod \"ironic-inspector-db-sync-9zldg\" (UID: \"0ebb276c-ffb4-490e-bf4b-c55c0c49aa43\") " pod="openstack/ironic-inspector-db-sync-9zldg" Dec 10 12:35:55 crc 
kubenswrapper[4689]: I1210 12:35:55.771163 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-ironic\" (UniqueName: \"kubernetes.io/empty-dir/0ebb276c-ffb4-490e-bf4b-c55c0c49aa43-var-lib-ironic\") pod \"ironic-inspector-db-sync-9zldg\" (UID: \"0ebb276c-ffb4-490e-bf4b-c55c0c49aa43\") " pod="openstack/ironic-inspector-db-sync-9zldg" Dec 10 12:35:55 crc kubenswrapper[4689]: I1210 12:35:55.776608 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0ebb276c-ffb4-490e-bf4b-c55c0c49aa43-scripts\") pod \"ironic-inspector-db-sync-9zldg\" (UID: \"0ebb276c-ffb4-490e-bf4b-c55c0c49aa43\") " pod="openstack/ironic-inspector-db-sync-9zldg" Dec 10 12:35:55 crc kubenswrapper[4689]: I1210 12:35:55.776826 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0ebb276c-ffb4-490e-bf4b-c55c0c49aa43-combined-ca-bundle\") pod \"ironic-inspector-db-sync-9zldg\" (UID: \"0ebb276c-ffb4-490e-bf4b-c55c0c49aa43\") " pod="openstack/ironic-inspector-db-sync-9zldg" Dec 10 12:35:55 crc kubenswrapper[4689]: I1210 12:35:55.778251 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/0ebb276c-ffb4-490e-bf4b-c55c0c49aa43-etc-podinfo\") pod \"ironic-inspector-db-sync-9zldg\" (UID: \"0ebb276c-ffb4-490e-bf4b-c55c0c49aa43\") " pod="openstack/ironic-inspector-db-sync-9zldg" Dec 10 12:35:55 crc kubenswrapper[4689]: I1210 12:35:55.779651 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/0ebb276c-ffb4-490e-bf4b-c55c0c49aa43-config\") pod \"ironic-inspector-db-sync-9zldg\" (UID: \"0ebb276c-ffb4-490e-bf4b-c55c0c49aa43\") " pod="openstack/ironic-inspector-db-sync-9zldg" Dec 10 12:35:55 crc kubenswrapper[4689]: I1210 12:35:55.791547 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qtct5\" (UniqueName: \"kubernetes.io/projected/0ebb276c-ffb4-490e-bf4b-c55c0c49aa43-kube-api-access-qtct5\") pod \"ironic-inspector-db-sync-9zldg\" (UID: \"0ebb276c-ffb4-490e-bf4b-c55c0c49aa43\") " pod="openstack/ironic-inspector-db-sync-9zldg" Dec 10 12:35:55 crc kubenswrapper[4689]: I1210 12:35:55.890610 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-inspector-db-sync-9zldg" Dec 10 12:35:56 crc kubenswrapper[4689]: I1210 12:35:56.398533 4689 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-75c8ddd69c-x4qc2" Dec 10 12:35:56 crc kubenswrapper[4689]: I1210 12:35:56.400011 4689 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Dec 10 12:35:56 crc kubenswrapper[4689]: I1210 12:35:56.491006 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1d2effe7-e7d4-417b-8446-6466eda6c94c-dns-swift-storage-0\") pod \"1d2effe7-e7d4-417b-8446-6466eda6c94c\" (UID: \"1d2effe7-e7d4-417b-8446-6466eda6c94c\") " Dec 10 12:35:56 crc kubenswrapper[4689]: I1210 12:35:56.491078 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/82df75b3-4559-47c4-bc1a-25db8a50e8bf-combined-ca-bundle\") pod \"82df75b3-4559-47c4-bc1a-25db8a50e8bf\" (UID: \"82df75b3-4559-47c4-bc1a-25db8a50e8bf\") " Dec 10 12:35:56 crc kubenswrapper[4689]: I1210 12:35:56.491105 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/82df75b3-4559-47c4-bc1a-25db8a50e8bf-etc-machine-id\") pod \"82df75b3-4559-47c4-bc1a-25db8a50e8bf\" (UID: \"82df75b3-4559-47c4-bc1a-25db8a50e8bf\") " Dec 10 12:35:56 crc kubenswrapper[4689]: I1210 12:35:56.491181 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6mqgq\" (UniqueName: \"kubernetes.io/projected/1d2effe7-e7d4-417b-8446-6466eda6c94c-kube-api-access-6mqgq\") pod \"1d2effe7-e7d4-417b-8446-6466eda6c94c\" (UID: \"1d2effe7-e7d4-417b-8446-6466eda6c94c\") " Dec 10 12:35:56 crc kubenswrapper[4689]: I1210 12:35:56.491206 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/82df75b3-4559-47c4-bc1a-25db8a50e8bf-config-data\") pod \"82df75b3-4559-47c4-bc1a-25db8a50e8bf\" (UID: \"82df75b3-4559-47c4-bc1a-25db8a50e8bf\") " Dec 10 12:35:56 crc kubenswrapper[4689]: I1210 12:35:56.491274 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1d2effe7-e7d4-417b-8446-6466eda6c94c-ovsdbserver-nb\") pod \"1d2effe7-e7d4-417b-8446-6466eda6c94c\" (UID: \"1d2effe7-e7d4-417b-8446-6466eda6c94c\") " Dec 10 12:35:56 crc kubenswrapper[4689]: I1210 12:35:56.491333 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1d2effe7-e7d4-417b-8446-6466eda6c94c-ovsdbserver-sb\") pod \"1d2effe7-e7d4-417b-8446-6466eda6c94c\" (UID: \"1d2effe7-e7d4-417b-8446-6466eda6c94c\") " Dec 10 12:35:56 crc kubenswrapper[4689]: I1210 12:35:56.491368 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1d2effe7-e7d4-417b-8446-6466eda6c94c-dns-svc\") pod \"1d2effe7-e7d4-417b-8446-6466eda6c94c\" (UID: \"1d2effe7-e7d4-417b-8446-6466eda6c94c\") " Dec 10 12:35:56 crc kubenswrapper[4689]: I1210 12:35:56.491389 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/82df75b3-4559-47c4-bc1a-25db8a50e8bf-scripts\") pod \"82df75b3-4559-47c4-bc1a-25db8a50e8bf\" (UID: \"82df75b3-4559-47c4-bc1a-25db8a50e8bf\") " Dec 10 12:35:56 crc kubenswrapper[4689]: I1210 12:35:56.491405 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/82df75b3-4559-47c4-bc1a-25db8a50e8bf-config-data-custom\") pod \"82df75b3-4559-47c4-bc1a-25db8a50e8bf\" (UID: 
\"82df75b3-4559-47c4-bc1a-25db8a50e8bf\") " Dec 10 12:35:56 crc kubenswrapper[4689]: I1210 12:35:56.491434 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1d2effe7-e7d4-417b-8446-6466eda6c94c-config\") pod \"1d2effe7-e7d4-417b-8446-6466eda6c94c\" (UID: \"1d2effe7-e7d4-417b-8446-6466eda6c94c\") " Dec 10 12:35:56 crc kubenswrapper[4689]: I1210 12:35:56.491453 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bcwj5\" (UniqueName: \"kubernetes.io/projected/82df75b3-4559-47c4-bc1a-25db8a50e8bf-kube-api-access-bcwj5\") pod \"82df75b3-4559-47c4-bc1a-25db8a50e8bf\" (UID: \"82df75b3-4559-47c4-bc1a-25db8a50e8bf\") " Dec 10 12:35:56 crc kubenswrapper[4689]: I1210 12:35:56.493037 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/82df75b3-4559-47c4-bc1a-25db8a50e8bf-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "82df75b3-4559-47c4-bc1a-25db8a50e8bf" (UID: "82df75b3-4559-47c4-bc1a-25db8a50e8bf"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 10 12:35:56 crc kubenswrapper[4689]: I1210 12:35:56.514510 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/82df75b3-4559-47c4-bc1a-25db8a50e8bf-scripts" (OuterVolumeSpecName: "scripts") pod "82df75b3-4559-47c4-bc1a-25db8a50e8bf" (UID: "82df75b3-4559-47c4-bc1a-25db8a50e8bf"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 12:35:56 crc kubenswrapper[4689]: I1210 12:35:56.526570 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/82df75b3-4559-47c4-bc1a-25db8a50e8bf-kube-api-access-bcwj5" (OuterVolumeSpecName: "kube-api-access-bcwj5") pod "82df75b3-4559-47c4-bc1a-25db8a50e8bf" (UID: "82df75b3-4559-47c4-bc1a-25db8a50e8bf"). InnerVolumeSpecName "kube-api-access-bcwj5". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 12:35:56 crc kubenswrapper[4689]: I1210 12:35:56.548399 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d2effe7-e7d4-417b-8446-6466eda6c94c-kube-api-access-6mqgq" (OuterVolumeSpecName: "kube-api-access-6mqgq") pod "1d2effe7-e7d4-417b-8446-6466eda6c94c" (UID: "1d2effe7-e7d4-417b-8446-6466eda6c94c"). InnerVolumeSpecName "kube-api-access-6mqgq". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 12:35:56 crc kubenswrapper[4689]: I1210 12:35:56.548506 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/82df75b3-4559-47c4-bc1a-25db8a50e8bf-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "82df75b3-4559-47c4-bc1a-25db8a50e8bf" (UID: "82df75b3-4559-47c4-bc1a-25db8a50e8bf"). InnerVolumeSpecName "config-data-custom". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 12:35:56 crc kubenswrapper[4689]: I1210 12:35:56.581550 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/keystone-85bc68c5bb-jxqf7" Dec 10 12:35:56 crc kubenswrapper[4689]: I1210 12:35:56.594002 4689 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bcwj5\" (UniqueName: \"kubernetes.io/projected/82df75b3-4559-47c4-bc1a-25db8a50e8bf-kube-api-access-bcwj5\") on node \"crc\" DevicePath \"\"" Dec 10 12:35:56 crc kubenswrapper[4689]: I1210 12:35:56.594063 4689 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/82df75b3-4559-47c4-bc1a-25db8a50e8bf-etc-machine-id\") on node \"crc\" DevicePath \"\"" Dec 10 12:35:56 crc kubenswrapper[4689]: I1210 12:35:56.594072 4689 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6mqgq\" (UniqueName: \"kubernetes.io/projected/1d2effe7-e7d4-417b-8446-6466eda6c94c-kube-api-access-6mqgq\") on node \"crc\" DevicePath \"\"" Dec 10 12:35:56 crc kubenswrapper[4689]: I1210 12:35:56.594081 4689 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/82df75b3-4559-47c4-bc1a-25db8a50e8bf-scripts\") on node \"crc\" DevicePath \"\"" Dec 10 12:35:56 crc kubenswrapper[4689]: I1210 12:35:56.594090 4689 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/82df75b3-4559-47c4-bc1a-25db8a50e8bf-config-data-custom\") on node \"crc\" DevicePath \"\"" Dec 10 12:35:56 crc kubenswrapper[4689]: I1210 12:35:56.595079 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/82df75b3-4559-47c4-bc1a-25db8a50e8bf-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "82df75b3-4559-47c4-bc1a-25db8a50e8bf" (UID: "82df75b3-4559-47c4-bc1a-25db8a50e8bf"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 12:35:56 crc kubenswrapper[4689]: I1210 12:35:56.627446 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1d2effe7-e7d4-417b-8446-6466eda6c94c-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "1d2effe7-e7d4-417b-8446-6466eda6c94c" (UID: "1d2effe7-e7d4-417b-8446-6466eda6c94c"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 12:35:56 crc kubenswrapper[4689]: I1210 12:35:56.634167 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"82df75b3-4559-47c4-bc1a-25db8a50e8bf","Type":"ContainerDied","Data":"e68260f12d21de1d6d95efa31bffbfc36f2e8cc911fc1bcdba7a7a673816ef8a"} Dec 10 12:35:56 crc kubenswrapper[4689]: I1210 12:35:56.634251 4689 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Dec 10 12:35:56 crc kubenswrapper[4689]: I1210 12:35:56.641913 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-75c8ddd69c-x4qc2" event={"ID":"1d2effe7-e7d4-417b-8446-6466eda6c94c","Type":"ContainerDied","Data":"4cc211f596daa380c75ba05dabe2184b6824a842e7899ec0a75141f9f88de2e4"} Dec 10 12:35:56 crc kubenswrapper[4689]: I1210 12:35:56.641996 4689 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-75c8ddd69c-x4qc2" Dec 10 12:35:56 crc kubenswrapper[4689]: I1210 12:35:56.647408 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1d2effe7-e7d4-417b-8446-6466eda6c94c-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "1d2effe7-e7d4-417b-8446-6466eda6c94c" (UID: "1d2effe7-e7d4-417b-8446-6466eda6c94c"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 12:35:56 crc kubenswrapper[4689]: I1210 12:35:56.664582 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1d2effe7-e7d4-417b-8446-6466eda6c94c-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "1d2effe7-e7d4-417b-8446-6466eda6c94c" (UID: "1d2effe7-e7d4-417b-8446-6466eda6c94c"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 12:35:56 crc kubenswrapper[4689]: I1210 12:35:56.679287 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1d2effe7-e7d4-417b-8446-6466eda6c94c-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "1d2effe7-e7d4-417b-8446-6466eda6c94c" (UID: "1d2effe7-e7d4-417b-8446-6466eda6c94c"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 12:35:56 crc kubenswrapper[4689]: I1210 12:35:56.687483 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1d2effe7-e7d4-417b-8446-6466eda6c94c-config" (OuterVolumeSpecName: "config") pod "1d2effe7-e7d4-417b-8446-6466eda6c94c" (UID: "1d2effe7-e7d4-417b-8446-6466eda6c94c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 12:35:56 crc kubenswrapper[4689]: I1210 12:35:56.697397 4689 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/82df75b3-4559-47c4-bc1a-25db8a50e8bf-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 10 12:35:56 crc kubenswrapper[4689]: I1210 12:35:56.697433 4689 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1d2effe7-e7d4-417b-8446-6466eda6c94c-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 10 12:35:56 crc kubenswrapper[4689]: I1210 12:35:56.697443 4689 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1d2effe7-e7d4-417b-8446-6466eda6c94c-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 10 12:35:56 crc kubenswrapper[4689]: I1210 12:35:56.697455 4689 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1d2effe7-e7d4-417b-8446-6466eda6c94c-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 10 12:35:56 crc kubenswrapper[4689]: I1210 12:35:56.697465 4689 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1d2effe7-e7d4-417b-8446-6466eda6c94c-config\") on node \"crc\" DevicePath \"\"" Dec 10 12:35:56 crc kubenswrapper[4689]: I1210 12:35:56.697473 4689 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1d2effe7-e7d4-417b-8446-6466eda6c94c-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Dec 10 12:35:56 crc kubenswrapper[4689]: I1210 12:35:56.698558 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/82df75b3-4559-47c4-bc1a-25db8a50e8bf-config-data" (OuterVolumeSpecName: "config-data") pod "82df75b3-4559-47c4-bc1a-25db8a50e8bf" (UID: "82df75b3-4559-47c4-bc1a-25db8a50e8bf"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 12:35:56 crc kubenswrapper[4689]: E1210 12:35:56.784092 4689 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/ubi9/httpd-24:latest" Dec 10 12:35:56 crc kubenswrapper[4689]: E1210 12:35:56.784231 4689 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:proxy-httpd,Image:registry.redhat.io/ubi9/httpd-24:latest,Command:[/usr/sbin/httpd],Args:[-DFOREGROUND],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:proxy-httpd,HostPort:0,ContainerPort:3000,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/httpd/conf/httpd.conf,SubPath:httpd.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/httpd/conf.d/ssl.conf,SubPath:ssl.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:run-httpd,ReadOnly:false,MountPath:/run/httpd,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:log-httpd,ReadOnly:false,MountPath:/var/log/httpd,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hvxgw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/,Port:{0 3000 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:30,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/,Port:{0 3000 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:30,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(1dfb9cf4-7f75-40c2-ade2-81a91be5ad12): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Dec 10 12:35:56 crc kubenswrapper[4689]: E1210 12:35:56.789169 4689 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-httpd\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openstack/ceilometer-0" podUID="1dfb9cf4-7f75-40c2-ade2-81a91be5ad12" Dec 10 12:35:56 crc kubenswrapper[4689]: I1210 12:35:56.799861 4689 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/82df75b3-4559-47c4-bc1a-25db8a50e8bf-config-data\") on node \"crc\" DevicePath \"\"" Dec 10 12:35:56 crc kubenswrapper[4689]: I1210 12:35:56.996786 4689 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Dec 10 12:35:57 crc kubenswrapper[4689]: I1210 12:35:57.020364 4689 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-scheduler-0"] Dec 10 12:35:57 crc kubenswrapper[4689]: I1210 12:35:57.054409 4689 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-75c8ddd69c-x4qc2"] Dec 10 12:35:57 crc kubenswrapper[4689]: I1210 12:35:57.065802 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Dec 10 12:35:57 crc kubenswrapper[4689]: E1210 12:35:57.066224 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="82df75b3-4559-47c4-bc1a-25db8a50e8bf" containerName="cinder-scheduler" Dec 10 12:35:57 crc kubenswrapper[4689]: I1210 12:35:57.066253 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="82df75b3-4559-47c4-bc1a-25db8a50e8bf" containerName="cinder-scheduler" Dec 10 12:35:57 crc kubenswrapper[4689]: E1210 12:35:57.066280 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1d2effe7-e7d4-417b-8446-6466eda6c94c" containerName="init" Dec 10 12:35:57 crc kubenswrapper[4689]: I1210 12:35:57.066287 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="1d2effe7-e7d4-417b-8446-6466eda6c94c" containerName="init" Dec 10 12:35:57 crc kubenswrapper[4689]: E1210 12:35:57.066320 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1d2effe7-e7d4-417b-8446-6466eda6c94c" containerName="dnsmasq-dns" Dec 10 12:35:57 crc kubenswrapper[4689]: I1210 12:35:57.066327 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="1d2effe7-e7d4-417b-8446-6466eda6c94c" containerName="dnsmasq-dns" Dec 10 12:35:57 crc kubenswrapper[4689]: E1210 12:35:57.066342 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="82df75b3-4559-47c4-bc1a-25db8a50e8bf" containerName="probe" Dec 10 12:35:57 crc kubenswrapper[4689]: I1210 12:35:57.066349 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="82df75b3-4559-47c4-bc1a-25db8a50e8bf" containerName="probe" Dec 10 12:35:57 crc kubenswrapper[4689]: I1210 12:35:57.066575 4689 memory_manager.go:354] "RemoveStaleState removing state" podUID="82df75b3-4559-47c4-bc1a-25db8a50e8bf" containerName="probe" Dec 10 12:35:57 crc kubenswrapper[4689]: I1210 12:35:57.066601 4689 memory_manager.go:354] "RemoveStaleState removing state" podUID="1d2effe7-e7d4-417b-8446-6466eda6c94c" containerName="dnsmasq-dns" Dec 10 12:35:57 crc kubenswrapper[4689]: I1210 12:35:57.066628 4689 memory_manager.go:354] "RemoveStaleState removing state" podUID="82df75b3-4559-47c4-bc1a-25db8a50e8bf" containerName="cinder-scheduler" Dec 10 12:35:57 crc kubenswrapper[4689]: I1210 12:35:57.067649 4689 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Dec 10 12:35:57 crc kubenswrapper[4689]: I1210 12:35:57.069411 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Dec 10 12:35:57 crc kubenswrapper[4689]: I1210 12:35:57.096523 4689 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-75c8ddd69c-x4qc2"] Dec 10 12:35:57 crc kubenswrapper[4689]: I1210 12:35:57.107148 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tm86s\" (UniqueName: \"kubernetes.io/projected/8e8f74b8-b74a-42eb-97ec-28680f9999a4-kube-api-access-tm86s\") pod \"cinder-scheduler-0\" (UID: \"8e8f74b8-b74a-42eb-97ec-28680f9999a4\") " pod="openstack/cinder-scheduler-0" Dec 10 12:35:57 crc kubenswrapper[4689]: I1210 12:35:57.107211 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8e8f74b8-b74a-42eb-97ec-28680f9999a4-scripts\") pod \"cinder-scheduler-0\" (UID: \"8e8f74b8-b74a-42eb-97ec-28680f9999a4\") " pod="openstack/cinder-scheduler-0" Dec 10 12:35:57 crc kubenswrapper[4689]: I1210 12:35:57.107237 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8e8f74b8-b74a-42eb-97ec-28680f9999a4-config-data\") pod \"cinder-scheduler-0\" (UID: \"8e8f74b8-b74a-42eb-97ec-28680f9999a4\") " pod="openstack/cinder-scheduler-0" Dec 10 12:35:57 crc kubenswrapper[4689]: I1210 12:35:57.107256 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8e8f74b8-b74a-42eb-97ec-28680f9999a4-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"8e8f74b8-b74a-42eb-97ec-28680f9999a4\") " pod="openstack/cinder-scheduler-0" Dec 10 12:35:57 crc kubenswrapper[4689]: I1210 12:35:57.107284 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8e8f74b8-b74a-42eb-97ec-28680f9999a4-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"8e8f74b8-b74a-42eb-97ec-28680f9999a4\") " pod="openstack/cinder-scheduler-0" Dec 10 12:35:57 crc kubenswrapper[4689]: I1210 12:35:57.107323 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/8e8f74b8-b74a-42eb-97ec-28680f9999a4-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"8e8f74b8-b74a-42eb-97ec-28680f9999a4\") " pod="openstack/cinder-scheduler-0" Dec 10 12:35:57 crc kubenswrapper[4689]: I1210 12:35:57.113852 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Dec 10 12:35:57 crc kubenswrapper[4689]: I1210 12:35:57.208699 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8e8f74b8-b74a-42eb-97ec-28680f9999a4-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"8e8f74b8-b74a-42eb-97ec-28680f9999a4\") " pod="openstack/cinder-scheduler-0" Dec 10 12:35:57 crc kubenswrapper[4689]: I1210 12:35:57.209496 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/8e8f74b8-b74a-42eb-97ec-28680f9999a4-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: 
\"8e8f74b8-b74a-42eb-97ec-28680f9999a4\") " pod="openstack/cinder-scheduler-0" Dec 10 12:35:57 crc kubenswrapper[4689]: I1210 12:35:57.209734 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tm86s\" (UniqueName: \"kubernetes.io/projected/8e8f74b8-b74a-42eb-97ec-28680f9999a4-kube-api-access-tm86s\") pod \"cinder-scheduler-0\" (UID: \"8e8f74b8-b74a-42eb-97ec-28680f9999a4\") " pod="openstack/cinder-scheduler-0" Dec 10 12:35:57 crc kubenswrapper[4689]: I1210 12:35:57.209837 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8e8f74b8-b74a-42eb-97ec-28680f9999a4-scripts\") pod \"cinder-scheduler-0\" (UID: \"8e8f74b8-b74a-42eb-97ec-28680f9999a4\") " pod="openstack/cinder-scheduler-0" Dec 10 12:35:57 crc kubenswrapper[4689]: I1210 12:35:57.209879 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8e8f74b8-b74a-42eb-97ec-28680f9999a4-config-data\") pod \"cinder-scheduler-0\" (UID: \"8e8f74b8-b74a-42eb-97ec-28680f9999a4\") " pod="openstack/cinder-scheduler-0" Dec 10 12:35:57 crc kubenswrapper[4689]: I1210 12:35:57.209901 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8e8f74b8-b74a-42eb-97ec-28680f9999a4-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"8e8f74b8-b74a-42eb-97ec-28680f9999a4\") " pod="openstack/cinder-scheduler-0" Dec 10 12:35:57 crc kubenswrapper[4689]: I1210 12:35:57.210303 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/8e8f74b8-b74a-42eb-97ec-28680f9999a4-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"8e8f74b8-b74a-42eb-97ec-28680f9999a4\") " pod="openstack/cinder-scheduler-0" Dec 10 12:35:57 crc kubenswrapper[4689]: I1210 12:35:57.221122 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8e8f74b8-b74a-42eb-97ec-28680f9999a4-config-data\") pod \"cinder-scheduler-0\" (UID: \"8e8f74b8-b74a-42eb-97ec-28680f9999a4\") " pod="openstack/cinder-scheduler-0" Dec 10 12:35:57 crc kubenswrapper[4689]: I1210 12:35:57.227018 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8e8f74b8-b74a-42eb-97ec-28680f9999a4-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"8e8f74b8-b74a-42eb-97ec-28680f9999a4\") " pod="openstack/cinder-scheduler-0" Dec 10 12:35:57 crc kubenswrapper[4689]: I1210 12:35:57.227678 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8e8f74b8-b74a-42eb-97ec-28680f9999a4-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"8e8f74b8-b74a-42eb-97ec-28680f9999a4\") " pod="openstack/cinder-scheduler-0" Dec 10 12:35:57 crc kubenswrapper[4689]: I1210 12:35:57.228075 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8e8f74b8-b74a-42eb-97ec-28680f9999a4-scripts\") pod \"cinder-scheduler-0\" (UID: \"8e8f74b8-b74a-42eb-97ec-28680f9999a4\") " pod="openstack/cinder-scheduler-0" Dec 10 12:35:57 crc kubenswrapper[4689]: I1210 12:35:57.232815 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tm86s\" (UniqueName: 
\"kubernetes.io/projected/8e8f74b8-b74a-42eb-97ec-28680f9999a4-kube-api-access-tm86s\") pod \"cinder-scheduler-0\" (UID: \"8e8f74b8-b74a-42eb-97ec-28680f9999a4\") " pod="openstack/cinder-scheduler-0" Dec 10 12:35:57 crc kubenswrapper[4689]: I1210 12:35:57.424661 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Dec 10 12:35:57 crc kubenswrapper[4689]: I1210 12:35:57.566375 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-b85ffb7d4-zq54p" Dec 10 12:35:57 crc kubenswrapper[4689]: I1210 12:35:57.651361 4689 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="1dfb9cf4-7f75-40c2-ade2-81a91be5ad12" containerName="ceilometer-notification-agent" containerID="cri-o://5ac0afc4326b546594b5c3642b4fd2287a1df9e69c249162c16348cdd2e33149" gracePeriod=30 Dec 10 12:35:57 crc kubenswrapper[4689]: I1210 12:35:57.651403 4689 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="1dfb9cf4-7f75-40c2-ade2-81a91be5ad12" containerName="sg-core" containerID="cri-o://000713094db390ebb45420ebab5003c479fcbd14372fd96c24a71e5573905369" gracePeriod=30 Dec 10 12:35:57 crc kubenswrapper[4689]: I1210 12:35:57.651332 4689 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="1dfb9cf4-7f75-40c2-ade2-81a91be5ad12" containerName="ceilometer-central-agent" containerID="cri-o://954224d40b7f240c0cce06b7020134601928aad1e5d6674925099e62f2668576" gracePeriod=30 Dec 10 12:35:57 crc kubenswrapper[4689]: I1210 12:35:57.695366 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-55656d7776-js9xr" Dec 10 12:35:57 crc kubenswrapper[4689]: I1210 12:35:57.810573 4689 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-75c8ddd69c-x4qc2" podUID="1d2effe7-e7d4-417b-8446-6466eda6c94c" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.155:5353: i/o timeout" Dec 10 12:35:57 crc kubenswrapper[4689]: I1210 12:35:57.826709 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-b85ffb7d4-zq54p" Dec 10 12:35:58 crc kubenswrapper[4689]: I1210 12:35:58.508108 4689 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d2effe7-e7d4-417b-8446-6466eda6c94c" path="/var/lib/kubelet/pods/1d2effe7-e7d4-417b-8446-6466eda6c94c/volumes" Dec 10 12:35:58 crc kubenswrapper[4689]: I1210 12:35:58.509054 4689 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="82df75b3-4559-47c4-bc1a-25db8a50e8bf" path="/var/lib/kubelet/pods/82df75b3-4559-47c4-bc1a-25db8a50e8bf/volumes" Dec 10 12:35:58 crc kubenswrapper[4689]: I1210 12:35:58.561297 4689 scope.go:117] "RemoveContainer" containerID="de24447dad7585acc958c5c2b10db70f639afdd61c754d185222ca81b34aa3c3" Dec 10 12:35:58 crc kubenswrapper[4689]: I1210 12:35:58.683801 4689 generic.go:334] "Generic (PLEG): container finished" podID="1dfb9cf4-7f75-40c2-ade2-81a91be5ad12" containerID="000713094db390ebb45420ebab5003c479fcbd14372fd96c24a71e5573905369" exitCode=2 Dec 10 12:35:58 crc kubenswrapper[4689]: I1210 12:35:58.684058 4689 generic.go:334] "Generic (PLEG): container finished" podID="1dfb9cf4-7f75-40c2-ade2-81a91be5ad12" containerID="5ac0afc4326b546594b5c3642b4fd2287a1df9e69c249162c16348cdd2e33149" exitCode=0 Dec 10 12:35:58 crc kubenswrapper[4689]: I1210 12:35:58.684075 4689 generic.go:334] 
"Generic (PLEG): container finished" podID="1dfb9cf4-7f75-40c2-ade2-81a91be5ad12" containerID="954224d40b7f240c0cce06b7020134601928aad1e5d6674925099e62f2668576" exitCode=0 Dec 10 12:35:58 crc kubenswrapper[4689]: I1210 12:35:58.683827 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1dfb9cf4-7f75-40c2-ade2-81a91be5ad12","Type":"ContainerDied","Data":"000713094db390ebb45420ebab5003c479fcbd14372fd96c24a71e5573905369"} Dec 10 12:35:58 crc kubenswrapper[4689]: I1210 12:35:58.684177 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1dfb9cf4-7f75-40c2-ade2-81a91be5ad12","Type":"ContainerDied","Data":"5ac0afc4326b546594b5c3642b4fd2287a1df9e69c249162c16348cdd2e33149"} Dec 10 12:35:58 crc kubenswrapper[4689]: I1210 12:35:58.684203 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1dfb9cf4-7f75-40c2-ade2-81a91be5ad12","Type":"ContainerDied","Data":"954224d40b7f240c0cce06b7020134601928aad1e5d6674925099e62f2668576"} Dec 10 12:35:58 crc kubenswrapper[4689]: I1210 12:35:58.692503 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7b98b8dd66-xfv7n" event={"ID":"26140423-1fa8-498b-b1da-487de5d0635f","Type":"ContainerDied","Data":"591de0c31817a291c74301bbdb13c5f35e5e789484dc752a97464434ad5080e0"} Dec 10 12:35:58 crc kubenswrapper[4689]: I1210 12:35:58.692539 4689 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="591de0c31817a291c74301bbdb13c5f35e5e789484dc752a97464434ad5080e0" Dec 10 12:35:58 crc kubenswrapper[4689]: I1210 12:35:58.692880 4689 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-7b98b8dd66-xfv7n" Dec 10 12:35:58 crc kubenswrapper[4689]: I1210 12:35:58.743916 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/26140423-1fa8-498b-b1da-487de5d0635f-combined-ca-bundle\") pod \"26140423-1fa8-498b-b1da-487de5d0635f\" (UID: \"26140423-1fa8-498b-b1da-487de5d0635f\") " Dec 10 12:35:58 crc kubenswrapper[4689]: I1210 12:35:58.744018 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bqk94\" (UniqueName: \"kubernetes.io/projected/26140423-1fa8-498b-b1da-487de5d0635f-kube-api-access-bqk94\") pod \"26140423-1fa8-498b-b1da-487de5d0635f\" (UID: \"26140423-1fa8-498b-b1da-487de5d0635f\") " Dec 10 12:35:58 crc kubenswrapper[4689]: I1210 12:35:58.744161 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/26140423-1fa8-498b-b1da-487de5d0635f-config-data-custom\") pod \"26140423-1fa8-498b-b1da-487de5d0635f\" (UID: \"26140423-1fa8-498b-b1da-487de5d0635f\") " Dec 10 12:35:58 crc kubenswrapper[4689]: I1210 12:35:58.744189 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/26140423-1fa8-498b-b1da-487de5d0635f-config-data\") pod \"26140423-1fa8-498b-b1da-487de5d0635f\" (UID: \"26140423-1fa8-498b-b1da-487de5d0635f\") " Dec 10 12:35:58 crc kubenswrapper[4689]: I1210 12:35:58.744293 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/26140423-1fa8-498b-b1da-487de5d0635f-logs\") pod \"26140423-1fa8-498b-b1da-487de5d0635f\" (UID: \"26140423-1fa8-498b-b1da-487de5d0635f\") " Dec 10 
12:35:58 crc kubenswrapper[4689]: I1210 12:35:58.748636 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/26140423-1fa8-498b-b1da-487de5d0635f-logs" (OuterVolumeSpecName: "logs") pod "26140423-1fa8-498b-b1da-487de5d0635f" (UID: "26140423-1fa8-498b-b1da-487de5d0635f"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 10 12:35:58 crc kubenswrapper[4689]: I1210 12:35:58.752668 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/26140423-1fa8-498b-b1da-487de5d0635f-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "26140423-1fa8-498b-b1da-487de5d0635f" (UID: "26140423-1fa8-498b-b1da-487de5d0635f"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 12:35:58 crc kubenswrapper[4689]: I1210 12:35:58.765883 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/26140423-1fa8-498b-b1da-487de5d0635f-kube-api-access-bqk94" (OuterVolumeSpecName: "kube-api-access-bqk94") pod "26140423-1fa8-498b-b1da-487de5d0635f" (UID: "26140423-1fa8-498b-b1da-487de5d0635f"). InnerVolumeSpecName "kube-api-access-bqk94". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 12:35:58 crc kubenswrapper[4689]: I1210 12:35:58.805796 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/26140423-1fa8-498b-b1da-487de5d0635f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "26140423-1fa8-498b-b1da-487de5d0635f" (UID: "26140423-1fa8-498b-b1da-487de5d0635f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 12:35:58 crc kubenswrapper[4689]: I1210 12:35:58.843145 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/26140423-1fa8-498b-b1da-487de5d0635f-config-data" (OuterVolumeSpecName: "config-data") pod "26140423-1fa8-498b-b1da-487de5d0635f" (UID: "26140423-1fa8-498b-b1da-487de5d0635f"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 12:35:58 crc kubenswrapper[4689]: I1210 12:35:58.885808 4689 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/26140423-1fa8-498b-b1da-487de5d0635f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 10 12:35:58 crc kubenswrapper[4689]: I1210 12:35:58.885852 4689 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bqk94\" (UniqueName: \"kubernetes.io/projected/26140423-1fa8-498b-b1da-487de5d0635f-kube-api-access-bqk94\") on node \"crc\" DevicePath \"\"" Dec 10 12:35:58 crc kubenswrapper[4689]: I1210 12:35:58.885864 4689 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/26140423-1fa8-498b-b1da-487de5d0635f-config-data-custom\") on node \"crc\" DevicePath \"\"" Dec 10 12:35:58 crc kubenswrapper[4689]: I1210 12:35:58.885874 4689 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/26140423-1fa8-498b-b1da-487de5d0635f-config-data\") on node \"crc\" DevicePath \"\"" Dec 10 12:35:58 crc kubenswrapper[4689]: I1210 12:35:58.885887 4689 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/26140423-1fa8-498b-b1da-487de5d0635f-logs\") on node \"crc\" DevicePath \"\"" Dec 10 12:35:58 crc kubenswrapper[4689]: I1210 12:35:58.888847 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-dbcd8ff8b-j52fs"] Dec 10 12:35:59 crc kubenswrapper[4689]: I1210 12:35:59.702770 4689 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-7b98b8dd66-xfv7n" Dec 10 12:35:59 crc kubenswrapper[4689]: I1210 12:35:59.740094 4689 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-7b98b8dd66-xfv7n"] Dec 10 12:35:59 crc kubenswrapper[4689]: I1210 12:35:59.747457 4689 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-api-7b98b8dd66-xfv7n"] Dec 10 12:35:59 crc kubenswrapper[4689]: I1210 12:35:59.981429 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-67dc569cfc-xmxfj" Dec 10 12:36:00 crc kubenswrapper[4689]: I1210 12:36:00.040436 4689 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-55656d7776-js9xr"] Dec 10 12:36:00 crc kubenswrapper[4689]: I1210 12:36:00.041518 4689 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-55656d7776-js9xr" podUID="8fe1b3ff-4655-40d9-94cf-9d99dd1066db" containerName="neutron-api" containerID="cri-o://654f818c5ad0f2486c2a51555841b2054445f4dc23d95c09eaab6bd7d6551d5f" gracePeriod=30 Dec 10 12:36:00 crc kubenswrapper[4689]: I1210 12:36:00.041986 4689 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-55656d7776-js9xr" podUID="8fe1b3ff-4655-40d9-94cf-9d99dd1066db" containerName="neutron-httpd" containerID="cri-o://c25e71722696a1c06289352693782f2edb7668385834ffd8c13cc3ba2ee912fb" gracePeriod=30 Dec 10 12:36:00 crc kubenswrapper[4689]: I1210 12:36:00.554002 4689 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="26140423-1fa8-498b-b1da-487de5d0635f" path="/var/lib/kubelet/pods/26140423-1fa8-498b-b1da-487de5d0635f/volumes" Dec 10 12:36:00 crc kubenswrapper[4689]: I1210 12:36:00.712859 4689 generic.go:334] "Generic (PLEG): container finished" podID="8fe1b3ff-4655-40d9-94cf-9d99dd1066db" 
containerID="c25e71722696a1c06289352693782f2edb7668385834ffd8c13cc3ba2ee912fb" exitCode=0 Dec 10 12:36:00 crc kubenswrapper[4689]: I1210 12:36:00.712896 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-55656d7776-js9xr" event={"ID":"8fe1b3ff-4655-40d9-94cf-9d99dd1066db","Type":"ContainerDied","Data":"c25e71722696a1c06289352693782f2edb7668385834ffd8c13cc3ba2ee912fb"} Dec 10 12:36:01 crc kubenswrapper[4689]: I1210 12:36:01.630914 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Dec 10 12:36:01 crc kubenswrapper[4689]: E1210 12:36:01.631650 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="26140423-1fa8-498b-b1da-487de5d0635f" containerName="barbican-api-log" Dec 10 12:36:01 crc kubenswrapper[4689]: I1210 12:36:01.631673 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="26140423-1fa8-498b-b1da-487de5d0635f" containerName="barbican-api-log" Dec 10 12:36:01 crc kubenswrapper[4689]: E1210 12:36:01.631710 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="26140423-1fa8-498b-b1da-487de5d0635f" containerName="barbican-api" Dec 10 12:36:01 crc kubenswrapper[4689]: I1210 12:36:01.631719 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="26140423-1fa8-498b-b1da-487de5d0635f" containerName="barbican-api" Dec 10 12:36:01 crc kubenswrapper[4689]: I1210 12:36:01.631941 4689 memory_manager.go:354] "RemoveStaleState removing state" podUID="26140423-1fa8-498b-b1da-487de5d0635f" containerName="barbican-api" Dec 10 12:36:01 crc kubenswrapper[4689]: I1210 12:36:01.631993 4689 memory_manager.go:354] "RemoveStaleState removing state" podUID="26140423-1fa8-498b-b1da-487de5d0635f" containerName="barbican-api-log" Dec 10 12:36:01 crc kubenswrapper[4689]: I1210 12:36:01.632680 4689 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Dec 10 12:36:01 crc kubenswrapper[4689]: I1210 12:36:01.641529 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstackclient-openstackclient-dockercfg-jmtwk" Dec 10 12:36:01 crc kubenswrapper[4689]: I1210 12:36:01.641656 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config" Dec 10 12:36:01 crc kubenswrapper[4689]: I1210 12:36:01.641705 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-config-secret" Dec 10 12:36:01 crc kubenswrapper[4689]: I1210 12:36:01.644703 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Dec 10 12:36:01 crc kubenswrapper[4689]: I1210 12:36:01.747770 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9ba6aa57-fcd5-4e81-aeec-18115df06abb-combined-ca-bundle\") pod \"openstackclient\" (UID: \"9ba6aa57-fcd5-4e81-aeec-18115df06abb\") " pod="openstack/openstackclient" Dec 10 12:36:01 crc kubenswrapper[4689]: I1210 12:36:01.748078 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ndl8g\" (UniqueName: \"kubernetes.io/projected/9ba6aa57-fcd5-4e81-aeec-18115df06abb-kube-api-access-ndl8g\") pod \"openstackclient\" (UID: \"9ba6aa57-fcd5-4e81-aeec-18115df06abb\") " pod="openstack/openstackclient" Dec 10 12:36:01 crc kubenswrapper[4689]: I1210 12:36:01.748125 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/9ba6aa57-fcd5-4e81-aeec-18115df06abb-openstack-config-secret\") pod \"openstackclient\" (UID: \"9ba6aa57-fcd5-4e81-aeec-18115df06abb\") " pod="openstack/openstackclient" Dec 10 12:36:01 crc kubenswrapper[4689]: I1210 12:36:01.748451 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/9ba6aa57-fcd5-4e81-aeec-18115df06abb-openstack-config\") pod \"openstackclient\" (UID: \"9ba6aa57-fcd5-4e81-aeec-18115df06abb\") " pod="openstack/openstackclient" Dec 10 12:36:01 crc kubenswrapper[4689]: I1210 12:36:01.850228 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/9ba6aa57-fcd5-4e81-aeec-18115df06abb-openstack-config-secret\") pod \"openstackclient\" (UID: \"9ba6aa57-fcd5-4e81-aeec-18115df06abb\") " pod="openstack/openstackclient" Dec 10 12:36:01 crc kubenswrapper[4689]: I1210 12:36:01.850366 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/9ba6aa57-fcd5-4e81-aeec-18115df06abb-openstack-config\") pod \"openstackclient\" (UID: \"9ba6aa57-fcd5-4e81-aeec-18115df06abb\") " pod="openstack/openstackclient" Dec 10 12:36:01 crc kubenswrapper[4689]: I1210 12:36:01.850410 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9ba6aa57-fcd5-4e81-aeec-18115df06abb-combined-ca-bundle\") pod \"openstackclient\" (UID: \"9ba6aa57-fcd5-4e81-aeec-18115df06abb\") " pod="openstack/openstackclient" Dec 10 12:36:01 crc kubenswrapper[4689]: I1210 12:36:01.850497 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-ndl8g\" (UniqueName: \"kubernetes.io/projected/9ba6aa57-fcd5-4e81-aeec-18115df06abb-kube-api-access-ndl8g\") pod \"openstackclient\" (UID: \"9ba6aa57-fcd5-4e81-aeec-18115df06abb\") " pod="openstack/openstackclient" Dec 10 12:36:01 crc kubenswrapper[4689]: I1210 12:36:01.851357 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/9ba6aa57-fcd5-4e81-aeec-18115df06abb-openstack-config\") pod \"openstackclient\" (UID: \"9ba6aa57-fcd5-4e81-aeec-18115df06abb\") " pod="openstack/openstackclient" Dec 10 12:36:01 crc kubenswrapper[4689]: I1210 12:36:01.856097 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/9ba6aa57-fcd5-4e81-aeec-18115df06abb-openstack-config-secret\") pod \"openstackclient\" (UID: \"9ba6aa57-fcd5-4e81-aeec-18115df06abb\") " pod="openstack/openstackclient" Dec 10 12:36:01 crc kubenswrapper[4689]: I1210 12:36:01.862543 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9ba6aa57-fcd5-4e81-aeec-18115df06abb-combined-ca-bundle\") pod \"openstackclient\" (UID: \"9ba6aa57-fcd5-4e81-aeec-18115df06abb\") " pod="openstack/openstackclient" Dec 10 12:36:01 crc kubenswrapper[4689]: I1210 12:36:01.880674 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ndl8g\" (UniqueName: \"kubernetes.io/projected/9ba6aa57-fcd5-4e81-aeec-18115df06abb-kube-api-access-ndl8g\") pod \"openstackclient\" (UID: \"9ba6aa57-fcd5-4e81-aeec-18115df06abb\") " pod="openstack/openstackclient" Dec 10 12:36:01 crc kubenswrapper[4689]: I1210 12:36:01.955679 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Dec 10 12:36:02 crc kubenswrapper[4689]: I1210 12:36:02.741514 4689 generic.go:334] "Generic (PLEG): container finished" podID="8fe1b3ff-4655-40d9-94cf-9d99dd1066db" containerID="654f818c5ad0f2486c2a51555841b2054445f4dc23d95c09eaab6bd7d6551d5f" exitCode=0 Dec 10 12:36:02 crc kubenswrapper[4689]: I1210 12:36:02.741571 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-55656d7776-js9xr" event={"ID":"8fe1b3ff-4655-40d9-94cf-9d99dd1066db","Type":"ContainerDied","Data":"654f818c5ad0f2486c2a51555841b2054445f4dc23d95c09eaab6bd7d6551d5f"} Dec 10 12:36:02 crc kubenswrapper[4689]: I1210 12:36:02.817334 4689 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-7b98b8dd66-xfv7n" podUID="26140423-1fa8-498b-b1da-487de5d0635f" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.156:9311/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 10 12:36:02 crc kubenswrapper[4689]: I1210 12:36:02.817352 4689 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-7b98b8dd66-xfv7n" podUID="26140423-1fa8-498b-b1da-487de5d0635f" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.156:9311/healthcheck\": dial tcp 10.217.0.156:9311: i/o timeout (Client.Timeout exceeded while awaiting headers)" Dec 10 12:36:05 crc kubenswrapper[4689]: I1210 12:36:05.210836 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-proxy-799574f579-slqvw"] Dec 10 12:36:05 crc kubenswrapper[4689]: I1210 12:36:05.212860 4689 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-proxy-799574f579-slqvw" Dec 10 12:36:05 crc kubenswrapper[4689]: I1210 12:36:05.215391 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-internal-svc" Dec 10 12:36:05 crc kubenswrapper[4689]: I1210 12:36:05.216616 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Dec 10 12:36:05 crc kubenswrapper[4689]: I1210 12:36:05.217178 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-public-svc" Dec 10 12:36:05 crc kubenswrapper[4689]: I1210 12:36:05.234491 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-799574f579-slqvw"] Dec 10 12:36:05 crc kubenswrapper[4689]: I1210 12:36:05.315961 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/7a378a34-ca1b-41cb-85d9-97d124f4f6dc-etc-swift\") pod \"swift-proxy-799574f579-slqvw\" (UID: \"7a378a34-ca1b-41cb-85d9-97d124f4f6dc\") " pod="openstack/swift-proxy-799574f579-slqvw" Dec 10 12:36:05 crc kubenswrapper[4689]: I1210 12:36:05.316326 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7a378a34-ca1b-41cb-85d9-97d124f4f6dc-combined-ca-bundle\") pod \"swift-proxy-799574f579-slqvw\" (UID: \"7a378a34-ca1b-41cb-85d9-97d124f4f6dc\") " pod="openstack/swift-proxy-799574f579-slqvw" Dec 10 12:36:05 crc kubenswrapper[4689]: I1210 12:36:05.316454 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7a378a34-ca1b-41cb-85d9-97d124f4f6dc-run-httpd\") pod \"swift-proxy-799574f579-slqvw\" (UID: \"7a378a34-ca1b-41cb-85d9-97d124f4f6dc\") " pod="openstack/swift-proxy-799574f579-slqvw" Dec 10 12:36:05 crc kubenswrapper[4689]: I1210 12:36:05.316631 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7a378a34-ca1b-41cb-85d9-97d124f4f6dc-log-httpd\") pod \"swift-proxy-799574f579-slqvw\" (UID: \"7a378a34-ca1b-41cb-85d9-97d124f4f6dc\") " pod="openstack/swift-proxy-799574f579-slqvw" Dec 10 12:36:05 crc kubenswrapper[4689]: I1210 12:36:05.316776 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7a378a34-ca1b-41cb-85d9-97d124f4f6dc-config-data\") pod \"swift-proxy-799574f579-slqvw\" (UID: \"7a378a34-ca1b-41cb-85d9-97d124f4f6dc\") " pod="openstack/swift-proxy-799574f579-slqvw" Dec 10 12:36:05 crc kubenswrapper[4689]: I1210 12:36:05.316931 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7a378a34-ca1b-41cb-85d9-97d124f4f6dc-internal-tls-certs\") pod \"swift-proxy-799574f579-slqvw\" (UID: \"7a378a34-ca1b-41cb-85d9-97d124f4f6dc\") " pod="openstack/swift-proxy-799574f579-slqvw" Dec 10 12:36:05 crc kubenswrapper[4689]: I1210 12:36:05.317130 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7a378a34-ca1b-41cb-85d9-97d124f4f6dc-public-tls-certs\") pod \"swift-proxy-799574f579-slqvw\" (UID: \"7a378a34-ca1b-41cb-85d9-97d124f4f6dc\") " 
pod="openstack/swift-proxy-799574f579-slqvw" Dec 10 12:36:05 crc kubenswrapper[4689]: I1210 12:36:05.317256 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nzzmt\" (UniqueName: \"kubernetes.io/projected/7a378a34-ca1b-41cb-85d9-97d124f4f6dc-kube-api-access-nzzmt\") pod \"swift-proxy-799574f579-slqvw\" (UID: \"7a378a34-ca1b-41cb-85d9-97d124f4f6dc\") " pod="openstack/swift-proxy-799574f579-slqvw" Dec 10 12:36:05 crc kubenswrapper[4689]: I1210 12:36:05.419416 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/7a378a34-ca1b-41cb-85d9-97d124f4f6dc-etc-swift\") pod \"swift-proxy-799574f579-slqvw\" (UID: \"7a378a34-ca1b-41cb-85d9-97d124f4f6dc\") " pod="openstack/swift-proxy-799574f579-slqvw" Dec 10 12:36:05 crc kubenswrapper[4689]: I1210 12:36:05.419471 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7a378a34-ca1b-41cb-85d9-97d124f4f6dc-combined-ca-bundle\") pod \"swift-proxy-799574f579-slqvw\" (UID: \"7a378a34-ca1b-41cb-85d9-97d124f4f6dc\") " pod="openstack/swift-proxy-799574f579-slqvw" Dec 10 12:36:05 crc kubenswrapper[4689]: I1210 12:36:05.419518 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7a378a34-ca1b-41cb-85d9-97d124f4f6dc-run-httpd\") pod \"swift-proxy-799574f579-slqvw\" (UID: \"7a378a34-ca1b-41cb-85d9-97d124f4f6dc\") " pod="openstack/swift-proxy-799574f579-slqvw" Dec 10 12:36:05 crc kubenswrapper[4689]: I1210 12:36:05.419604 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7a378a34-ca1b-41cb-85d9-97d124f4f6dc-log-httpd\") pod \"swift-proxy-799574f579-slqvw\" (UID: \"7a378a34-ca1b-41cb-85d9-97d124f4f6dc\") " pod="openstack/swift-proxy-799574f579-slqvw" Dec 10 12:36:05 crc kubenswrapper[4689]: I1210 12:36:05.419656 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7a378a34-ca1b-41cb-85d9-97d124f4f6dc-config-data\") pod \"swift-proxy-799574f579-slqvw\" (UID: \"7a378a34-ca1b-41cb-85d9-97d124f4f6dc\") " pod="openstack/swift-proxy-799574f579-slqvw" Dec 10 12:36:05 crc kubenswrapper[4689]: I1210 12:36:05.419710 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7a378a34-ca1b-41cb-85d9-97d124f4f6dc-internal-tls-certs\") pod \"swift-proxy-799574f579-slqvw\" (UID: \"7a378a34-ca1b-41cb-85d9-97d124f4f6dc\") " pod="openstack/swift-proxy-799574f579-slqvw" Dec 10 12:36:05 crc kubenswrapper[4689]: I1210 12:36:05.420050 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7a378a34-ca1b-41cb-85d9-97d124f4f6dc-public-tls-certs\") pod \"swift-proxy-799574f579-slqvw\" (UID: \"7a378a34-ca1b-41cb-85d9-97d124f4f6dc\") " pod="openstack/swift-proxy-799574f579-slqvw" Dec 10 12:36:05 crc kubenswrapper[4689]: I1210 12:36:05.420179 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nzzmt\" (UniqueName: \"kubernetes.io/projected/7a378a34-ca1b-41cb-85d9-97d124f4f6dc-kube-api-access-nzzmt\") pod \"swift-proxy-799574f579-slqvw\" (UID: \"7a378a34-ca1b-41cb-85d9-97d124f4f6dc\") " 
pod="openstack/swift-proxy-799574f579-slqvw" Dec 10 12:36:05 crc kubenswrapper[4689]: I1210 12:36:05.420520 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7a378a34-ca1b-41cb-85d9-97d124f4f6dc-log-httpd\") pod \"swift-proxy-799574f579-slqvw\" (UID: \"7a378a34-ca1b-41cb-85d9-97d124f4f6dc\") " pod="openstack/swift-proxy-799574f579-slqvw" Dec 10 12:36:05 crc kubenswrapper[4689]: I1210 12:36:05.425384 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7a378a34-ca1b-41cb-85d9-97d124f4f6dc-public-tls-certs\") pod \"swift-proxy-799574f579-slqvw\" (UID: \"7a378a34-ca1b-41cb-85d9-97d124f4f6dc\") " pod="openstack/swift-proxy-799574f579-slqvw" Dec 10 12:36:05 crc kubenswrapper[4689]: I1210 12:36:05.425474 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7a378a34-ca1b-41cb-85d9-97d124f4f6dc-run-httpd\") pod \"swift-proxy-799574f579-slqvw\" (UID: \"7a378a34-ca1b-41cb-85d9-97d124f4f6dc\") " pod="openstack/swift-proxy-799574f579-slqvw" Dec 10 12:36:05 crc kubenswrapper[4689]: I1210 12:36:05.426715 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/7a378a34-ca1b-41cb-85d9-97d124f4f6dc-etc-swift\") pod \"swift-proxy-799574f579-slqvw\" (UID: \"7a378a34-ca1b-41cb-85d9-97d124f4f6dc\") " pod="openstack/swift-proxy-799574f579-slqvw" Dec 10 12:36:05 crc kubenswrapper[4689]: I1210 12:36:05.428189 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7a378a34-ca1b-41cb-85d9-97d124f4f6dc-config-data\") pod \"swift-proxy-799574f579-slqvw\" (UID: \"7a378a34-ca1b-41cb-85d9-97d124f4f6dc\") " pod="openstack/swift-proxy-799574f579-slqvw" Dec 10 12:36:05 crc kubenswrapper[4689]: I1210 12:36:05.428251 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7a378a34-ca1b-41cb-85d9-97d124f4f6dc-internal-tls-certs\") pod \"swift-proxy-799574f579-slqvw\" (UID: \"7a378a34-ca1b-41cb-85d9-97d124f4f6dc\") " pod="openstack/swift-proxy-799574f579-slqvw" Dec 10 12:36:05 crc kubenswrapper[4689]: I1210 12:36:05.436212 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7a378a34-ca1b-41cb-85d9-97d124f4f6dc-combined-ca-bundle\") pod \"swift-proxy-799574f579-slqvw\" (UID: \"7a378a34-ca1b-41cb-85d9-97d124f4f6dc\") " pod="openstack/swift-proxy-799574f579-slqvw" Dec 10 12:36:05 crc kubenswrapper[4689]: I1210 12:36:05.443642 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nzzmt\" (UniqueName: \"kubernetes.io/projected/7a378a34-ca1b-41cb-85d9-97d124f4f6dc-kube-api-access-nzzmt\") pod \"swift-proxy-799574f579-slqvw\" (UID: \"7a378a34-ca1b-41cb-85d9-97d124f4f6dc\") " pod="openstack/swift-proxy-799574f579-slqvw" Dec 10 12:36:05 crc kubenswrapper[4689]: I1210 12:36:05.532538 4689 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-proxy-799574f579-slqvw" Dec 10 12:36:08 crc kubenswrapper[4689]: I1210 12:36:08.802801 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1dfb9cf4-7f75-40c2-ade2-81a91be5ad12","Type":"ContainerDied","Data":"5a25e1c94167f5fa20abe190fe321119792df394ecfb710c414eb1c2ef7b4197"} Dec 10 12:36:08 crc kubenswrapper[4689]: I1210 12:36:08.803649 4689 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5a25e1c94167f5fa20abe190fe321119792df394ecfb710c414eb1c2ef7b4197" Dec 10 12:36:08 crc kubenswrapper[4689]: I1210 12:36:08.804237 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-dbcd8ff8b-j52fs" event={"ID":"8c984e40-d3ee-426e-ab51-c576bc699e11","Type":"ContainerStarted","Data":"1f57fa972329aff37afbb65292a1af14dac67c6ef53098396fa1bcc2ca4502fd"} Dec 10 12:36:08 crc kubenswrapper[4689]: I1210 12:36:08.840505 4689 scope.go:117] "RemoveContainer" containerID="1daa894f6e120e11933b91a544ebd3f573cd492f883b879daf51088fcdbe7b65" Dec 10 12:36:08 crc kubenswrapper[4689]: I1210 12:36:08.994346 4689 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 10 12:36:09 crc kubenswrapper[4689]: I1210 12:36:09.039706 4689 scope.go:117] "RemoveContainer" containerID="87a779a65ac7cc1b3b571e5eb66b3944ad7307e58627bcc58c0a24808135052c" Dec 10 12:36:09 crc kubenswrapper[4689]: I1210 12:36:09.094338 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1dfb9cf4-7f75-40c2-ade2-81a91be5ad12-log-httpd\") pod \"1dfb9cf4-7f75-40c2-ade2-81a91be5ad12\" (UID: \"1dfb9cf4-7f75-40c2-ade2-81a91be5ad12\") " Dec 10 12:36:09 crc kubenswrapper[4689]: I1210 12:36:09.094696 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hvxgw\" (UniqueName: \"kubernetes.io/projected/1dfb9cf4-7f75-40c2-ade2-81a91be5ad12-kube-api-access-hvxgw\") pod \"1dfb9cf4-7f75-40c2-ade2-81a91be5ad12\" (UID: \"1dfb9cf4-7f75-40c2-ade2-81a91be5ad12\") " Dec 10 12:36:09 crc kubenswrapper[4689]: I1210 12:36:09.094734 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1dfb9cf4-7f75-40c2-ade2-81a91be5ad12-scripts\") pod \"1dfb9cf4-7f75-40c2-ade2-81a91be5ad12\" (UID: \"1dfb9cf4-7f75-40c2-ade2-81a91be5ad12\") " Dec 10 12:36:09 crc kubenswrapper[4689]: I1210 12:36:09.094755 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/1dfb9cf4-7f75-40c2-ade2-81a91be5ad12-sg-core-conf-yaml\") pod \"1dfb9cf4-7f75-40c2-ade2-81a91be5ad12\" (UID: \"1dfb9cf4-7f75-40c2-ade2-81a91be5ad12\") " Dec 10 12:36:09 crc kubenswrapper[4689]: I1210 12:36:09.094754 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1dfb9cf4-7f75-40c2-ade2-81a91be5ad12-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "1dfb9cf4-7f75-40c2-ade2-81a91be5ad12" (UID: "1dfb9cf4-7f75-40c2-ade2-81a91be5ad12"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 10 12:36:09 crc kubenswrapper[4689]: I1210 12:36:09.094790 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1dfb9cf4-7f75-40c2-ade2-81a91be5ad12-combined-ca-bundle\") pod \"1dfb9cf4-7f75-40c2-ade2-81a91be5ad12\" (UID: \"1dfb9cf4-7f75-40c2-ade2-81a91be5ad12\") " Dec 10 12:36:09 crc kubenswrapper[4689]: I1210 12:36:09.094856 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1dfb9cf4-7f75-40c2-ade2-81a91be5ad12-config-data\") pod \"1dfb9cf4-7f75-40c2-ade2-81a91be5ad12\" (UID: \"1dfb9cf4-7f75-40c2-ade2-81a91be5ad12\") " Dec 10 12:36:09 crc kubenswrapper[4689]: I1210 12:36:09.094892 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1dfb9cf4-7f75-40c2-ade2-81a91be5ad12-run-httpd\") pod \"1dfb9cf4-7f75-40c2-ade2-81a91be5ad12\" (UID: \"1dfb9cf4-7f75-40c2-ade2-81a91be5ad12\") " Dec 10 12:36:09 crc kubenswrapper[4689]: I1210 12:36:09.095264 4689 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1dfb9cf4-7f75-40c2-ade2-81a91be5ad12-log-httpd\") on node \"crc\" DevicePath \"\"" Dec 10 12:36:09 crc kubenswrapper[4689]: I1210 12:36:09.095777 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1dfb9cf4-7f75-40c2-ade2-81a91be5ad12-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "1dfb9cf4-7f75-40c2-ade2-81a91be5ad12" (UID: "1dfb9cf4-7f75-40c2-ade2-81a91be5ad12"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 10 12:36:09 crc kubenswrapper[4689]: I1210 12:36:09.106723 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1dfb9cf4-7f75-40c2-ade2-81a91be5ad12-scripts" (OuterVolumeSpecName: "scripts") pod "1dfb9cf4-7f75-40c2-ade2-81a91be5ad12" (UID: "1dfb9cf4-7f75-40c2-ade2-81a91be5ad12"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 12:36:09 crc kubenswrapper[4689]: I1210 12:36:09.113026 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1dfb9cf4-7f75-40c2-ade2-81a91be5ad12-kube-api-access-hvxgw" (OuterVolumeSpecName: "kube-api-access-hvxgw") pod "1dfb9cf4-7f75-40c2-ade2-81a91be5ad12" (UID: "1dfb9cf4-7f75-40c2-ade2-81a91be5ad12"). InnerVolumeSpecName "kube-api-access-hvxgw". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 12:36:09 crc kubenswrapper[4689]: I1210 12:36:09.138423 4689 scope.go:117] "RemoveContainer" containerID="696d38016f7fdcb0e0495c2becf2de963a862979fa2db40147053961d1bdb384" Dec 10 12:36:09 crc kubenswrapper[4689]: I1210 12:36:09.159470 4689 scope.go:117] "RemoveContainer" containerID="eaecad246f102656b9ed24adfcd9348d2a81524c29339a6784e65b905fb4801e" Dec 10 12:36:09 crc kubenswrapper[4689]: I1210 12:36:09.196731 4689 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1dfb9cf4-7f75-40c2-ade2-81a91be5ad12-run-httpd\") on node \"crc\" DevicePath \"\"" Dec 10 12:36:09 crc kubenswrapper[4689]: I1210 12:36:09.196761 4689 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hvxgw\" (UniqueName: \"kubernetes.io/projected/1dfb9cf4-7f75-40c2-ade2-81a91be5ad12-kube-api-access-hvxgw\") on node \"crc\" DevicePath \"\"" Dec 10 12:36:09 crc kubenswrapper[4689]: I1210 12:36:09.196770 4689 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1dfb9cf4-7f75-40c2-ade2-81a91be5ad12-scripts\") on node \"crc\" DevicePath \"\"" Dec 10 12:36:09 crc kubenswrapper[4689]: I1210 12:36:09.199756 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1dfb9cf4-7f75-40c2-ade2-81a91be5ad12-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "1dfb9cf4-7f75-40c2-ade2-81a91be5ad12" (UID: "1dfb9cf4-7f75-40c2-ade2-81a91be5ad12"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 12:36:09 crc kubenswrapper[4689]: I1210 12:36:09.230299 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1dfb9cf4-7f75-40c2-ade2-81a91be5ad12-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1dfb9cf4-7f75-40c2-ade2-81a91be5ad12" (UID: "1dfb9cf4-7f75-40c2-ade2-81a91be5ad12"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 12:36:09 crc kubenswrapper[4689]: I1210 12:36:09.234846 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-inspector-db-sync-9zldg"] Dec 10 12:36:09 crc kubenswrapper[4689]: I1210 12:36:09.236673 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1dfb9cf4-7f75-40c2-ade2-81a91be5ad12-config-data" (OuterVolumeSpecName: "config-data") pod "1dfb9cf4-7f75-40c2-ade2-81a91be5ad12" (UID: "1dfb9cf4-7f75-40c2-ade2-81a91be5ad12"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 12:36:09 crc kubenswrapper[4689]: I1210 12:36:09.298401 4689 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/1dfb9cf4-7f75-40c2-ade2-81a91be5ad12-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Dec 10 12:36:09 crc kubenswrapper[4689]: I1210 12:36:09.298434 4689 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1dfb9cf4-7f75-40c2-ade2-81a91be5ad12-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 10 12:36:09 crc kubenswrapper[4689]: I1210 12:36:09.298444 4689 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1dfb9cf4-7f75-40c2-ade2-81a91be5ad12-config-data\") on node \"crc\" DevicePath \"\"" Dec 10 12:36:09 crc kubenswrapper[4689]: W1210 12:36:09.338071 4689 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3033a7b9_9374_47c4_89a2_188204ccd941.slice/crio-5e9c385bf15b979e1e63a39a730472e1af17757293c91a3c065571c01e09d8d4 WatchSource:0}: Error finding container 5e9c385bf15b979e1e63a39a730472e1af17757293c91a3c065571c01e09d8d4: Status 404 returned error can't find the container with id 5e9c385bf15b979e1e63a39a730472e1af17757293c91a3c065571c01e09d8d4 Dec 10 12:36:09 crc kubenswrapper[4689]: I1210 12:36:09.348418 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Dec 10 12:36:09 crc kubenswrapper[4689]: I1210 12:36:09.460404 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Dec 10 12:36:09 crc kubenswrapper[4689]: W1210 12:36:09.461931 4689 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8e8f74b8_b74a_42eb_97ec_28680f9999a4.slice/crio-da4a8c3ec0d5c5a5b62e63f1862aa41ab0e0b6294860c5ad46059f37c5016065 WatchSource:0}: Error finding container da4a8c3ec0d5c5a5b62e63f1862aa41ab0e0b6294860c5ad46059f37c5016065: Status 404 returned error can't find the container with id da4a8c3ec0d5c5a5b62e63f1862aa41ab0e0b6294860c5ad46059f37c5016065 Dec 10 12:36:09 crc kubenswrapper[4689]: I1210 12:36:09.574026 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Dec 10 12:36:09 crc kubenswrapper[4689]: I1210 12:36:09.613678 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-799574f579-slqvw"] Dec 10 12:36:09 crc kubenswrapper[4689]: E1210 12:36:09.620023 4689 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/ironic-python-agent:current-podified" Dec 10 12:36:09 crc kubenswrapper[4689]: E1210 12:36:09.620208 4689 kuberuntime_manager.go:1274] "Unhandled Error" err="init container 
&Container{Name:ironic-python-agent-init,Image:quay.io/podified-antelope-centos9/ironic-python-agent:current-podified,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DEST_DIR,Value:/var/lib/ironic/httpboot,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/config-data/default,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-merged,ReadOnly:false,MountPath:/var/lib/config-data/merged,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:etc-podinfo,ReadOnly:false,MountPath:/etc/podinfo,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:var-lib-ironic,ReadOnly:false,MountPath:/var/lib/ironic,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-custom,ReadOnly:true,MountPath:/var/lib/config-data/custom,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5h4mz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ironic-conductor-0_openstack(23e46f1d-5919-4baa-aeef-1364104b63fb): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 10 12:36:09 crc kubenswrapper[4689]: E1210 12:36:09.621296 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ironic-python-agent-init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/ironic-conductor-0" podUID="23e46f1d-5919-4baa-aeef-1364104b63fb" Dec 10 12:36:09 crc kubenswrapper[4689]: W1210 12:36:09.647688 4689 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7a378a34_ca1b_41cb_85d9_97d124f4f6dc.slice/crio-210a30a7936102a51c49423fec4858d4020b15a7952fe37b7147dc2c7db6b3ec WatchSource:0}: Error finding container 210a30a7936102a51c49423fec4858d4020b15a7952fe37b7147dc2c7db6b3ec: Status 404 returned error can't find the container with id 210a30a7936102a51c49423fec4858d4020b15a7952fe37b7147dc2c7db6b3ec Dec 10 12:36:09 crc kubenswrapper[4689]: I1210 12:36:09.908670 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" 
event={"ID":"3033a7b9-9374-47c4-89a2-188204ccd941","Type":"ContainerStarted","Data":"5e9c385bf15b979e1e63a39a730472e1af17757293c91a3c065571c01e09d8d4"} Dec 10 12:36:09 crc kubenswrapper[4689]: I1210 12:36:09.915287 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-dbcd8ff8b-j52fs" event={"ID":"8c984e40-d3ee-426e-ab51-c576bc699e11","Type":"ContainerStarted","Data":"440e5e00d237ea7a7eb4c36584e34de60f309b21f140cdf45aa35bf583087c76"} Dec 10 12:36:09 crc kubenswrapper[4689]: I1210 12:36:09.917048 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-799574f579-slqvw" event={"ID":"7a378a34-ca1b-41cb-85d9-97d124f4f6dc","Type":"ContainerStarted","Data":"210a30a7936102a51c49423fec4858d4020b15a7952fe37b7147dc2c7db6b3ec"} Dec 10 12:36:09 crc kubenswrapper[4689]: I1210 12:36:09.923204 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-6d98945548-q82m5" event={"ID":"2d1a4d1f-bb78-41c4-8f64-46f7f8f03df3","Type":"ContainerStarted","Data":"15ed75b995654c29cca38d687dc7faaf5d495d0d44110cecb9d87ce282a9be40"} Dec 10 12:36:09 crc kubenswrapper[4689]: I1210 12:36:09.955823 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-db-sync-9zldg" event={"ID":"0ebb276c-ffb4-490e-bf4b-c55c0c49aa43","Type":"ContainerStarted","Data":"ddf542afc3a7bcc1ba66e53a8ccbb344866e6347bd910a195a8cab65caf115c8"} Dec 10 12:36:09 crc kubenswrapper[4689]: I1210 12:36:09.961402 4689 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-55656d7776-js9xr" Dec 10 12:36:09 crc kubenswrapper[4689]: I1210 12:36:09.964558 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-neutron-agent-6f4566d7bf-hkx2g" event={"ID":"234f8267-1974-4f9e-9d13-8a239ff2660c","Type":"ContainerStarted","Data":"604e75cab9f94d67e8f119e9ac3f5bae4190da6dadf8fd9bece4d7a84522932b"} Dec 10 12:36:09 crc kubenswrapper[4689]: I1210 12:36:09.965447 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ironic-neutron-agent-6f4566d7bf-hkx2g" Dec 10 12:36:09 crc kubenswrapper[4689]: I1210 12:36:09.967752 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"9ba6aa57-fcd5-4e81-aeec-18115df06abb","Type":"ContainerStarted","Data":"27ed19ad61e10bd2a41d7d7621beb7e8fe1927e47038a3783759e430ddd1395e"} Dec 10 12:36:09 crc kubenswrapper[4689]: I1210 12:36:09.979220 4689 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 10 12:36:09 crc kubenswrapper[4689]: I1210 12:36:09.980930 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"8e8f74b8-b74a-42eb-97ec-28680f9999a4","Type":"ContainerStarted","Data":"da4a8c3ec0d5c5a5b62e63f1862aa41ab0e0b6294860c5ad46059f37c5016065"} Dec 10 12:36:09 crc kubenswrapper[4689]: E1210 12:36:09.982411 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ironic-python-agent-init\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/ironic-python-agent:current-podified\\\"\"" pod="openstack/ironic-conductor-0" podUID="23e46f1d-5919-4baa-aeef-1364104b63fb" Dec 10 12:36:09 crc kubenswrapper[4689]: I1210 12:36:09.997628 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ironic-neutron-agent-6f4566d7bf-hkx2g" podStartSLOduration=15.523144219 podStartE2EDuration="29.997613284s" podCreationTimestamp="2025-12-10 12:35:40 +0000 UTC" firstStartedPulling="2025-12-10 12:35:41.706596012 +0000 UTC m=+1209.494677150" lastFinishedPulling="2025-12-10 12:35:56.181065077 +0000 UTC m=+1223.969146215" observedRunningTime="2025-12-10 12:36:09.990192581 +0000 UTC m=+1237.778273719" watchObservedRunningTime="2025-12-10 12:36:09.997613284 +0000 UTC m=+1237.785694422" Dec 10 12:36:10 crc kubenswrapper[4689]: I1210 12:36:10.089418 4689 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 10 12:36:10 crc kubenswrapper[4689]: I1210 12:36:10.105311 4689 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Dec 10 12:36:10 crc kubenswrapper[4689]: I1210 12:36:10.121169 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Dec 10 12:36:10 crc kubenswrapper[4689]: E1210 12:36:10.122819 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1dfb9cf4-7f75-40c2-ade2-81a91be5ad12" containerName="ceilometer-notification-agent" Dec 10 12:36:10 crc kubenswrapper[4689]: I1210 12:36:10.122840 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="1dfb9cf4-7f75-40c2-ade2-81a91be5ad12" containerName="ceilometer-notification-agent" Dec 10 12:36:10 crc kubenswrapper[4689]: E1210 12:36:10.122853 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1dfb9cf4-7f75-40c2-ade2-81a91be5ad12" containerName="sg-core" Dec 10 12:36:10 crc kubenswrapper[4689]: I1210 12:36:10.122861 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="1dfb9cf4-7f75-40c2-ade2-81a91be5ad12" containerName="sg-core" Dec 10 12:36:10 crc kubenswrapper[4689]: E1210 12:36:10.122875 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8fe1b3ff-4655-40d9-94cf-9d99dd1066db" containerName="neutron-httpd" Dec 10 12:36:10 crc kubenswrapper[4689]: I1210 12:36:10.124397 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="8fe1b3ff-4655-40d9-94cf-9d99dd1066db" containerName="neutron-httpd" Dec 10 12:36:10 crc kubenswrapper[4689]: E1210 12:36:10.124436 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8fe1b3ff-4655-40d9-94cf-9d99dd1066db" containerName="neutron-api" Dec 10 12:36:10 crc kubenswrapper[4689]: I1210 12:36:10.124444 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="8fe1b3ff-4655-40d9-94cf-9d99dd1066db" containerName="neutron-api" Dec 10 12:36:10 crc kubenswrapper[4689]: E1210 12:36:10.124459 4689 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="1dfb9cf4-7f75-40c2-ade2-81a91be5ad12" containerName="ceilometer-central-agent" Dec 10 12:36:10 crc kubenswrapper[4689]: I1210 12:36:10.124465 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="1dfb9cf4-7f75-40c2-ade2-81a91be5ad12" containerName="ceilometer-central-agent" Dec 10 12:36:10 crc kubenswrapper[4689]: I1210 12:36:10.124656 4689 memory_manager.go:354] "RemoveStaleState removing state" podUID="1dfb9cf4-7f75-40c2-ade2-81a91be5ad12" containerName="ceilometer-notification-agent" Dec 10 12:36:10 crc kubenswrapper[4689]: I1210 12:36:10.124666 4689 memory_manager.go:354] "RemoveStaleState removing state" podUID="1dfb9cf4-7f75-40c2-ade2-81a91be5ad12" containerName="sg-core" Dec 10 12:36:10 crc kubenswrapper[4689]: I1210 12:36:10.124679 4689 memory_manager.go:354] "RemoveStaleState removing state" podUID="1dfb9cf4-7f75-40c2-ade2-81a91be5ad12" containerName="ceilometer-central-agent" Dec 10 12:36:10 crc kubenswrapper[4689]: I1210 12:36:10.124690 4689 memory_manager.go:354] "RemoveStaleState removing state" podUID="8fe1b3ff-4655-40d9-94cf-9d99dd1066db" containerName="neutron-api" Dec 10 12:36:10 crc kubenswrapper[4689]: I1210 12:36:10.124700 4689 memory_manager.go:354] "RemoveStaleState removing state" podUID="8fe1b3ff-4655-40d9-94cf-9d99dd1066db" containerName="neutron-httpd" Dec 10 12:36:10 crc kubenswrapper[4689]: I1210 12:36:10.127789 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 10 12:36:10 crc kubenswrapper[4689]: I1210 12:36:10.131804 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/8fe1b3ff-4655-40d9-94cf-9d99dd1066db-config\") pod \"8fe1b3ff-4655-40d9-94cf-9d99dd1066db\" (UID: \"8fe1b3ff-4655-40d9-94cf-9d99dd1066db\") " Dec 10 12:36:10 crc kubenswrapper[4689]: I1210 12:36:10.131944 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/8fe1b3ff-4655-40d9-94cf-9d99dd1066db-ovndb-tls-certs\") pod \"8fe1b3ff-4655-40d9-94cf-9d99dd1066db\" (UID: \"8fe1b3ff-4655-40d9-94cf-9d99dd1066db\") " Dec 10 12:36:10 crc kubenswrapper[4689]: I1210 12:36:10.132032 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8fe1b3ff-4655-40d9-94cf-9d99dd1066db-combined-ca-bundle\") pod \"8fe1b3ff-4655-40d9-94cf-9d99dd1066db\" (UID: \"8fe1b3ff-4655-40d9-94cf-9d99dd1066db\") " Dec 10 12:36:10 crc kubenswrapper[4689]: I1210 12:36:10.132154 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/8fe1b3ff-4655-40d9-94cf-9d99dd1066db-httpd-config\") pod \"8fe1b3ff-4655-40d9-94cf-9d99dd1066db\" (UID: \"8fe1b3ff-4655-40d9-94cf-9d99dd1066db\") " Dec 10 12:36:10 crc kubenswrapper[4689]: I1210 12:36:10.132235 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cg8lv\" (UniqueName: \"kubernetes.io/projected/8fe1b3ff-4655-40d9-94cf-9d99dd1066db-kube-api-access-cg8lv\") pod \"8fe1b3ff-4655-40d9-94cf-9d99dd1066db\" (UID: \"8fe1b3ff-4655-40d9-94cf-9d99dd1066db\") " Dec 10 12:36:10 crc kubenswrapper[4689]: I1210 12:36:10.135262 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Dec 10 12:36:10 crc kubenswrapper[4689]: I1210 12:36:10.135317 4689 reflector.go:368] Caches populated for *v1.Secret from 
object-"openstack"/"ceilometer-config-data" Dec 10 12:36:10 crc kubenswrapper[4689]: I1210 12:36:10.137664 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8fe1b3ff-4655-40d9-94cf-9d99dd1066db-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "8fe1b3ff-4655-40d9-94cf-9d99dd1066db" (UID: "8fe1b3ff-4655-40d9-94cf-9d99dd1066db"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 12:36:10 crc kubenswrapper[4689]: I1210 12:36:10.148170 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 10 12:36:10 crc kubenswrapper[4689]: I1210 12:36:10.151019 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8fe1b3ff-4655-40d9-94cf-9d99dd1066db-kube-api-access-cg8lv" (OuterVolumeSpecName: "kube-api-access-cg8lv") pod "8fe1b3ff-4655-40d9-94cf-9d99dd1066db" (UID: "8fe1b3ff-4655-40d9-94cf-9d99dd1066db"). InnerVolumeSpecName "kube-api-access-cg8lv". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 12:36:10 crc kubenswrapper[4689]: I1210 12:36:10.208663 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8fe1b3ff-4655-40d9-94cf-9d99dd1066db-config" (OuterVolumeSpecName: "config") pod "8fe1b3ff-4655-40d9-94cf-9d99dd1066db" (UID: "8fe1b3ff-4655-40d9-94cf-9d99dd1066db"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 12:36:10 crc kubenswrapper[4689]: I1210 12:36:10.210899 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8fe1b3ff-4655-40d9-94cf-9d99dd1066db-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8fe1b3ff-4655-40d9-94cf-9d99dd1066db" (UID: "8fe1b3ff-4655-40d9-94cf-9d99dd1066db"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 12:36:10 crc kubenswrapper[4689]: I1210 12:36:10.226297 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8fe1b3ff-4655-40d9-94cf-9d99dd1066db-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "8fe1b3ff-4655-40d9-94cf-9d99dd1066db" (UID: "8fe1b3ff-4655-40d9-94cf-9d99dd1066db"). InnerVolumeSpecName "ovndb-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 12:36:10 crc kubenswrapper[4689]: I1210 12:36:10.234286 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d23c5773-27cf-4f04-9e99-4c47b1131134-scripts\") pod \"ceilometer-0\" (UID: \"d23c5773-27cf-4f04-9e99-4c47b1131134\") " pod="openstack/ceilometer-0" Dec 10 12:36:10 crc kubenswrapper[4689]: I1210 12:36:10.234335 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d23c5773-27cf-4f04-9e99-4c47b1131134-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"d23c5773-27cf-4f04-9e99-4c47b1131134\") " pod="openstack/ceilometer-0" Dec 10 12:36:10 crc kubenswrapper[4689]: I1210 12:36:10.234417 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8hjj2\" (UniqueName: \"kubernetes.io/projected/d23c5773-27cf-4f04-9e99-4c47b1131134-kube-api-access-8hjj2\") pod \"ceilometer-0\" (UID: \"d23c5773-27cf-4f04-9e99-4c47b1131134\") " pod="openstack/ceilometer-0" Dec 10 12:36:10 crc kubenswrapper[4689]: I1210 12:36:10.234455 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d23c5773-27cf-4f04-9e99-4c47b1131134-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"d23c5773-27cf-4f04-9e99-4c47b1131134\") " pod="openstack/ceilometer-0" Dec 10 12:36:10 crc kubenswrapper[4689]: I1210 12:36:10.234478 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d23c5773-27cf-4f04-9e99-4c47b1131134-log-httpd\") pod \"ceilometer-0\" (UID: \"d23c5773-27cf-4f04-9e99-4c47b1131134\") " pod="openstack/ceilometer-0" Dec 10 12:36:10 crc kubenswrapper[4689]: I1210 12:36:10.234773 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d23c5773-27cf-4f04-9e99-4c47b1131134-config-data\") pod \"ceilometer-0\" (UID: \"d23c5773-27cf-4f04-9e99-4c47b1131134\") " pod="openstack/ceilometer-0" Dec 10 12:36:10 crc kubenswrapper[4689]: I1210 12:36:10.234818 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d23c5773-27cf-4f04-9e99-4c47b1131134-run-httpd\") pod \"ceilometer-0\" (UID: \"d23c5773-27cf-4f04-9e99-4c47b1131134\") " pod="openstack/ceilometer-0" Dec 10 12:36:10 crc kubenswrapper[4689]: I1210 12:36:10.235008 4689 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/8fe1b3ff-4655-40d9-94cf-9d99dd1066db-httpd-config\") on node \"crc\" DevicePath \"\"" Dec 10 12:36:10 crc kubenswrapper[4689]: I1210 12:36:10.235030 4689 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cg8lv\" (UniqueName: \"kubernetes.io/projected/8fe1b3ff-4655-40d9-94cf-9d99dd1066db-kube-api-access-cg8lv\") on node \"crc\" DevicePath \"\"" Dec 10 12:36:10 crc kubenswrapper[4689]: I1210 12:36:10.235043 4689 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/8fe1b3ff-4655-40d9-94cf-9d99dd1066db-config\") on node \"crc\" DevicePath \"\"" Dec 10 12:36:10 crc kubenswrapper[4689]: I1210 12:36:10.235052 4689 reconciler_common.go:293] "Volume 
detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/8fe1b3ff-4655-40d9-94cf-9d99dd1066db-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 10 12:36:10 crc kubenswrapper[4689]: I1210 12:36:10.235060 4689 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8fe1b3ff-4655-40d9-94cf-9d99dd1066db-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 10 12:36:10 crc kubenswrapper[4689]: I1210 12:36:10.336332 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d23c5773-27cf-4f04-9e99-4c47b1131134-scripts\") pod \"ceilometer-0\" (UID: \"d23c5773-27cf-4f04-9e99-4c47b1131134\") " pod="openstack/ceilometer-0" Dec 10 12:36:10 crc kubenswrapper[4689]: I1210 12:36:10.336380 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d23c5773-27cf-4f04-9e99-4c47b1131134-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"d23c5773-27cf-4f04-9e99-4c47b1131134\") " pod="openstack/ceilometer-0" Dec 10 12:36:10 crc kubenswrapper[4689]: I1210 12:36:10.336465 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8hjj2\" (UniqueName: \"kubernetes.io/projected/d23c5773-27cf-4f04-9e99-4c47b1131134-kube-api-access-8hjj2\") pod \"ceilometer-0\" (UID: \"d23c5773-27cf-4f04-9e99-4c47b1131134\") " pod="openstack/ceilometer-0" Dec 10 12:36:10 crc kubenswrapper[4689]: I1210 12:36:10.336504 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d23c5773-27cf-4f04-9e99-4c47b1131134-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"d23c5773-27cf-4f04-9e99-4c47b1131134\") " pod="openstack/ceilometer-0" Dec 10 12:36:10 crc kubenswrapper[4689]: I1210 12:36:10.336526 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d23c5773-27cf-4f04-9e99-4c47b1131134-log-httpd\") pod \"ceilometer-0\" (UID: \"d23c5773-27cf-4f04-9e99-4c47b1131134\") " pod="openstack/ceilometer-0" Dec 10 12:36:10 crc kubenswrapper[4689]: I1210 12:36:10.336606 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d23c5773-27cf-4f04-9e99-4c47b1131134-config-data\") pod \"ceilometer-0\" (UID: \"d23c5773-27cf-4f04-9e99-4c47b1131134\") " pod="openstack/ceilometer-0" Dec 10 12:36:10 crc kubenswrapper[4689]: I1210 12:36:10.336635 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d23c5773-27cf-4f04-9e99-4c47b1131134-run-httpd\") pod \"ceilometer-0\" (UID: \"d23c5773-27cf-4f04-9e99-4c47b1131134\") " pod="openstack/ceilometer-0" Dec 10 12:36:10 crc kubenswrapper[4689]: I1210 12:36:10.337161 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d23c5773-27cf-4f04-9e99-4c47b1131134-run-httpd\") pod \"ceilometer-0\" (UID: \"d23c5773-27cf-4f04-9e99-4c47b1131134\") " pod="openstack/ceilometer-0" Dec 10 12:36:10 crc kubenswrapper[4689]: I1210 12:36:10.339758 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d23c5773-27cf-4f04-9e99-4c47b1131134-log-httpd\") pod \"ceilometer-0\" (UID: 
\"d23c5773-27cf-4f04-9e99-4c47b1131134\") " pod="openstack/ceilometer-0" Dec 10 12:36:10 crc kubenswrapper[4689]: I1210 12:36:10.339794 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d23c5773-27cf-4f04-9e99-4c47b1131134-scripts\") pod \"ceilometer-0\" (UID: \"d23c5773-27cf-4f04-9e99-4c47b1131134\") " pod="openstack/ceilometer-0" Dec 10 12:36:10 crc kubenswrapper[4689]: I1210 12:36:10.340161 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d23c5773-27cf-4f04-9e99-4c47b1131134-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"d23c5773-27cf-4f04-9e99-4c47b1131134\") " pod="openstack/ceilometer-0" Dec 10 12:36:10 crc kubenswrapper[4689]: I1210 12:36:10.340236 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d23c5773-27cf-4f04-9e99-4c47b1131134-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"d23c5773-27cf-4f04-9e99-4c47b1131134\") " pod="openstack/ceilometer-0" Dec 10 12:36:10 crc kubenswrapper[4689]: I1210 12:36:10.342888 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d23c5773-27cf-4f04-9e99-4c47b1131134-config-data\") pod \"ceilometer-0\" (UID: \"d23c5773-27cf-4f04-9e99-4c47b1131134\") " pod="openstack/ceilometer-0" Dec 10 12:36:10 crc kubenswrapper[4689]: I1210 12:36:10.359366 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8hjj2\" (UniqueName: \"kubernetes.io/projected/d23c5773-27cf-4f04-9e99-4c47b1131134-kube-api-access-8hjj2\") pod \"ceilometer-0\" (UID: \"d23c5773-27cf-4f04-9e99-4c47b1131134\") " pod="openstack/ceilometer-0" Dec 10 12:36:10 crc kubenswrapper[4689]: I1210 12:36:10.519165 4689 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1dfb9cf4-7f75-40c2-ade2-81a91be5ad12" path="/var/lib/kubelet/pods/1dfb9cf4-7f75-40c2-ade2-81a91be5ad12/volumes" Dec 10 12:36:10 crc kubenswrapper[4689]: I1210 12:36:10.535618 4689 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 10 12:36:11 crc kubenswrapper[4689]: I1210 12:36:11.023208 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"8e8f74b8-b74a-42eb-97ec-28680f9999a4","Type":"ContainerStarted","Data":"7f0c4c83132a27ef9d3bdc9cd8653eb3fae84a874a96792bf3fcadf5a26dec20"} Dec 10 12:36:11 crc kubenswrapper[4689]: I1210 12:36:11.027422 4689 generic.go:334] "Generic (PLEG): container finished" podID="8c984e40-d3ee-426e-ab51-c576bc699e11" containerID="440e5e00d237ea7a7eb4c36584e34de60f309b21f140cdf45aa35bf583087c76" exitCode=0 Dec 10 12:36:11 crc kubenswrapper[4689]: I1210 12:36:11.027476 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-dbcd8ff8b-j52fs" event={"ID":"8c984e40-d3ee-426e-ab51-c576bc699e11","Type":"ContainerDied","Data":"440e5e00d237ea7a7eb4c36584e34de60f309b21f140cdf45aa35bf583087c76"} Dec 10 12:36:11 crc kubenswrapper[4689]: I1210 12:36:11.035252 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-799574f579-slqvw" event={"ID":"7a378a34-ca1b-41cb-85d9-97d124f4f6dc","Type":"ContainerStarted","Data":"e2e13cfd7d68a8f7bca65708d1337c99ae17b21972f66aea3c214e53c7af9f5a"} Dec 10 12:36:11 crc kubenswrapper[4689]: I1210 12:36:11.035287 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-799574f579-slqvw" event={"ID":"7a378a34-ca1b-41cb-85d9-97d124f4f6dc","Type":"ContainerStarted","Data":"6fe76c6f41b828488fe59fd944a568bed947e64cac403f9cfc8da501e0748ee9"} Dec 10 12:36:11 crc kubenswrapper[4689]: I1210 12:36:11.035785 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-799574f579-slqvw" Dec 10 12:36:11 crc kubenswrapper[4689]: I1210 12:36:11.035807 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-799574f579-slqvw" Dec 10 12:36:11 crc kubenswrapper[4689]: I1210 12:36:11.039060 4689 generic.go:334] "Generic (PLEG): container finished" podID="2d1a4d1f-bb78-41c4-8f64-46f7f8f03df3" containerID="15ed75b995654c29cca38d687dc7faaf5d495d0d44110cecb9d87ce282a9be40" exitCode=0 Dec 10 12:36:11 crc kubenswrapper[4689]: I1210 12:36:11.039098 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-6d98945548-q82m5" event={"ID":"2d1a4d1f-bb78-41c4-8f64-46f7f8f03df3","Type":"ContainerDied","Data":"15ed75b995654c29cca38d687dc7faaf5d495d0d44110cecb9d87ce282a9be40"} Dec 10 12:36:11 crc kubenswrapper[4689]: I1210 12:36:11.043206 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-55656d7776-js9xr" event={"ID":"8fe1b3ff-4655-40d9-94cf-9d99dd1066db","Type":"ContainerDied","Data":"80756df373d3fef3115f7bdd5be484651668d3837702f8d8782067d49fe78d86"} Dec 10 12:36:11 crc kubenswrapper[4689]: I1210 12:36:11.043235 4689 scope.go:117] "RemoveContainer" containerID="c25e71722696a1c06289352693782f2edb7668385834ffd8c13cc3ba2ee912fb" Dec 10 12:36:11 crc kubenswrapper[4689]: I1210 12:36:11.043326 4689 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-55656d7776-js9xr" Dec 10 12:36:11 crc kubenswrapper[4689]: I1210 12:36:11.048487 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"3033a7b9-9374-47c4-89a2-188204ccd941","Type":"ContainerStarted","Data":"31703c69d89daf9cb1b76ea2d4d00239d868ca904c87741443fb33e8dbcdfdf7"} Dec 10 12:36:11 crc kubenswrapper[4689]: I1210 12:36:11.088762 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-proxy-799574f579-slqvw" podStartSLOduration=6.088742134 podStartE2EDuration="6.088742134s" podCreationTimestamp="2025-12-10 12:36:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 12:36:11.074282746 +0000 UTC m=+1238.862363894" watchObservedRunningTime="2025-12-10 12:36:11.088742134 +0000 UTC m=+1238.876823272" Dec 10 12:36:11 crc kubenswrapper[4689]: I1210 12:36:11.141106 4689 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-55656d7776-js9xr"] Dec 10 12:36:11 crc kubenswrapper[4689]: I1210 12:36:11.146290 4689 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-55656d7776-js9xr"] Dec 10 12:36:11 crc kubenswrapper[4689]: I1210 12:36:11.147684 4689 scope.go:117] "RemoveContainer" containerID="654f818c5ad0f2486c2a51555841b2054445f4dc23d95c09eaab6bd7d6551d5f" Dec 10 12:36:11 crc kubenswrapper[4689]: I1210 12:36:11.154290 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 10 12:36:11 crc kubenswrapper[4689]: W1210 12:36:11.190785 4689 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd23c5773_27cf_4f04_9e99_4c47b1131134.slice/crio-853708b40cfed23f2d2ec88357783e81357400b9f5ffb12af015fbe75df5240c WatchSource:0}: Error finding container 853708b40cfed23f2d2ec88357783e81357400b9f5ffb12af015fbe75df5240c: Status 404 returned error can't find the container with id 853708b40cfed23f2d2ec88357783e81357400b9f5ffb12af015fbe75df5240c Dec 10 12:36:12 crc kubenswrapper[4689]: I1210 12:36:12.065100 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d23c5773-27cf-4f04-9e99-4c47b1131134","Type":"ContainerStarted","Data":"853708b40cfed23f2d2ec88357783e81357400b9f5ffb12af015fbe75df5240c"} Dec 10 12:36:12 crc kubenswrapper[4689]: I1210 12:36:12.070614 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-dbcd8ff8b-j52fs" event={"ID":"8c984e40-d3ee-426e-ab51-c576bc699e11","Type":"ContainerStarted","Data":"af90253fd7e86e01cf0771a5aa3fff7368533dde2ea8f90572ca1e2de35143eb"} Dec 10 12:36:12 crc kubenswrapper[4689]: I1210 12:36:12.070665 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-dbcd8ff8b-j52fs" event={"ID":"8c984e40-d3ee-426e-ab51-c576bc699e11","Type":"ContainerStarted","Data":"7f124c35d7702582f3a04c0f0836bcdceac4a81d8ece1835f1435e087ada8a9e"} Dec 10 12:36:12 crc kubenswrapper[4689]: I1210 12:36:12.072256 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ironic-dbcd8ff8b-j52fs" Dec 10 12:36:12 crc kubenswrapper[4689]: I1210 12:36:12.085359 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-6d98945548-q82m5" event={"ID":"2d1a4d1f-bb78-41c4-8f64-46f7f8f03df3","Type":"ContainerStarted","Data":"d7a9d6be8ae094340acbe17d0943794bac930e48f17e0b5905006c969a35142e"} Dec 10 12:36:12 crc 
kubenswrapper[4689]: I1210 12:36:12.085407 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-6d98945548-q82m5" event={"ID":"2d1a4d1f-bb78-41c4-8f64-46f7f8f03df3","Type":"ContainerStarted","Data":"86c5e876556c41b0fb5e211ca25ebcfa34bb5bb7c5f211b9e610fdcbd0e2b962"} Dec 10 12:36:12 crc kubenswrapper[4689]: I1210 12:36:12.086231 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ironic-6d98945548-q82m5" Dec 10 12:36:12 crc kubenswrapper[4689]: I1210 12:36:12.092578 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"3033a7b9-9374-47c4-89a2-188204ccd941","Type":"ContainerStarted","Data":"f635d864f5545224ce75625e9c21307cfc549cab8347938a78d9f5dac4256aef"} Dec 10 12:36:12 crc kubenswrapper[4689]: I1210 12:36:12.092900 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Dec 10 12:36:12 crc kubenswrapper[4689]: I1210 12:36:12.101543 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"8e8f74b8-b74a-42eb-97ec-28680f9999a4","Type":"ContainerStarted","Data":"dea3ce6082655605eeca37a9e5e3550e92637ae2078c36e92241b2e3b4051cf1"} Dec 10 12:36:12 crc kubenswrapper[4689]: I1210 12:36:12.104335 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ironic-dbcd8ff8b-j52fs" podStartSLOduration=25.104311585 podStartE2EDuration="25.104311585s" podCreationTimestamp="2025-12-10 12:35:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 12:36:12.098928112 +0000 UTC m=+1239.887009250" watchObservedRunningTime="2025-12-10 12:36:12.104311585 +0000 UTC m=+1239.892392743" Dec 10 12:36:12 crc kubenswrapper[4689]: I1210 12:36:12.138089 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=20.138065379 podStartE2EDuration="20.138065379s" podCreationTimestamp="2025-12-10 12:35:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 12:36:12.119401298 +0000 UTC m=+1239.907482436" watchObservedRunningTime="2025-12-10 12:36:12.138065379 +0000 UTC m=+1239.926146517" Dec 10 12:36:12 crc kubenswrapper[4689]: I1210 12:36:12.148599 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ironic-neutron-agent-6f4566d7bf-hkx2g" Dec 10 12:36:12 crc kubenswrapper[4689]: I1210 12:36:12.154485 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ironic-6d98945548-q82m5" podStartSLOduration=17.895022466 podStartE2EDuration="32.154458834s" podCreationTimestamp="2025-12-10 12:35:40 +0000 UTC" firstStartedPulling="2025-12-10 12:35:41.92165105 +0000 UTC m=+1209.709732188" lastFinishedPulling="2025-12-10 12:35:56.181087408 +0000 UTC m=+1223.969168556" observedRunningTime="2025-12-10 12:36:12.146014995 +0000 UTC m=+1239.934096133" watchObservedRunningTime="2025-12-10 12:36:12.154458834 +0000 UTC m=+1239.942539972" Dec 10 12:36:12 crc kubenswrapper[4689]: I1210 12:36:12.170180 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=16.170157052 podStartE2EDuration="16.170157052s" podCreationTimestamp="2025-12-10 12:35:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 
UTC" observedRunningTime="2025-12-10 12:36:12.160427921 +0000 UTC m=+1239.948509059" watchObservedRunningTime="2025-12-10 12:36:12.170157052 +0000 UTC m=+1239.958238190" Dec 10 12:36:12 crc kubenswrapper[4689]: I1210 12:36:12.425774 4689 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Dec 10 12:36:12 crc kubenswrapper[4689]: I1210 12:36:12.514445 4689 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8fe1b3ff-4655-40d9-94cf-9d99dd1066db" path="/var/lib/kubelet/pods/8fe1b3ff-4655-40d9-94cf-9d99dd1066db/volumes" Dec 10 12:36:13 crc kubenswrapper[4689]: I1210 12:36:13.188814 4689 generic.go:334] "Generic (PLEG): container finished" podID="2d1a4d1f-bb78-41c4-8f64-46f7f8f03df3" containerID="d7a9d6be8ae094340acbe17d0943794bac930e48f17e0b5905006c969a35142e" exitCode=1 Dec 10 12:36:13 crc kubenswrapper[4689]: I1210 12:36:13.189015 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-6d98945548-q82m5" event={"ID":"2d1a4d1f-bb78-41c4-8f64-46f7f8f03df3","Type":"ContainerDied","Data":"d7a9d6be8ae094340acbe17d0943794bac930e48f17e0b5905006c969a35142e"} Dec 10 12:36:13 crc kubenswrapper[4689]: I1210 12:36:13.189593 4689 scope.go:117] "RemoveContainer" containerID="d7a9d6be8ae094340acbe17d0943794bac930e48f17e0b5905006c969a35142e" Dec 10 12:36:14 crc kubenswrapper[4689]: I1210 12:36:14.202428 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-db-sync-9zldg" event={"ID":"0ebb276c-ffb4-490e-bf4b-c55c0c49aa43","Type":"ContainerStarted","Data":"d6875b4c92be5f706952ecb631d0cf0025a209345134fc1d638080f6a76ad1de"} Dec 10 12:36:14 crc kubenswrapper[4689]: I1210 12:36:14.209017 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d23c5773-27cf-4f04-9e99-4c47b1131134","Type":"ContainerStarted","Data":"c7e459ae625716a5d3cdf4c144cec98b8f55fb300d7694f20a99c96a082f539a"} Dec 10 12:36:14 crc kubenswrapper[4689]: I1210 12:36:14.210545 4689 generic.go:334] "Generic (PLEG): container finished" podID="234f8267-1974-4f9e-9d13-8a239ff2660c" containerID="604e75cab9f94d67e8f119e9ac3f5bae4190da6dadf8fd9bece4d7a84522932b" exitCode=1 Dec 10 12:36:14 crc kubenswrapper[4689]: I1210 12:36:14.210591 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-neutron-agent-6f4566d7bf-hkx2g" event={"ID":"234f8267-1974-4f9e-9d13-8a239ff2660c","Type":"ContainerDied","Data":"604e75cab9f94d67e8f119e9ac3f5bae4190da6dadf8fd9bece4d7a84522932b"} Dec 10 12:36:14 crc kubenswrapper[4689]: I1210 12:36:14.211175 4689 scope.go:117] "RemoveContainer" containerID="604e75cab9f94d67e8f119e9ac3f5bae4190da6dadf8fd9bece4d7a84522932b" Dec 10 12:36:14 crc kubenswrapper[4689]: I1210 12:36:14.222919 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ironic-inspector-db-sync-9zldg" podStartSLOduration=15.228481709 podStartE2EDuration="19.222874709s" podCreationTimestamp="2025-12-10 12:35:55 +0000 UTC" firstStartedPulling="2025-12-10 12:36:09.24300444 +0000 UTC m=+1237.031085578" lastFinishedPulling="2025-12-10 12:36:13.23739744 +0000 UTC m=+1241.025478578" observedRunningTime="2025-12-10 12:36:14.2192843 +0000 UTC m=+1242.007365438" watchObservedRunningTime="2025-12-10 12:36:14.222874709 +0000 UTC m=+1242.010955837" Dec 10 12:36:14 crc kubenswrapper[4689]: I1210 12:36:14.224335 4689 generic.go:334] "Generic (PLEG): container finished" podID="2d1a4d1f-bb78-41c4-8f64-46f7f8f03df3" 
containerID="4a76e9db70c67b3e6099906d04a443e2f16f2cf870966ccc95dae2b62564819c" exitCode=1 Dec 10 12:36:14 crc kubenswrapper[4689]: I1210 12:36:14.225659 4689 scope.go:117] "RemoveContainer" containerID="4a76e9db70c67b3e6099906d04a443e2f16f2cf870966ccc95dae2b62564819c" Dec 10 12:36:14 crc kubenswrapper[4689]: E1210 12:36:14.225852 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ironic-api\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ironic-api pod=ironic-6d98945548-q82m5_openstack(2d1a4d1f-bb78-41c4-8f64-46f7f8f03df3)\"" pod="openstack/ironic-6d98945548-q82m5" podUID="2d1a4d1f-bb78-41c4-8f64-46f7f8f03df3" Dec 10 12:36:14 crc kubenswrapper[4689]: I1210 12:36:14.225880 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-6d98945548-q82m5" event={"ID":"2d1a4d1f-bb78-41c4-8f64-46f7f8f03df3","Type":"ContainerDied","Data":"4a76e9db70c67b3e6099906d04a443e2f16f2cf870966ccc95dae2b62564819c"} Dec 10 12:36:14 crc kubenswrapper[4689]: I1210 12:36:14.225902 4689 scope.go:117] "RemoveContainer" containerID="d7a9d6be8ae094340acbe17d0943794bac930e48f17e0b5905006c969a35142e" Dec 10 12:36:14 crc kubenswrapper[4689]: I1210 12:36:14.579685 4689 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 10 12:36:15 crc kubenswrapper[4689]: I1210 12:36:15.239760 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-neutron-agent-6f4566d7bf-hkx2g" event={"ID":"234f8267-1974-4f9e-9d13-8a239ff2660c","Type":"ContainerStarted","Data":"7ccfc65bb3226c03c022b28191c809b15870a9f345d62487e40f648b04f7f62f"} Dec 10 12:36:15 crc kubenswrapper[4689]: I1210 12:36:15.242958 4689 scope.go:117] "RemoveContainer" containerID="4a76e9db70c67b3e6099906d04a443e2f16f2cf870966ccc95dae2b62564819c" Dec 10 12:36:15 crc kubenswrapper[4689]: E1210 12:36:15.243192 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ironic-api\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ironic-api pod=ironic-6d98945548-q82m5_openstack(2d1a4d1f-bb78-41c4-8f64-46f7f8f03df3)\"" pod="openstack/ironic-6d98945548-q82m5" podUID="2d1a4d1f-bb78-41c4-8f64-46f7f8f03df3" Dec 10 12:36:15 crc kubenswrapper[4689]: I1210 12:36:15.543295 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-799574f579-slqvw" Dec 10 12:36:15 crc kubenswrapper[4689]: I1210 12:36:15.543791 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-799574f579-slqvw" Dec 10 12:36:15 crc kubenswrapper[4689]: I1210 12:36:15.878705 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ironic-dbcd8ff8b-j52fs" Dec 10 12:36:15 crc kubenswrapper[4689]: I1210 12:36:15.917741 4689 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/ironic-neutron-agent-6f4566d7bf-hkx2g" Dec 10 12:36:15 crc kubenswrapper[4689]: I1210 12:36:15.917836 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ironic-neutron-agent-6f4566d7bf-hkx2g" Dec 10 12:36:15 crc kubenswrapper[4689]: I1210 12:36:15.967895 4689 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ironic-6d98945548-q82m5"] Dec 10 12:36:16 crc kubenswrapper[4689]: I1210 12:36:16.184925 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ironic-6d98945548-q82m5" Dec 10 12:36:16 crc kubenswrapper[4689]: I1210 12:36:16.185004 4689 
kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/ironic-6d98945548-q82m5" Dec 10 12:36:16 crc kubenswrapper[4689]: I1210 12:36:16.250094 4689 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ironic-6d98945548-q82m5" podUID="2d1a4d1f-bb78-41c4-8f64-46f7f8f03df3" containerName="ironic-api-log" containerID="cri-o://86c5e876556c41b0fb5e211ca25ebcfa34bb5bb7c5f211b9e610fdcbd0e2b962" gracePeriod=60 Dec 10 12:36:16 crc kubenswrapper[4689]: I1210 12:36:16.251008 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ironic-neutron-agent-6f4566d7bf-hkx2g" Dec 10 12:36:17 crc kubenswrapper[4689]: I1210 12:36:17.266711 4689 generic.go:334] "Generic (PLEG): container finished" podID="2d1a4d1f-bb78-41c4-8f64-46f7f8f03df3" containerID="86c5e876556c41b0fb5e211ca25ebcfa34bb5bb7c5f211b9e610fdcbd0e2b962" exitCode=143 Dec 10 12:36:17 crc kubenswrapper[4689]: I1210 12:36:17.267113 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-6d98945548-q82m5" event={"ID":"2d1a4d1f-bb78-41c4-8f64-46f7f8f03df3","Type":"ContainerDied","Data":"86c5e876556c41b0fb5e211ca25ebcfa34bb5bb7c5f211b9e610fdcbd0e2b962"} Dec 10 12:36:17 crc kubenswrapper[4689]: I1210 12:36:17.703488 4689 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Dec 10 12:36:18 crc kubenswrapper[4689]: I1210 12:36:18.280276 4689 generic.go:334] "Generic (PLEG): container finished" podID="0ebb276c-ffb4-490e-bf4b-c55c0c49aa43" containerID="d6875b4c92be5f706952ecb631d0cf0025a209345134fc1d638080f6a76ad1de" exitCode=0 Dec 10 12:36:18 crc kubenswrapper[4689]: I1210 12:36:18.280358 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-db-sync-9zldg" event={"ID":"0ebb276c-ffb4-490e-bf4b-c55c0c49aa43","Type":"ContainerDied","Data":"d6875b4c92be5f706952ecb631d0cf0025a209345134fc1d638080f6a76ad1de"} Dec 10 12:36:18 crc kubenswrapper[4689]: I1210 12:36:18.286716 4689 generic.go:334] "Generic (PLEG): container finished" podID="234f8267-1974-4f9e-9d13-8a239ff2660c" containerID="7ccfc65bb3226c03c022b28191c809b15870a9f345d62487e40f648b04f7f62f" exitCode=1 Dec 10 12:36:18 crc kubenswrapper[4689]: I1210 12:36:18.286748 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-neutron-agent-6f4566d7bf-hkx2g" event={"ID":"234f8267-1974-4f9e-9d13-8a239ff2660c","Type":"ContainerDied","Data":"7ccfc65bb3226c03c022b28191c809b15870a9f345d62487e40f648b04f7f62f"} Dec 10 12:36:18 crc kubenswrapper[4689]: I1210 12:36:18.286775 4689 scope.go:117] "RemoveContainer" containerID="604e75cab9f94d67e8f119e9ac3f5bae4190da6dadf8fd9bece4d7a84522932b" Dec 10 12:36:18 crc kubenswrapper[4689]: I1210 12:36:18.287225 4689 scope.go:117] "RemoveContainer" containerID="7ccfc65bb3226c03c022b28191c809b15870a9f345d62487e40f648b04f7f62f" Dec 10 12:36:18 crc kubenswrapper[4689]: E1210 12:36:18.287421 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ironic-neutron-agent\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ironic-neutron-agent pod=ironic-neutron-agent-6f4566d7bf-hkx2g_openstack(234f8267-1974-4f9e-9d13-8a239ff2660c)\"" pod="openstack/ironic-neutron-agent-6f4566d7bf-hkx2g" podUID="234f8267-1974-4f9e-9d13-8a239ff2660c" Dec 10 12:36:19 crc kubenswrapper[4689]: I1210 12:36:19.305570 4689 scope.go:117] "RemoveContainer" containerID="7ccfc65bb3226c03c022b28191c809b15870a9f345d62487e40f648b04f7f62f" Dec 10 12:36:19 
crc kubenswrapper[4689]: E1210 12:36:19.306779 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ironic-neutron-agent\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ironic-neutron-agent pod=ironic-neutron-agent-6f4566d7bf-hkx2g_openstack(234f8267-1974-4f9e-9d13-8a239ff2660c)\"" pod="openstack/ironic-neutron-agent-6f4566d7bf-hkx2g" podUID="234f8267-1974-4f9e-9d13-8a239ff2660c" Dec 10 12:36:19 crc kubenswrapper[4689]: I1210 12:36:19.896762 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0" Dec 10 12:36:20 crc kubenswrapper[4689]: I1210 12:36:20.917272 4689 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/ironic-neutron-agent-6f4566d7bf-hkx2g" Dec 10 12:36:20 crc kubenswrapper[4689]: I1210 12:36:20.918237 4689 scope.go:117] "RemoveContainer" containerID="7ccfc65bb3226c03c022b28191c809b15870a9f345d62487e40f648b04f7f62f" Dec 10 12:36:20 crc kubenswrapper[4689]: E1210 12:36:20.918439 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ironic-neutron-agent\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ironic-neutron-agent pod=ironic-neutron-agent-6f4566d7bf-hkx2g_openstack(234f8267-1974-4f9e-9d13-8a239ff2660c)\"" pod="openstack/ironic-neutron-agent-6f4566d7bf-hkx2g" podUID="234f8267-1974-4f9e-9d13-8a239ff2660c" Dec 10 12:36:23 crc kubenswrapper[4689]: I1210 12:36:23.631078 4689 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-inspector-db-sync-9zldg" Dec 10 12:36:23 crc kubenswrapper[4689]: I1210 12:36:23.751571 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-ironic-inspector-dhcp-hostsdir\" (UniqueName: \"kubernetes.io/empty-dir/0ebb276c-ffb4-490e-bf4b-c55c0c49aa43-var-lib-ironic-inspector-dhcp-hostsdir\") pod \"0ebb276c-ffb4-490e-bf4b-c55c0c49aa43\" (UID: \"0ebb276c-ffb4-490e-bf4b-c55c0c49aa43\") " Dec 10 12:36:23 crc kubenswrapper[4689]: I1210 12:36:23.751643 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/0ebb276c-ffb4-490e-bf4b-c55c0c49aa43-config\") pod \"0ebb276c-ffb4-490e-bf4b-c55c0c49aa43\" (UID: \"0ebb276c-ffb4-490e-bf4b-c55c0c49aa43\") " Dec 10 12:36:23 crc kubenswrapper[4689]: I1210 12:36:23.751712 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0ebb276c-ffb4-490e-bf4b-c55c0c49aa43-combined-ca-bundle\") pod \"0ebb276c-ffb4-490e-bf4b-c55c0c49aa43\" (UID: \"0ebb276c-ffb4-490e-bf4b-c55c0c49aa43\") " Dec 10 12:36:23 crc kubenswrapper[4689]: I1210 12:36:23.751820 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qtct5\" (UniqueName: \"kubernetes.io/projected/0ebb276c-ffb4-490e-bf4b-c55c0c49aa43-kube-api-access-qtct5\") pod \"0ebb276c-ffb4-490e-bf4b-c55c0c49aa43\" (UID: \"0ebb276c-ffb4-490e-bf4b-c55c0c49aa43\") " Dec 10 12:36:23 crc kubenswrapper[4689]: I1210 12:36:23.751891 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0ebb276c-ffb4-490e-bf4b-c55c0c49aa43-scripts\") pod \"0ebb276c-ffb4-490e-bf4b-c55c0c49aa43\" (UID: \"0ebb276c-ffb4-490e-bf4b-c55c0c49aa43\") " Dec 10 12:36:23 crc kubenswrapper[4689]: I1210 12:36:23.751919 4689 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/0ebb276c-ffb4-490e-bf4b-c55c0c49aa43-etc-podinfo\") pod \"0ebb276c-ffb4-490e-bf4b-c55c0c49aa43\" (UID: \"0ebb276c-ffb4-490e-bf4b-c55c0c49aa43\") " Dec 10 12:36:23 crc kubenswrapper[4689]: I1210 12:36:23.751942 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-ironic\" (UniqueName: \"kubernetes.io/empty-dir/0ebb276c-ffb4-490e-bf4b-c55c0c49aa43-var-lib-ironic\") pod \"0ebb276c-ffb4-490e-bf4b-c55c0c49aa43\" (UID: \"0ebb276c-ffb4-490e-bf4b-c55c0c49aa43\") " Dec 10 12:36:23 crc kubenswrapper[4689]: I1210 12:36:23.752752 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0ebb276c-ffb4-490e-bf4b-c55c0c49aa43-var-lib-ironic" (OuterVolumeSpecName: "var-lib-ironic") pod "0ebb276c-ffb4-490e-bf4b-c55c0c49aa43" (UID: "0ebb276c-ffb4-490e-bf4b-c55c0c49aa43"). InnerVolumeSpecName "var-lib-ironic". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 10 12:36:23 crc kubenswrapper[4689]: I1210 12:36:23.752960 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0ebb276c-ffb4-490e-bf4b-c55c0c49aa43-var-lib-ironic-inspector-dhcp-hostsdir" (OuterVolumeSpecName: "var-lib-ironic-inspector-dhcp-hostsdir") pod "0ebb276c-ffb4-490e-bf4b-c55c0c49aa43" (UID: "0ebb276c-ffb4-490e-bf4b-c55c0c49aa43"). InnerVolumeSpecName "var-lib-ironic-inspector-dhcp-hostsdir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 10 12:36:23 crc kubenswrapper[4689]: I1210 12:36:23.758558 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0ebb276c-ffb4-490e-bf4b-c55c0c49aa43-kube-api-access-qtct5" (OuterVolumeSpecName: "kube-api-access-qtct5") pod "0ebb276c-ffb4-490e-bf4b-c55c0c49aa43" (UID: "0ebb276c-ffb4-490e-bf4b-c55c0c49aa43"). InnerVolumeSpecName "kube-api-access-qtct5". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 12:36:23 crc kubenswrapper[4689]: I1210 12:36:23.765287 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0ebb276c-ffb4-490e-bf4b-c55c0c49aa43-scripts" (OuterVolumeSpecName: "scripts") pod "0ebb276c-ffb4-490e-bf4b-c55c0c49aa43" (UID: "0ebb276c-ffb4-490e-bf4b-c55c0c49aa43"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 12:36:23 crc kubenswrapper[4689]: I1210 12:36:23.792201 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/0ebb276c-ffb4-490e-bf4b-c55c0c49aa43-etc-podinfo" (OuterVolumeSpecName: "etc-podinfo") pod "0ebb276c-ffb4-490e-bf4b-c55c0c49aa43" (UID: "0ebb276c-ffb4-490e-bf4b-c55c0c49aa43"). InnerVolumeSpecName "etc-podinfo". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Dec 10 12:36:23 crc kubenswrapper[4689]: I1210 12:36:23.799722 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0ebb276c-ffb4-490e-bf4b-c55c0c49aa43-config" (OuterVolumeSpecName: "config") pod "0ebb276c-ffb4-490e-bf4b-c55c0c49aa43" (UID: "0ebb276c-ffb4-490e-bf4b-c55c0c49aa43"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 12:36:23 crc kubenswrapper[4689]: I1210 12:36:23.837872 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0ebb276c-ffb4-490e-bf4b-c55c0c49aa43-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0ebb276c-ffb4-490e-bf4b-c55c0c49aa43" (UID: "0ebb276c-ffb4-490e-bf4b-c55c0c49aa43"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 12:36:23 crc kubenswrapper[4689]: I1210 12:36:23.861752 4689 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qtct5\" (UniqueName: \"kubernetes.io/projected/0ebb276c-ffb4-490e-bf4b-c55c0c49aa43-kube-api-access-qtct5\") on node \"crc\" DevicePath \"\"" Dec 10 12:36:23 crc kubenswrapper[4689]: I1210 12:36:23.865465 4689 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0ebb276c-ffb4-490e-bf4b-c55c0c49aa43-scripts\") on node \"crc\" DevicePath \"\"" Dec 10 12:36:23 crc kubenswrapper[4689]: I1210 12:36:23.865517 4689 reconciler_common.go:293] "Volume detached for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/0ebb276c-ffb4-490e-bf4b-c55c0c49aa43-etc-podinfo\") on node \"crc\" DevicePath \"\"" Dec 10 12:36:23 crc kubenswrapper[4689]: I1210 12:36:23.865529 4689 reconciler_common.go:293] "Volume detached for volume \"var-lib-ironic\" (UniqueName: \"kubernetes.io/empty-dir/0ebb276c-ffb4-490e-bf4b-c55c0c49aa43-var-lib-ironic\") on node \"crc\" DevicePath \"\"" Dec 10 12:36:23 crc kubenswrapper[4689]: I1210 12:36:23.865539 4689 reconciler_common.go:293] "Volume detached for volume \"var-lib-ironic-inspector-dhcp-hostsdir\" (UniqueName: \"kubernetes.io/empty-dir/0ebb276c-ffb4-490e-bf4b-c55c0c49aa43-var-lib-ironic-inspector-dhcp-hostsdir\") on node \"crc\" DevicePath \"\"" Dec 10 12:36:23 crc kubenswrapper[4689]: I1210 12:36:23.865553 4689 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/0ebb276c-ffb4-490e-bf4b-c55c0c49aa43-config\") on node \"crc\" DevicePath \"\"" Dec 10 12:36:23 crc kubenswrapper[4689]: I1210 12:36:23.865591 4689 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0ebb276c-ffb4-490e-bf4b-c55c0c49aa43-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 10 12:36:23 crc kubenswrapper[4689]: I1210 12:36:23.984237 4689 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ironic-6d98945548-q82m5" Dec 10 12:36:24 crc kubenswrapper[4689]: I1210 12:36:24.068923 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2d1a4d1f-bb78-41c4-8f64-46f7f8f03df3-config-data-custom\") pod \"2d1a4d1f-bb78-41c4-8f64-46f7f8f03df3\" (UID: \"2d1a4d1f-bb78-41c4-8f64-46f7f8f03df3\") " Dec 10 12:36:24 crc kubenswrapper[4689]: I1210 12:36:24.069256 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/2d1a4d1f-bb78-41c4-8f64-46f7f8f03df3-config-data-merged\") pod \"2d1a4d1f-bb78-41c4-8f64-46f7f8f03df3\" (UID: \"2d1a4d1f-bb78-41c4-8f64-46f7f8f03df3\") " Dec 10 12:36:24 crc kubenswrapper[4689]: I1210 12:36:24.069307 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2d1a4d1f-bb78-41c4-8f64-46f7f8f03df3-config-data\") pod \"2d1a4d1f-bb78-41c4-8f64-46f7f8f03df3\" (UID: \"2d1a4d1f-bb78-41c4-8f64-46f7f8f03df3\") " Dec 10 12:36:24 crc kubenswrapper[4689]: I1210 12:36:24.069387 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c52zm\" (UniqueName: \"kubernetes.io/projected/2d1a4d1f-bb78-41c4-8f64-46f7f8f03df3-kube-api-access-c52zm\") pod \"2d1a4d1f-bb78-41c4-8f64-46f7f8f03df3\" (UID: \"2d1a4d1f-bb78-41c4-8f64-46f7f8f03df3\") " Dec 10 12:36:24 crc kubenswrapper[4689]: I1210 12:36:24.069450 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/2d1a4d1f-bb78-41c4-8f64-46f7f8f03df3-etc-podinfo\") pod \"2d1a4d1f-bb78-41c4-8f64-46f7f8f03df3\" (UID: \"2d1a4d1f-bb78-41c4-8f64-46f7f8f03df3\") " Dec 10 12:36:24 crc kubenswrapper[4689]: I1210 12:36:24.069552 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2d1a4d1f-bb78-41c4-8f64-46f7f8f03df3-logs\") pod \"2d1a4d1f-bb78-41c4-8f64-46f7f8f03df3\" (UID: \"2d1a4d1f-bb78-41c4-8f64-46f7f8f03df3\") " Dec 10 12:36:24 crc kubenswrapper[4689]: I1210 12:36:24.069582 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d1a4d1f-bb78-41c4-8f64-46f7f8f03df3-combined-ca-bundle\") pod \"2d1a4d1f-bb78-41c4-8f64-46f7f8f03df3\" (UID: \"2d1a4d1f-bb78-41c4-8f64-46f7f8f03df3\") " Dec 10 12:36:24 crc kubenswrapper[4689]: I1210 12:36:24.069652 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2d1a4d1f-bb78-41c4-8f64-46f7f8f03df3-scripts\") pod \"2d1a4d1f-bb78-41c4-8f64-46f7f8f03df3\" (UID: \"2d1a4d1f-bb78-41c4-8f64-46f7f8f03df3\") " Dec 10 12:36:24 crc kubenswrapper[4689]: I1210 12:36:24.071237 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2d1a4d1f-bb78-41c4-8f64-46f7f8f03df3-config-data-merged" (OuterVolumeSpecName: "config-data-merged") pod "2d1a4d1f-bb78-41c4-8f64-46f7f8f03df3" (UID: "2d1a4d1f-bb78-41c4-8f64-46f7f8f03df3"). InnerVolumeSpecName "config-data-merged". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 10 12:36:24 crc kubenswrapper[4689]: I1210 12:36:24.071853 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2d1a4d1f-bb78-41c4-8f64-46f7f8f03df3-logs" (OuterVolumeSpecName: "logs") pod "2d1a4d1f-bb78-41c4-8f64-46f7f8f03df3" (UID: "2d1a4d1f-bb78-41c4-8f64-46f7f8f03df3"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 10 12:36:24 crc kubenswrapper[4689]: I1210 12:36:24.078056 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2d1a4d1f-bb78-41c4-8f64-46f7f8f03df3-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "2d1a4d1f-bb78-41c4-8f64-46f7f8f03df3" (UID: "2d1a4d1f-bb78-41c4-8f64-46f7f8f03df3"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 12:36:24 crc kubenswrapper[4689]: I1210 12:36:24.079836 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2d1a4d1f-bb78-41c4-8f64-46f7f8f03df3-scripts" (OuterVolumeSpecName: "scripts") pod "2d1a4d1f-bb78-41c4-8f64-46f7f8f03df3" (UID: "2d1a4d1f-bb78-41c4-8f64-46f7f8f03df3"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 12:36:24 crc kubenswrapper[4689]: I1210 12:36:24.080000 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/2d1a4d1f-bb78-41c4-8f64-46f7f8f03df3-etc-podinfo" (OuterVolumeSpecName: "etc-podinfo") pod "2d1a4d1f-bb78-41c4-8f64-46f7f8f03df3" (UID: "2d1a4d1f-bb78-41c4-8f64-46f7f8f03df3"). InnerVolumeSpecName "etc-podinfo". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Dec 10 12:36:24 crc kubenswrapper[4689]: I1210 12:36:24.084263 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2d1a4d1f-bb78-41c4-8f64-46f7f8f03df3-kube-api-access-c52zm" (OuterVolumeSpecName: "kube-api-access-c52zm") pod "2d1a4d1f-bb78-41c4-8f64-46f7f8f03df3" (UID: "2d1a4d1f-bb78-41c4-8f64-46f7f8f03df3"). InnerVolumeSpecName "kube-api-access-c52zm". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 12:36:24 crc kubenswrapper[4689]: I1210 12:36:24.158063 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2d1a4d1f-bb78-41c4-8f64-46f7f8f03df3-config-data" (OuterVolumeSpecName: "config-data") pod "2d1a4d1f-bb78-41c4-8f64-46f7f8f03df3" (UID: "2d1a4d1f-bb78-41c4-8f64-46f7f8f03df3"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 12:36:24 crc kubenswrapper[4689]: I1210 12:36:24.172886 4689 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c52zm\" (UniqueName: \"kubernetes.io/projected/2d1a4d1f-bb78-41c4-8f64-46f7f8f03df3-kube-api-access-c52zm\") on node \"crc\" DevicePath \"\"" Dec 10 12:36:24 crc kubenswrapper[4689]: I1210 12:36:24.172998 4689 reconciler_common.go:293] "Volume detached for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/2d1a4d1f-bb78-41c4-8f64-46f7f8f03df3-etc-podinfo\") on node \"crc\" DevicePath \"\"" Dec 10 12:36:24 crc kubenswrapper[4689]: I1210 12:36:24.173014 4689 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2d1a4d1f-bb78-41c4-8f64-46f7f8f03df3-logs\") on node \"crc\" DevicePath \"\"" Dec 10 12:36:24 crc kubenswrapper[4689]: I1210 12:36:24.173026 4689 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2d1a4d1f-bb78-41c4-8f64-46f7f8f03df3-scripts\") on node \"crc\" DevicePath \"\"" Dec 10 12:36:24 crc kubenswrapper[4689]: I1210 12:36:24.173038 4689 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2d1a4d1f-bb78-41c4-8f64-46f7f8f03df3-config-data-custom\") on node \"crc\" DevicePath \"\"" Dec 10 12:36:24 crc kubenswrapper[4689]: I1210 12:36:24.173049 4689 reconciler_common.go:293] "Volume detached for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/2d1a4d1f-bb78-41c4-8f64-46f7f8f03df3-config-data-merged\") on node \"crc\" DevicePath \"\"" Dec 10 12:36:24 crc kubenswrapper[4689]: I1210 12:36:24.173060 4689 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2d1a4d1f-bb78-41c4-8f64-46f7f8f03df3-config-data\") on node \"crc\" DevicePath \"\"" Dec 10 12:36:24 crc kubenswrapper[4689]: I1210 12:36:24.181267 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2d1a4d1f-bb78-41c4-8f64-46f7f8f03df3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2d1a4d1f-bb78-41c4-8f64-46f7f8f03df3" (UID: "2d1a4d1f-bb78-41c4-8f64-46f7f8f03df3"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 12:36:24 crc kubenswrapper[4689]: I1210 12:36:24.275066 4689 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d1a4d1f-bb78-41c4-8f64-46f7f8f03df3-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 10 12:36:24 crc kubenswrapper[4689]: I1210 12:36:24.364010 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d23c5773-27cf-4f04-9e99-4c47b1131134","Type":"ContainerStarted","Data":"b9f12ea141af42c5d3306b4179960c35791dab9312a271097372c0b0a7ecbad2"} Dec 10 12:36:24 crc kubenswrapper[4689]: I1210 12:36:24.374552 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-6d98945548-q82m5" event={"ID":"2d1a4d1f-bb78-41c4-8f64-46f7f8f03df3","Type":"ContainerDied","Data":"47021a7e07c2d775080d77f7324cbf146810931f0d082bc250f7f5d65e16bb2e"} Dec 10 12:36:24 crc kubenswrapper[4689]: I1210 12:36:24.374615 4689 scope.go:117] "RemoveContainer" containerID="4a76e9db70c67b3e6099906d04a443e2f16f2cf870966ccc95dae2b62564819c" Dec 10 12:36:24 crc kubenswrapper[4689]: I1210 12:36:24.374733 4689 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ironic-6d98945548-q82m5" Dec 10 12:36:24 crc kubenswrapper[4689]: I1210 12:36:24.386861 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"9ba6aa57-fcd5-4e81-aeec-18115df06abb","Type":"ContainerStarted","Data":"c1c95f28e6e32dbef57c31fd116c63941432f7248e9d2974176ffd679cd32ce6"} Dec 10 12:36:24 crc kubenswrapper[4689]: I1210 12:36:24.392213 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-db-sync-9zldg" event={"ID":"0ebb276c-ffb4-490e-bf4b-c55c0c49aa43","Type":"ContainerDied","Data":"ddf542afc3a7bcc1ba66e53a8ccbb344866e6347bd910a195a8cab65caf115c8"} Dec 10 12:36:24 crc kubenswrapper[4689]: I1210 12:36:24.392253 4689 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ddf542afc3a7bcc1ba66e53a8ccbb344866e6347bd910a195a8cab65caf115c8" Dec 10 12:36:24 crc kubenswrapper[4689]: I1210 12:36:24.392351 4689 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-inspector-db-sync-9zldg" Dec 10 12:36:24 crc kubenswrapper[4689]: I1210 12:36:24.407121 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-conductor-0" event={"ID":"23e46f1d-5919-4baa-aeef-1364104b63fb","Type":"ContainerStarted","Data":"b04208f517e5c9861bf9e5bdcce642d78def43002ef512f081de4332500ba158"} Dec 10 12:36:24 crc kubenswrapper[4689]: I1210 12:36:24.427135 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=9.2087134 podStartE2EDuration="23.427117556s" podCreationTimestamp="2025-12-10 12:36:01 +0000 UTC" firstStartedPulling="2025-12-10 12:36:09.621798929 +0000 UTC m=+1237.409880067" lastFinishedPulling="2025-12-10 12:36:23.840203085 +0000 UTC m=+1251.628284223" observedRunningTime="2025-12-10 12:36:24.405877911 +0000 UTC m=+1252.193959049" watchObservedRunningTime="2025-12-10 12:36:24.427117556 +0000 UTC m=+1252.215198704" Dec 10 12:36:24 crc kubenswrapper[4689]: I1210 12:36:24.451868 4689 scope.go:117] "RemoveContainer" containerID="86c5e876556c41b0fb5e211ca25ebcfa34bb5bb7c5f211b9e610fdcbd0e2b962" Dec 10 12:36:24 crc kubenswrapper[4689]: I1210 12:36:24.457961 4689 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ironic-6d98945548-q82m5"] Dec 10 12:36:24 crc kubenswrapper[4689]: I1210 12:36:24.467509 4689 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ironic-6d98945548-q82m5"] Dec 10 12:36:24 crc kubenswrapper[4689]: I1210 12:36:24.490072 4689 scope.go:117] "RemoveContainer" containerID="15ed75b995654c29cca38d687dc7faaf5d495d0d44110cecb9d87ce282a9be40" Dec 10 12:36:24 crc kubenswrapper[4689]: I1210 12:36:24.511704 4689 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2d1a4d1f-bb78-41c4-8f64-46f7f8f03df3" path="/var/lib/kubelet/pods/2d1a4d1f-bb78-41c4-8f64-46f7f8f03df3/volumes" Dec 10 12:36:26 crc kubenswrapper[4689]: I1210 12:36:26.012796 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ironic-inspector-0"] Dec 10 12:36:26 crc kubenswrapper[4689]: E1210 12:36:26.013529 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0ebb276c-ffb4-490e-bf4b-c55c0c49aa43" containerName="ironic-inspector-db-sync" Dec 10 12:36:26 crc kubenswrapper[4689]: I1210 12:36:26.013549 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="0ebb276c-ffb4-490e-bf4b-c55c0c49aa43" containerName="ironic-inspector-db-sync" Dec 10 12:36:26 crc kubenswrapper[4689]: E1210 
12:36:26.013560 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2d1a4d1f-bb78-41c4-8f64-46f7f8f03df3" containerName="ironic-api-log" Dec 10 12:36:26 crc kubenswrapper[4689]: I1210 12:36:26.013566 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="2d1a4d1f-bb78-41c4-8f64-46f7f8f03df3" containerName="ironic-api-log" Dec 10 12:36:26 crc kubenswrapper[4689]: E1210 12:36:26.013598 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2d1a4d1f-bb78-41c4-8f64-46f7f8f03df3" containerName="ironic-api" Dec 10 12:36:26 crc kubenswrapper[4689]: I1210 12:36:26.013606 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="2d1a4d1f-bb78-41c4-8f64-46f7f8f03df3" containerName="ironic-api" Dec 10 12:36:26 crc kubenswrapper[4689]: E1210 12:36:26.013618 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2d1a4d1f-bb78-41c4-8f64-46f7f8f03df3" containerName="init" Dec 10 12:36:26 crc kubenswrapper[4689]: I1210 12:36:26.013624 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="2d1a4d1f-bb78-41c4-8f64-46f7f8f03df3" containerName="init" Dec 10 12:36:26 crc kubenswrapper[4689]: I1210 12:36:26.013821 4689 memory_manager.go:354] "RemoveStaleState removing state" podUID="2d1a4d1f-bb78-41c4-8f64-46f7f8f03df3" containerName="ironic-api" Dec 10 12:36:26 crc kubenswrapper[4689]: I1210 12:36:26.013848 4689 memory_manager.go:354] "RemoveStaleState removing state" podUID="2d1a4d1f-bb78-41c4-8f64-46f7f8f03df3" containerName="ironic-api-log" Dec 10 12:36:26 crc kubenswrapper[4689]: I1210 12:36:26.013866 4689 memory_manager.go:354] "RemoveStaleState removing state" podUID="0ebb276c-ffb4-490e-bf4b-c55c0c49aa43" containerName="ironic-inspector-db-sync" Dec 10 12:36:26 crc kubenswrapper[4689]: E1210 12:36:26.014127 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2d1a4d1f-bb78-41c4-8f64-46f7f8f03df3" containerName="ironic-api" Dec 10 12:36:26 crc kubenswrapper[4689]: I1210 12:36:26.014140 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="2d1a4d1f-bb78-41c4-8f64-46f7f8f03df3" containerName="ironic-api" Dec 10 12:36:26 crc kubenswrapper[4689]: I1210 12:36:26.014378 4689 memory_manager.go:354] "RemoveStaleState removing state" podUID="2d1a4d1f-bb78-41c4-8f64-46f7f8f03df3" containerName="ironic-api" Dec 10 12:36:26 crc kubenswrapper[4689]: I1210 12:36:26.016753 4689 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ironic-inspector-0" Dec 10 12:36:26 crc kubenswrapper[4689]: I1210 12:36:26.021906 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ironic-inspector-scripts" Dec 10 12:36:26 crc kubenswrapper[4689]: I1210 12:36:26.022277 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ironic-inspector-config-data" Dec 10 12:36:26 crc kubenswrapper[4689]: I1210 12:36:26.054078 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-inspector-0"] Dec 10 12:36:26 crc kubenswrapper[4689]: I1210 12:36:26.111857 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/7841c545-a50f-4add-8f2f-0ab8310938af-etc-podinfo\") pod \"ironic-inspector-0\" (UID: \"7841c545-a50f-4add-8f2f-0ab8310938af\") " pod="openstack/ironic-inspector-0" Dec 10 12:36:26 crc kubenswrapper[4689]: I1210 12:36:26.111932 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7841c545-a50f-4add-8f2f-0ab8310938af-combined-ca-bundle\") pod \"ironic-inspector-0\" (UID: \"7841c545-a50f-4add-8f2f-0ab8310938af\") " pod="openstack/ironic-inspector-0" Dec 10 12:36:26 crc kubenswrapper[4689]: I1210 12:36:26.112063 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hm4nf\" (UniqueName: \"kubernetes.io/projected/7841c545-a50f-4add-8f2f-0ab8310938af-kube-api-access-hm4nf\") pod \"ironic-inspector-0\" (UID: \"7841c545-a50f-4add-8f2f-0ab8310938af\") " pod="openstack/ironic-inspector-0" Dec 10 12:36:26 crc kubenswrapper[4689]: I1210 12:36:26.112160 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-ironic-inspector-dhcp-hostsdir\" (UniqueName: \"kubernetes.io/empty-dir/7841c545-a50f-4add-8f2f-0ab8310938af-var-lib-ironic-inspector-dhcp-hostsdir\") pod \"ironic-inspector-0\" (UID: \"7841c545-a50f-4add-8f2f-0ab8310938af\") " pod="openstack/ironic-inspector-0" Dec 10 12:36:26 crc kubenswrapper[4689]: I1210 12:36:26.112191 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7841c545-a50f-4add-8f2f-0ab8310938af-scripts\") pod \"ironic-inspector-0\" (UID: \"7841c545-a50f-4add-8f2f-0ab8310938af\") " pod="openstack/ironic-inspector-0" Dec 10 12:36:26 crc kubenswrapper[4689]: I1210 12:36:26.112311 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-ironic\" (UniqueName: \"kubernetes.io/empty-dir/7841c545-a50f-4add-8f2f-0ab8310938af-var-lib-ironic\") pod \"ironic-inspector-0\" (UID: \"7841c545-a50f-4add-8f2f-0ab8310938af\") " pod="openstack/ironic-inspector-0" Dec 10 12:36:26 crc kubenswrapper[4689]: I1210 12:36:26.112346 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/7841c545-a50f-4add-8f2f-0ab8310938af-config\") pod \"ironic-inspector-0\" (UID: \"7841c545-a50f-4add-8f2f-0ab8310938af\") " pod="openstack/ironic-inspector-0" Dec 10 12:36:26 crc kubenswrapper[4689]: I1210 12:36:26.233820 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-ironic\" (UniqueName: 
\"kubernetes.io/empty-dir/7841c545-a50f-4add-8f2f-0ab8310938af-var-lib-ironic\") pod \"ironic-inspector-0\" (UID: \"7841c545-a50f-4add-8f2f-0ab8310938af\") " pod="openstack/ironic-inspector-0" Dec 10 12:36:26 crc kubenswrapper[4689]: I1210 12:36:26.233874 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/7841c545-a50f-4add-8f2f-0ab8310938af-config\") pod \"ironic-inspector-0\" (UID: \"7841c545-a50f-4add-8f2f-0ab8310938af\") " pod="openstack/ironic-inspector-0" Dec 10 12:36:26 crc kubenswrapper[4689]: I1210 12:36:26.233900 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/7841c545-a50f-4add-8f2f-0ab8310938af-etc-podinfo\") pod \"ironic-inspector-0\" (UID: \"7841c545-a50f-4add-8f2f-0ab8310938af\") " pod="openstack/ironic-inspector-0" Dec 10 12:36:26 crc kubenswrapper[4689]: I1210 12:36:26.233954 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7841c545-a50f-4add-8f2f-0ab8310938af-combined-ca-bundle\") pod \"ironic-inspector-0\" (UID: \"7841c545-a50f-4add-8f2f-0ab8310938af\") " pod="openstack/ironic-inspector-0" Dec 10 12:36:26 crc kubenswrapper[4689]: I1210 12:36:26.234012 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hm4nf\" (UniqueName: \"kubernetes.io/projected/7841c545-a50f-4add-8f2f-0ab8310938af-kube-api-access-hm4nf\") pod \"ironic-inspector-0\" (UID: \"7841c545-a50f-4add-8f2f-0ab8310938af\") " pod="openstack/ironic-inspector-0" Dec 10 12:36:26 crc kubenswrapper[4689]: I1210 12:36:26.234067 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-ironic-inspector-dhcp-hostsdir\" (UniqueName: \"kubernetes.io/empty-dir/7841c545-a50f-4add-8f2f-0ab8310938af-var-lib-ironic-inspector-dhcp-hostsdir\") pod \"ironic-inspector-0\" (UID: \"7841c545-a50f-4add-8f2f-0ab8310938af\") " pod="openstack/ironic-inspector-0" Dec 10 12:36:26 crc kubenswrapper[4689]: I1210 12:36:26.234086 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7841c545-a50f-4add-8f2f-0ab8310938af-scripts\") pod \"ironic-inspector-0\" (UID: \"7841c545-a50f-4add-8f2f-0ab8310938af\") " pod="openstack/ironic-inspector-0" Dec 10 12:36:26 crc kubenswrapper[4689]: I1210 12:36:26.234346 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-ironic\" (UniqueName: \"kubernetes.io/empty-dir/7841c545-a50f-4add-8f2f-0ab8310938af-var-lib-ironic\") pod \"ironic-inspector-0\" (UID: \"7841c545-a50f-4add-8f2f-0ab8310938af\") " pod="openstack/ironic-inspector-0" Dec 10 12:36:26 crc kubenswrapper[4689]: I1210 12:36:26.235102 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-ironic-inspector-dhcp-hostsdir\" (UniqueName: \"kubernetes.io/empty-dir/7841c545-a50f-4add-8f2f-0ab8310938af-var-lib-ironic-inspector-dhcp-hostsdir\") pod \"ironic-inspector-0\" (UID: \"7841c545-a50f-4add-8f2f-0ab8310938af\") " pod="openstack/ironic-inspector-0" Dec 10 12:36:26 crc kubenswrapper[4689]: I1210 12:36:26.238618 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/7841c545-a50f-4add-8f2f-0ab8310938af-etc-podinfo\") pod \"ironic-inspector-0\" (UID: \"7841c545-a50f-4add-8f2f-0ab8310938af\") " 
pod="openstack/ironic-inspector-0" Dec 10 12:36:26 crc kubenswrapper[4689]: I1210 12:36:26.241729 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7841c545-a50f-4add-8f2f-0ab8310938af-scripts\") pod \"ironic-inspector-0\" (UID: \"7841c545-a50f-4add-8f2f-0ab8310938af\") " pod="openstack/ironic-inspector-0" Dec 10 12:36:26 crc kubenswrapper[4689]: I1210 12:36:26.244052 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7841c545-a50f-4add-8f2f-0ab8310938af-combined-ca-bundle\") pod \"ironic-inspector-0\" (UID: \"7841c545-a50f-4add-8f2f-0ab8310938af\") " pod="openstack/ironic-inspector-0" Dec 10 12:36:26 crc kubenswrapper[4689]: I1210 12:36:26.247584 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/7841c545-a50f-4add-8f2f-0ab8310938af-config\") pod \"ironic-inspector-0\" (UID: \"7841c545-a50f-4add-8f2f-0ab8310938af\") " pod="openstack/ironic-inspector-0" Dec 10 12:36:26 crc kubenswrapper[4689]: I1210 12:36:26.251494 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hm4nf\" (UniqueName: \"kubernetes.io/projected/7841c545-a50f-4add-8f2f-0ab8310938af-kube-api-access-hm4nf\") pod \"ironic-inspector-0\" (UID: \"7841c545-a50f-4add-8f2f-0ab8310938af\") " pod="openstack/ironic-inspector-0" Dec 10 12:36:26 crc kubenswrapper[4689]: I1210 12:36:26.353499 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-inspector-0" Dec 10 12:36:26 crc kubenswrapper[4689]: I1210 12:36:26.431902 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d23c5773-27cf-4f04-9e99-4c47b1131134","Type":"ContainerStarted","Data":"fc10950449cd38a1e5409431d5554486cc8759fd4bfe2afee445026c35e310a2"} Dec 10 12:36:26 crc kubenswrapper[4689]: I1210 12:36:26.870274 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-inspector-0"] Dec 10 12:36:27 crc kubenswrapper[4689]: I1210 12:36:27.444010 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-0" event={"ID":"7841c545-a50f-4add-8f2f-0ab8310938af","Type":"ContainerStarted","Data":"2989803a245d67c08a5770a96f5b1cdbaf6a640445decdb771abaedc15e4ebc6"} Dec 10 12:36:28 crc kubenswrapper[4689]: I1210 12:36:28.018113 4689 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/cinder-api-0" podUID="3033a7b9-9374-47c4-89a2-188204ccd941" containerName="cinder-api" probeResult="failure" output="Get \"https://10.217.0.168:8776/healthcheck\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Dec 10 12:36:28 crc kubenswrapper[4689]: I1210 12:36:28.939072 4689 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Dec 10 12:36:28 crc kubenswrapper[4689]: I1210 12:36:28.940540 4689 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="aac151fb-e17c-4bd3-b2c9-bd1ed2e01c7c" containerName="glance-log" containerID="cri-o://04acef4caa91113cb7fa3a016eba65086b09c90d1343a0fae707547215a21338" gracePeriod=30 Dec 10 12:36:28 crc kubenswrapper[4689]: I1210 12:36:28.940587 4689 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="aac151fb-e17c-4bd3-b2c9-bd1ed2e01c7c" containerName="glance-httpd" 
containerID="cri-o://f3fc097cd14dd750789d612bb5c1f1ac70f37fcc67df585c29527ae53325bc07" gracePeriod=30 Dec 10 12:36:29 crc kubenswrapper[4689]: I1210 12:36:29.005277 4689 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ironic-inspector-0"] Dec 10 12:36:29 crc kubenswrapper[4689]: I1210 12:36:29.464943 4689 generic.go:334] "Generic (PLEG): container finished" podID="aac151fb-e17c-4bd3-b2c9-bd1ed2e01c7c" containerID="04acef4caa91113cb7fa3a016eba65086b09c90d1343a0fae707547215a21338" exitCode=143 Dec 10 12:36:29 crc kubenswrapper[4689]: I1210 12:36:29.465007 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"aac151fb-e17c-4bd3-b2c9-bd1ed2e01c7c","Type":"ContainerDied","Data":"04acef4caa91113cb7fa3a016eba65086b09c90d1343a0fae707547215a21338"} Dec 10 12:36:29 crc kubenswrapper[4689]: I1210 12:36:29.467419 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d23c5773-27cf-4f04-9e99-4c47b1131134","Type":"ContainerStarted","Data":"027325026e6b43817055ad15d70760291c7ab34d431d3790e95299f76e8f03ae"} Dec 10 12:36:29 crc kubenswrapper[4689]: I1210 12:36:29.467486 4689 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="d23c5773-27cf-4f04-9e99-4c47b1131134" containerName="ceilometer-central-agent" containerID="cri-o://c7e459ae625716a5d3cdf4c144cec98b8f55fb300d7694f20a99c96a082f539a" gracePeriod=30 Dec 10 12:36:29 crc kubenswrapper[4689]: I1210 12:36:29.467572 4689 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="d23c5773-27cf-4f04-9e99-4c47b1131134" containerName="ceilometer-notification-agent" containerID="cri-o://b9f12ea141af42c5d3306b4179960c35791dab9312a271097372c0b0a7ecbad2" gracePeriod=30 Dec 10 12:36:29 crc kubenswrapper[4689]: I1210 12:36:29.467606 4689 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="d23c5773-27cf-4f04-9e99-4c47b1131134" containerName="sg-core" containerID="cri-o://fc10950449cd38a1e5409431d5554486cc8759fd4bfe2afee445026c35e310a2" gracePeriod=30 Dec 10 12:36:29 crc kubenswrapper[4689]: I1210 12:36:29.467641 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Dec 10 12:36:29 crc kubenswrapper[4689]: I1210 12:36:29.467661 4689 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="d23c5773-27cf-4f04-9e99-4c47b1131134" containerName="proxy-httpd" containerID="cri-o://027325026e6b43817055ad15d70760291c7ab34d431d3790e95299f76e8f03ae" gracePeriod=30 Dec 10 12:36:29 crc kubenswrapper[4689]: I1210 12:36:29.473756 4689 generic.go:334] "Generic (PLEG): container finished" podID="7841c545-a50f-4add-8f2f-0ab8310938af" containerID="f5b3f5d7e0db2e3809ac1a54515b44c30d5d3c90881fbc6488396050dd103d85" exitCode=0 Dec 10 12:36:29 crc kubenswrapper[4689]: I1210 12:36:29.473818 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-0" event={"ID":"7841c545-a50f-4add-8f2f-0ab8310938af","Type":"ContainerDied","Data":"f5b3f5d7e0db2e3809ac1a54515b44c30d5d3c90881fbc6488396050dd103d85"} Dec 10 12:36:29 crc kubenswrapper[4689]: I1210 12:36:29.477407 4689 generic.go:334] "Generic (PLEG): container finished" podID="23e46f1d-5919-4baa-aeef-1364104b63fb" containerID="b04208f517e5c9861bf9e5bdcce642d78def43002ef512f081de4332500ba158" exitCode=0 Dec 10 12:36:29 crc kubenswrapper[4689]: I1210 
12:36:29.477438 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-conductor-0" event={"ID":"23e46f1d-5919-4baa-aeef-1364104b63fb","Type":"ContainerDied","Data":"b04208f517e5c9861bf9e5bdcce642d78def43002ef512f081de4332500ba158"} Dec 10 12:36:29 crc kubenswrapper[4689]: I1210 12:36:29.496939 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.216250211 podStartE2EDuration="19.496921476s" podCreationTimestamp="2025-12-10 12:36:10 +0000 UTC" firstStartedPulling="2025-12-10 12:36:11.251441923 +0000 UTC m=+1239.039523061" lastFinishedPulling="2025-12-10 12:36:28.532113188 +0000 UTC m=+1256.320194326" observedRunningTime="2025-12-10 12:36:29.490529758 +0000 UTC m=+1257.278610896" watchObservedRunningTime="2025-12-10 12:36:29.496921476 +0000 UTC m=+1257.285002604" Dec 10 12:36:30 crc kubenswrapper[4689]: I1210 12:36:30.045480 4689 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 10 12:36:30 crc kubenswrapper[4689]: I1210 12:36:30.046008 4689 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="cf5e0487-a380-424f-aa29-f815b50550db" containerName="glance-log" containerID="cri-o://2a55af2cef7a7e1d0e6e21d2022cf74765424cde599a0aa0a053d82425f86759" gracePeriod=30 Dec 10 12:36:30 crc kubenswrapper[4689]: I1210 12:36:30.046488 4689 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="cf5e0487-a380-424f-aa29-f815b50550db" containerName="glance-httpd" containerID="cri-o://eba3388b6217b672df9bdd08c3a6c98eb3af7448cf48f3d2ee833c4519993a83" gracePeriod=30 Dec 10 12:36:30 crc kubenswrapper[4689]: I1210 12:36:30.428817 4689 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 10 12:36:30 crc kubenswrapper[4689]: I1210 12:36:30.498873 4689 generic.go:334] "Generic (PLEG): container finished" podID="d23c5773-27cf-4f04-9e99-4c47b1131134" containerID="027325026e6b43817055ad15d70760291c7ab34d431d3790e95299f76e8f03ae" exitCode=0 Dec 10 12:36:30 crc kubenswrapper[4689]: I1210 12:36:30.498902 4689 generic.go:334] "Generic (PLEG): container finished" podID="d23c5773-27cf-4f04-9e99-4c47b1131134" containerID="fc10950449cd38a1e5409431d5554486cc8759fd4bfe2afee445026c35e310a2" exitCode=2 Dec 10 12:36:30 crc kubenswrapper[4689]: I1210 12:36:30.498909 4689 generic.go:334] "Generic (PLEG): container finished" podID="d23c5773-27cf-4f04-9e99-4c47b1131134" containerID="b9f12ea141af42c5d3306b4179960c35791dab9312a271097372c0b0a7ecbad2" exitCode=0 Dec 10 12:36:30 crc kubenswrapper[4689]: I1210 12:36:30.498917 4689 generic.go:334] "Generic (PLEG): container finished" podID="d23c5773-27cf-4f04-9e99-4c47b1131134" containerID="c7e459ae625716a5d3cdf4c144cec98b8f55fb300d7694f20a99c96a082f539a" exitCode=0 Dec 10 12:36:30 crc kubenswrapper[4689]: I1210 12:36:30.499056 4689 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 10 12:36:30 crc kubenswrapper[4689]: I1210 12:36:30.506258 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d23c5773-27cf-4f04-9e99-4c47b1131134-combined-ca-bundle\") pod \"d23c5773-27cf-4f04-9e99-4c47b1131134\" (UID: \"d23c5773-27cf-4f04-9e99-4c47b1131134\") " Dec 10 12:36:30 crc kubenswrapper[4689]: I1210 12:36:30.506391 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8hjj2\" (UniqueName: \"kubernetes.io/projected/d23c5773-27cf-4f04-9e99-4c47b1131134-kube-api-access-8hjj2\") pod \"d23c5773-27cf-4f04-9e99-4c47b1131134\" (UID: \"d23c5773-27cf-4f04-9e99-4c47b1131134\") " Dec 10 12:36:30 crc kubenswrapper[4689]: I1210 12:36:30.506436 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d23c5773-27cf-4f04-9e99-4c47b1131134-run-httpd\") pod \"d23c5773-27cf-4f04-9e99-4c47b1131134\" (UID: \"d23c5773-27cf-4f04-9e99-4c47b1131134\") " Dec 10 12:36:30 crc kubenswrapper[4689]: I1210 12:36:30.506510 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d23c5773-27cf-4f04-9e99-4c47b1131134-config-data\") pod \"d23c5773-27cf-4f04-9e99-4c47b1131134\" (UID: \"d23c5773-27cf-4f04-9e99-4c47b1131134\") " Dec 10 12:36:30 crc kubenswrapper[4689]: I1210 12:36:30.506536 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d23c5773-27cf-4f04-9e99-4c47b1131134-sg-core-conf-yaml\") pod \"d23c5773-27cf-4f04-9e99-4c47b1131134\" (UID: \"d23c5773-27cf-4f04-9e99-4c47b1131134\") " Dec 10 12:36:30 crc kubenswrapper[4689]: I1210 12:36:30.506580 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d23c5773-27cf-4f04-9e99-4c47b1131134-log-httpd\") pod \"d23c5773-27cf-4f04-9e99-4c47b1131134\" (UID: \"d23c5773-27cf-4f04-9e99-4c47b1131134\") " Dec 10 12:36:30 crc kubenswrapper[4689]: I1210 12:36:30.506668 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d23c5773-27cf-4f04-9e99-4c47b1131134-scripts\") pod \"d23c5773-27cf-4f04-9e99-4c47b1131134\" (UID: \"d23c5773-27cf-4f04-9e99-4c47b1131134\") " Dec 10 12:36:30 crc kubenswrapper[4689]: I1210 12:36:30.506703 4689 generic.go:334] "Generic (PLEG): container finished" podID="cf5e0487-a380-424f-aa29-f815b50550db" containerID="2a55af2cef7a7e1d0e6e21d2022cf74765424cde599a0aa0a053d82425f86759" exitCode=143 Dec 10 12:36:30 crc kubenswrapper[4689]: I1210 12:36:30.514565 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d23c5773-27cf-4f04-9e99-4c47b1131134-scripts" (OuterVolumeSpecName: "scripts") pod "d23c5773-27cf-4f04-9e99-4c47b1131134" (UID: "d23c5773-27cf-4f04-9e99-4c47b1131134"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 12:36:30 crc kubenswrapper[4689]: I1210 12:36:30.515185 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d23c5773-27cf-4f04-9e99-4c47b1131134-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "d23c5773-27cf-4f04-9e99-4c47b1131134" (UID: "d23c5773-27cf-4f04-9e99-4c47b1131134"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 10 12:36:30 crc kubenswrapper[4689]: I1210 12:36:30.515285 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d23c5773-27cf-4f04-9e99-4c47b1131134-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "d23c5773-27cf-4f04-9e99-4c47b1131134" (UID: "d23c5773-27cf-4f04-9e99-4c47b1131134"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 10 12:36:30 crc kubenswrapper[4689]: I1210 12:36:30.518870 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d23c5773-27cf-4f04-9e99-4c47b1131134-kube-api-access-8hjj2" (OuterVolumeSpecName: "kube-api-access-8hjj2") pod "d23c5773-27cf-4f04-9e99-4c47b1131134" (UID: "d23c5773-27cf-4f04-9e99-4c47b1131134"). InnerVolumeSpecName "kube-api-access-8hjj2". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 12:36:30 crc kubenswrapper[4689]: I1210 12:36:30.523030 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d23c5773-27cf-4f04-9e99-4c47b1131134","Type":"ContainerDied","Data":"027325026e6b43817055ad15d70760291c7ab34d431d3790e95299f76e8f03ae"} Dec 10 12:36:30 crc kubenswrapper[4689]: I1210 12:36:30.523080 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d23c5773-27cf-4f04-9e99-4c47b1131134","Type":"ContainerDied","Data":"fc10950449cd38a1e5409431d5554486cc8759fd4bfe2afee445026c35e310a2"} Dec 10 12:36:30 crc kubenswrapper[4689]: I1210 12:36:30.523090 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d23c5773-27cf-4f04-9e99-4c47b1131134","Type":"ContainerDied","Data":"b9f12ea141af42c5d3306b4179960c35791dab9312a271097372c0b0a7ecbad2"} Dec 10 12:36:30 crc kubenswrapper[4689]: I1210 12:36:30.523099 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d23c5773-27cf-4f04-9e99-4c47b1131134","Type":"ContainerDied","Data":"c7e459ae625716a5d3cdf4c144cec98b8f55fb300d7694f20a99c96a082f539a"} Dec 10 12:36:30 crc kubenswrapper[4689]: I1210 12:36:30.523108 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d23c5773-27cf-4f04-9e99-4c47b1131134","Type":"ContainerDied","Data":"853708b40cfed23f2d2ec88357783e81357400b9f5ffb12af015fbe75df5240c"} Dec 10 12:36:30 crc kubenswrapper[4689]: I1210 12:36:30.523122 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"cf5e0487-a380-424f-aa29-f815b50550db","Type":"ContainerDied","Data":"2a55af2cef7a7e1d0e6e21d2022cf74765424cde599a0aa0a053d82425f86759"} Dec 10 12:36:30 crc kubenswrapper[4689]: I1210 12:36:30.523142 4689 scope.go:117] "RemoveContainer" containerID="027325026e6b43817055ad15d70760291c7ab34d431d3790e95299f76e8f03ae" Dec 10 12:36:30 crc kubenswrapper[4689]: I1210 12:36:30.553682 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d23c5773-27cf-4f04-9e99-4c47b1131134-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "d23c5773-27cf-4f04-9e99-4c47b1131134" (UID: "d23c5773-27cf-4f04-9e99-4c47b1131134"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 12:36:30 crc kubenswrapper[4689]: I1210 12:36:30.564387 4689 scope.go:117] "RemoveContainer" containerID="fc10950449cd38a1e5409431d5554486cc8759fd4bfe2afee445026c35e310a2" Dec 10 12:36:30 crc kubenswrapper[4689]: I1210 12:36:30.596096 4689 scope.go:117] "RemoveContainer" containerID="b9f12ea141af42c5d3306b4179960c35791dab9312a271097372c0b0a7ecbad2" Dec 10 12:36:30 crc kubenswrapper[4689]: I1210 12:36:30.604137 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d23c5773-27cf-4f04-9e99-4c47b1131134-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d23c5773-27cf-4f04-9e99-4c47b1131134" (UID: "d23c5773-27cf-4f04-9e99-4c47b1131134"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 12:36:30 crc kubenswrapper[4689]: I1210 12:36:30.608936 4689 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8hjj2\" (UniqueName: \"kubernetes.io/projected/d23c5773-27cf-4f04-9e99-4c47b1131134-kube-api-access-8hjj2\") on node \"crc\" DevicePath \"\"" Dec 10 12:36:30 crc kubenswrapper[4689]: I1210 12:36:30.608959 4689 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d23c5773-27cf-4f04-9e99-4c47b1131134-run-httpd\") on node \"crc\" DevicePath \"\"" Dec 10 12:36:30 crc kubenswrapper[4689]: I1210 12:36:30.608979 4689 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d23c5773-27cf-4f04-9e99-4c47b1131134-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Dec 10 12:36:30 crc kubenswrapper[4689]: I1210 12:36:30.608988 4689 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d23c5773-27cf-4f04-9e99-4c47b1131134-log-httpd\") on node \"crc\" DevicePath \"\"" Dec 10 12:36:30 crc kubenswrapper[4689]: I1210 12:36:30.608997 4689 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d23c5773-27cf-4f04-9e99-4c47b1131134-scripts\") on node \"crc\" DevicePath \"\"" Dec 10 12:36:30 crc kubenswrapper[4689]: I1210 12:36:30.609007 4689 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d23c5773-27cf-4f04-9e99-4c47b1131134-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 10 12:36:30 crc kubenswrapper[4689]: I1210 12:36:30.657952 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d23c5773-27cf-4f04-9e99-4c47b1131134-config-data" (OuterVolumeSpecName: "config-data") pod "d23c5773-27cf-4f04-9e99-4c47b1131134" (UID: "d23c5773-27cf-4f04-9e99-4c47b1131134"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 12:36:30 crc kubenswrapper[4689]: I1210 12:36:30.711006 4689 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d23c5773-27cf-4f04-9e99-4c47b1131134-config-data\") on node \"crc\" DevicePath \"\"" Dec 10 12:36:30 crc kubenswrapper[4689]: I1210 12:36:30.720389 4689 scope.go:117] "RemoveContainer" containerID="c7e459ae625716a5d3cdf4c144cec98b8f55fb300d7694f20a99c96a082f539a" Dec 10 12:36:30 crc kubenswrapper[4689]: I1210 12:36:30.749134 4689 scope.go:117] "RemoveContainer" containerID="027325026e6b43817055ad15d70760291c7ab34d431d3790e95299f76e8f03ae" Dec 10 12:36:30 crc kubenswrapper[4689]: E1210 12:36:30.749552 4689 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"027325026e6b43817055ad15d70760291c7ab34d431d3790e95299f76e8f03ae\": container with ID starting with 027325026e6b43817055ad15d70760291c7ab34d431d3790e95299f76e8f03ae not found: ID does not exist" containerID="027325026e6b43817055ad15d70760291c7ab34d431d3790e95299f76e8f03ae" Dec 10 12:36:30 crc kubenswrapper[4689]: I1210 12:36:30.749583 4689 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"027325026e6b43817055ad15d70760291c7ab34d431d3790e95299f76e8f03ae"} err="failed to get container status \"027325026e6b43817055ad15d70760291c7ab34d431d3790e95299f76e8f03ae\": rpc error: code = NotFound desc = could not find container \"027325026e6b43817055ad15d70760291c7ab34d431d3790e95299f76e8f03ae\": container with ID starting with 027325026e6b43817055ad15d70760291c7ab34d431d3790e95299f76e8f03ae not found: ID does not exist" Dec 10 12:36:30 crc kubenswrapper[4689]: I1210 12:36:30.749601 4689 scope.go:117] "RemoveContainer" containerID="fc10950449cd38a1e5409431d5554486cc8759fd4bfe2afee445026c35e310a2" Dec 10 12:36:30 crc kubenswrapper[4689]: E1210 12:36:30.751629 4689 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fc10950449cd38a1e5409431d5554486cc8759fd4bfe2afee445026c35e310a2\": container with ID starting with fc10950449cd38a1e5409431d5554486cc8759fd4bfe2afee445026c35e310a2 not found: ID does not exist" containerID="fc10950449cd38a1e5409431d5554486cc8759fd4bfe2afee445026c35e310a2" Dec 10 12:36:30 crc kubenswrapper[4689]: I1210 12:36:30.751802 4689 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fc10950449cd38a1e5409431d5554486cc8759fd4bfe2afee445026c35e310a2"} err="failed to get container status \"fc10950449cd38a1e5409431d5554486cc8759fd4bfe2afee445026c35e310a2\": rpc error: code = NotFound desc = could not find container \"fc10950449cd38a1e5409431d5554486cc8759fd4bfe2afee445026c35e310a2\": container with ID starting with fc10950449cd38a1e5409431d5554486cc8759fd4bfe2afee445026c35e310a2 not found: ID does not exist" Dec 10 12:36:30 crc kubenswrapper[4689]: I1210 12:36:30.752557 4689 scope.go:117] "RemoveContainer" containerID="b9f12ea141af42c5d3306b4179960c35791dab9312a271097372c0b0a7ecbad2" Dec 10 12:36:30 crc kubenswrapper[4689]: E1210 12:36:30.753193 4689 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b9f12ea141af42c5d3306b4179960c35791dab9312a271097372c0b0a7ecbad2\": container with ID starting with b9f12ea141af42c5d3306b4179960c35791dab9312a271097372c0b0a7ecbad2 not found: ID does not exist" 
containerID="b9f12ea141af42c5d3306b4179960c35791dab9312a271097372c0b0a7ecbad2" Dec 10 12:36:30 crc kubenswrapper[4689]: I1210 12:36:30.753222 4689 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b9f12ea141af42c5d3306b4179960c35791dab9312a271097372c0b0a7ecbad2"} err="failed to get container status \"b9f12ea141af42c5d3306b4179960c35791dab9312a271097372c0b0a7ecbad2\": rpc error: code = NotFound desc = could not find container \"b9f12ea141af42c5d3306b4179960c35791dab9312a271097372c0b0a7ecbad2\": container with ID starting with b9f12ea141af42c5d3306b4179960c35791dab9312a271097372c0b0a7ecbad2 not found: ID does not exist" Dec 10 12:36:30 crc kubenswrapper[4689]: I1210 12:36:30.753239 4689 scope.go:117] "RemoveContainer" containerID="c7e459ae625716a5d3cdf4c144cec98b8f55fb300d7694f20a99c96a082f539a" Dec 10 12:36:30 crc kubenswrapper[4689]: E1210 12:36:30.754222 4689 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c7e459ae625716a5d3cdf4c144cec98b8f55fb300d7694f20a99c96a082f539a\": container with ID starting with c7e459ae625716a5d3cdf4c144cec98b8f55fb300d7694f20a99c96a082f539a not found: ID does not exist" containerID="c7e459ae625716a5d3cdf4c144cec98b8f55fb300d7694f20a99c96a082f539a" Dec 10 12:36:30 crc kubenswrapper[4689]: I1210 12:36:30.754273 4689 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c7e459ae625716a5d3cdf4c144cec98b8f55fb300d7694f20a99c96a082f539a"} err="failed to get container status \"c7e459ae625716a5d3cdf4c144cec98b8f55fb300d7694f20a99c96a082f539a\": rpc error: code = NotFound desc = could not find container \"c7e459ae625716a5d3cdf4c144cec98b8f55fb300d7694f20a99c96a082f539a\": container with ID starting with c7e459ae625716a5d3cdf4c144cec98b8f55fb300d7694f20a99c96a082f539a not found: ID does not exist" Dec 10 12:36:30 crc kubenswrapper[4689]: I1210 12:36:30.754291 4689 scope.go:117] "RemoveContainer" containerID="027325026e6b43817055ad15d70760291c7ab34d431d3790e95299f76e8f03ae" Dec 10 12:36:30 crc kubenswrapper[4689]: I1210 12:36:30.754667 4689 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"027325026e6b43817055ad15d70760291c7ab34d431d3790e95299f76e8f03ae"} err="failed to get container status \"027325026e6b43817055ad15d70760291c7ab34d431d3790e95299f76e8f03ae\": rpc error: code = NotFound desc = could not find container \"027325026e6b43817055ad15d70760291c7ab34d431d3790e95299f76e8f03ae\": container with ID starting with 027325026e6b43817055ad15d70760291c7ab34d431d3790e95299f76e8f03ae not found: ID does not exist" Dec 10 12:36:30 crc kubenswrapper[4689]: I1210 12:36:30.754714 4689 scope.go:117] "RemoveContainer" containerID="fc10950449cd38a1e5409431d5554486cc8759fd4bfe2afee445026c35e310a2" Dec 10 12:36:30 crc kubenswrapper[4689]: I1210 12:36:30.755361 4689 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fc10950449cd38a1e5409431d5554486cc8759fd4bfe2afee445026c35e310a2"} err="failed to get container status \"fc10950449cd38a1e5409431d5554486cc8759fd4bfe2afee445026c35e310a2\": rpc error: code = NotFound desc = could not find container \"fc10950449cd38a1e5409431d5554486cc8759fd4bfe2afee445026c35e310a2\": container with ID starting with fc10950449cd38a1e5409431d5554486cc8759fd4bfe2afee445026c35e310a2 not found: ID does not exist" Dec 10 12:36:30 crc kubenswrapper[4689]: I1210 12:36:30.755404 4689 scope.go:117] "RemoveContainer" 
containerID="b9f12ea141af42c5d3306b4179960c35791dab9312a271097372c0b0a7ecbad2" Dec 10 12:36:30 crc kubenswrapper[4689]: I1210 12:36:30.756671 4689 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b9f12ea141af42c5d3306b4179960c35791dab9312a271097372c0b0a7ecbad2"} err="failed to get container status \"b9f12ea141af42c5d3306b4179960c35791dab9312a271097372c0b0a7ecbad2\": rpc error: code = NotFound desc = could not find container \"b9f12ea141af42c5d3306b4179960c35791dab9312a271097372c0b0a7ecbad2\": container with ID starting with b9f12ea141af42c5d3306b4179960c35791dab9312a271097372c0b0a7ecbad2 not found: ID does not exist" Dec 10 12:36:30 crc kubenswrapper[4689]: I1210 12:36:30.756697 4689 scope.go:117] "RemoveContainer" containerID="c7e459ae625716a5d3cdf4c144cec98b8f55fb300d7694f20a99c96a082f539a" Dec 10 12:36:30 crc kubenswrapper[4689]: I1210 12:36:30.757339 4689 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c7e459ae625716a5d3cdf4c144cec98b8f55fb300d7694f20a99c96a082f539a"} err="failed to get container status \"c7e459ae625716a5d3cdf4c144cec98b8f55fb300d7694f20a99c96a082f539a\": rpc error: code = NotFound desc = could not find container \"c7e459ae625716a5d3cdf4c144cec98b8f55fb300d7694f20a99c96a082f539a\": container with ID starting with c7e459ae625716a5d3cdf4c144cec98b8f55fb300d7694f20a99c96a082f539a not found: ID does not exist" Dec 10 12:36:30 crc kubenswrapper[4689]: I1210 12:36:30.757361 4689 scope.go:117] "RemoveContainer" containerID="027325026e6b43817055ad15d70760291c7ab34d431d3790e95299f76e8f03ae" Dec 10 12:36:30 crc kubenswrapper[4689]: I1210 12:36:30.757712 4689 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"027325026e6b43817055ad15d70760291c7ab34d431d3790e95299f76e8f03ae"} err="failed to get container status \"027325026e6b43817055ad15d70760291c7ab34d431d3790e95299f76e8f03ae\": rpc error: code = NotFound desc = could not find container \"027325026e6b43817055ad15d70760291c7ab34d431d3790e95299f76e8f03ae\": container with ID starting with 027325026e6b43817055ad15d70760291c7ab34d431d3790e95299f76e8f03ae not found: ID does not exist" Dec 10 12:36:30 crc kubenswrapper[4689]: I1210 12:36:30.757736 4689 scope.go:117] "RemoveContainer" containerID="fc10950449cd38a1e5409431d5554486cc8759fd4bfe2afee445026c35e310a2" Dec 10 12:36:30 crc kubenswrapper[4689]: I1210 12:36:30.758084 4689 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fc10950449cd38a1e5409431d5554486cc8759fd4bfe2afee445026c35e310a2"} err="failed to get container status \"fc10950449cd38a1e5409431d5554486cc8759fd4bfe2afee445026c35e310a2\": rpc error: code = NotFound desc = could not find container \"fc10950449cd38a1e5409431d5554486cc8759fd4bfe2afee445026c35e310a2\": container with ID starting with fc10950449cd38a1e5409431d5554486cc8759fd4bfe2afee445026c35e310a2 not found: ID does not exist" Dec 10 12:36:30 crc kubenswrapper[4689]: I1210 12:36:30.758104 4689 scope.go:117] "RemoveContainer" containerID="b9f12ea141af42c5d3306b4179960c35791dab9312a271097372c0b0a7ecbad2" Dec 10 12:36:30 crc kubenswrapper[4689]: I1210 12:36:30.758450 4689 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b9f12ea141af42c5d3306b4179960c35791dab9312a271097372c0b0a7ecbad2"} err="failed to get container status \"b9f12ea141af42c5d3306b4179960c35791dab9312a271097372c0b0a7ecbad2\": rpc error: code = NotFound desc = could not find 
container \"b9f12ea141af42c5d3306b4179960c35791dab9312a271097372c0b0a7ecbad2\": container with ID starting with b9f12ea141af42c5d3306b4179960c35791dab9312a271097372c0b0a7ecbad2 not found: ID does not exist" Dec 10 12:36:30 crc kubenswrapper[4689]: I1210 12:36:30.758485 4689 scope.go:117] "RemoveContainer" containerID="c7e459ae625716a5d3cdf4c144cec98b8f55fb300d7694f20a99c96a082f539a" Dec 10 12:36:30 crc kubenswrapper[4689]: I1210 12:36:30.758759 4689 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c7e459ae625716a5d3cdf4c144cec98b8f55fb300d7694f20a99c96a082f539a"} err="failed to get container status \"c7e459ae625716a5d3cdf4c144cec98b8f55fb300d7694f20a99c96a082f539a\": rpc error: code = NotFound desc = could not find container \"c7e459ae625716a5d3cdf4c144cec98b8f55fb300d7694f20a99c96a082f539a\": container with ID starting with c7e459ae625716a5d3cdf4c144cec98b8f55fb300d7694f20a99c96a082f539a not found: ID does not exist" Dec 10 12:36:30 crc kubenswrapper[4689]: I1210 12:36:30.758785 4689 scope.go:117] "RemoveContainer" containerID="027325026e6b43817055ad15d70760291c7ab34d431d3790e95299f76e8f03ae" Dec 10 12:36:30 crc kubenswrapper[4689]: I1210 12:36:30.760338 4689 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"027325026e6b43817055ad15d70760291c7ab34d431d3790e95299f76e8f03ae"} err="failed to get container status \"027325026e6b43817055ad15d70760291c7ab34d431d3790e95299f76e8f03ae\": rpc error: code = NotFound desc = could not find container \"027325026e6b43817055ad15d70760291c7ab34d431d3790e95299f76e8f03ae\": container with ID starting with 027325026e6b43817055ad15d70760291c7ab34d431d3790e95299f76e8f03ae not found: ID does not exist" Dec 10 12:36:30 crc kubenswrapper[4689]: I1210 12:36:30.760383 4689 scope.go:117] "RemoveContainer" containerID="fc10950449cd38a1e5409431d5554486cc8759fd4bfe2afee445026c35e310a2" Dec 10 12:36:30 crc kubenswrapper[4689]: I1210 12:36:30.763350 4689 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fc10950449cd38a1e5409431d5554486cc8759fd4bfe2afee445026c35e310a2"} err="failed to get container status \"fc10950449cd38a1e5409431d5554486cc8759fd4bfe2afee445026c35e310a2\": rpc error: code = NotFound desc = could not find container \"fc10950449cd38a1e5409431d5554486cc8759fd4bfe2afee445026c35e310a2\": container with ID starting with fc10950449cd38a1e5409431d5554486cc8759fd4bfe2afee445026c35e310a2 not found: ID does not exist" Dec 10 12:36:30 crc kubenswrapper[4689]: I1210 12:36:30.763383 4689 scope.go:117] "RemoveContainer" containerID="b9f12ea141af42c5d3306b4179960c35791dab9312a271097372c0b0a7ecbad2" Dec 10 12:36:30 crc kubenswrapper[4689]: I1210 12:36:30.765783 4689 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b9f12ea141af42c5d3306b4179960c35791dab9312a271097372c0b0a7ecbad2"} err="failed to get container status \"b9f12ea141af42c5d3306b4179960c35791dab9312a271097372c0b0a7ecbad2\": rpc error: code = NotFound desc = could not find container \"b9f12ea141af42c5d3306b4179960c35791dab9312a271097372c0b0a7ecbad2\": container with ID starting with b9f12ea141af42c5d3306b4179960c35791dab9312a271097372c0b0a7ecbad2 not found: ID does not exist" Dec 10 12:36:30 crc kubenswrapper[4689]: I1210 12:36:30.765817 4689 scope.go:117] "RemoveContainer" containerID="c7e459ae625716a5d3cdf4c144cec98b8f55fb300d7694f20a99c96a082f539a" Dec 10 12:36:30 crc kubenswrapper[4689]: I1210 12:36:30.769435 4689 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c7e459ae625716a5d3cdf4c144cec98b8f55fb300d7694f20a99c96a082f539a"} err="failed to get container status \"c7e459ae625716a5d3cdf4c144cec98b8f55fb300d7694f20a99c96a082f539a\": rpc error: code = NotFound desc = could not find container \"c7e459ae625716a5d3cdf4c144cec98b8f55fb300d7694f20a99c96a082f539a\": container with ID starting with c7e459ae625716a5d3cdf4c144cec98b8f55fb300d7694f20a99c96a082f539a not found: ID does not exist" Dec 10 12:36:30 crc kubenswrapper[4689]: I1210 12:36:30.830785 4689 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 10 12:36:30 crc kubenswrapper[4689]: I1210 12:36:30.848429 4689 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Dec 10 12:36:30 crc kubenswrapper[4689]: I1210 12:36:30.856099 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Dec 10 12:36:30 crc kubenswrapper[4689]: E1210 12:36:30.856586 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d23c5773-27cf-4f04-9e99-4c47b1131134" containerName="ceilometer-notification-agent" Dec 10 12:36:30 crc kubenswrapper[4689]: I1210 12:36:30.856601 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="d23c5773-27cf-4f04-9e99-4c47b1131134" containerName="ceilometer-notification-agent" Dec 10 12:36:30 crc kubenswrapper[4689]: E1210 12:36:30.856620 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d23c5773-27cf-4f04-9e99-4c47b1131134" containerName="proxy-httpd" Dec 10 12:36:30 crc kubenswrapper[4689]: I1210 12:36:30.856626 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="d23c5773-27cf-4f04-9e99-4c47b1131134" containerName="proxy-httpd" Dec 10 12:36:30 crc kubenswrapper[4689]: E1210 12:36:30.856639 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d23c5773-27cf-4f04-9e99-4c47b1131134" containerName="ceilometer-central-agent" Dec 10 12:36:30 crc kubenswrapper[4689]: I1210 12:36:30.856646 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="d23c5773-27cf-4f04-9e99-4c47b1131134" containerName="ceilometer-central-agent" Dec 10 12:36:30 crc kubenswrapper[4689]: E1210 12:36:30.856665 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d23c5773-27cf-4f04-9e99-4c47b1131134" containerName="sg-core" Dec 10 12:36:30 crc kubenswrapper[4689]: I1210 12:36:30.856671 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="d23c5773-27cf-4f04-9e99-4c47b1131134" containerName="sg-core" Dec 10 12:36:30 crc kubenswrapper[4689]: I1210 12:36:30.856876 4689 memory_manager.go:354] "RemoveStaleState removing state" podUID="d23c5773-27cf-4f04-9e99-4c47b1131134" containerName="ceilometer-notification-agent" Dec 10 12:36:30 crc kubenswrapper[4689]: I1210 12:36:30.856890 4689 memory_manager.go:354] "RemoveStaleState removing state" podUID="d23c5773-27cf-4f04-9e99-4c47b1131134" containerName="ceilometer-central-agent" Dec 10 12:36:30 crc kubenswrapper[4689]: I1210 12:36:30.856912 4689 memory_manager.go:354] "RemoveStaleState removing state" podUID="d23c5773-27cf-4f04-9e99-4c47b1131134" containerName="proxy-httpd" Dec 10 12:36:30 crc kubenswrapper[4689]: I1210 12:36:30.856925 4689 memory_manager.go:354] "RemoveStaleState removing state" podUID="d23c5773-27cf-4f04-9e99-4c47b1131134" containerName="sg-core" Dec 10 12:36:30 crc kubenswrapper[4689]: I1210 12:36:30.858635 4689 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 10 12:36:30 crc kubenswrapper[4689]: I1210 12:36:30.864658 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Dec 10 12:36:30 crc kubenswrapper[4689]: I1210 12:36:30.865055 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Dec 10 12:36:30 crc kubenswrapper[4689]: I1210 12:36:30.868433 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 10 12:36:30 crc kubenswrapper[4689]: I1210 12:36:30.914292 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xw28w\" (UniqueName: \"kubernetes.io/projected/03149d4d-fcc6-47bd-892e-91f14288acbf-kube-api-access-xw28w\") pod \"ceilometer-0\" (UID: \"03149d4d-fcc6-47bd-892e-91f14288acbf\") " pod="openstack/ceilometer-0" Dec 10 12:36:30 crc kubenswrapper[4689]: I1210 12:36:30.914344 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/03149d4d-fcc6-47bd-892e-91f14288acbf-config-data\") pod \"ceilometer-0\" (UID: \"03149d4d-fcc6-47bd-892e-91f14288acbf\") " pod="openstack/ceilometer-0" Dec 10 12:36:30 crc kubenswrapper[4689]: I1210 12:36:30.914372 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/03149d4d-fcc6-47bd-892e-91f14288acbf-run-httpd\") pod \"ceilometer-0\" (UID: \"03149d4d-fcc6-47bd-892e-91f14288acbf\") " pod="openstack/ceilometer-0" Dec 10 12:36:30 crc kubenswrapper[4689]: I1210 12:36:30.914542 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/03149d4d-fcc6-47bd-892e-91f14288acbf-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"03149d4d-fcc6-47bd-892e-91f14288acbf\") " pod="openstack/ceilometer-0" Dec 10 12:36:30 crc kubenswrapper[4689]: I1210 12:36:30.914615 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/03149d4d-fcc6-47bd-892e-91f14288acbf-scripts\") pod \"ceilometer-0\" (UID: \"03149d4d-fcc6-47bd-892e-91f14288acbf\") " pod="openstack/ceilometer-0" Dec 10 12:36:30 crc kubenswrapper[4689]: I1210 12:36:30.914709 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/03149d4d-fcc6-47bd-892e-91f14288acbf-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"03149d4d-fcc6-47bd-892e-91f14288acbf\") " pod="openstack/ceilometer-0" Dec 10 12:36:30 crc kubenswrapper[4689]: I1210 12:36:30.914807 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/03149d4d-fcc6-47bd-892e-91f14288acbf-log-httpd\") pod \"ceilometer-0\" (UID: \"03149d4d-fcc6-47bd-892e-91f14288acbf\") " pod="openstack/ceilometer-0" Dec 10 12:36:31 crc kubenswrapper[4689]: I1210 12:36:31.016345 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xw28w\" (UniqueName: \"kubernetes.io/projected/03149d4d-fcc6-47bd-892e-91f14288acbf-kube-api-access-xw28w\") pod \"ceilometer-0\" (UID: \"03149d4d-fcc6-47bd-892e-91f14288acbf\") " pod="openstack/ceilometer-0" Dec 10 12:36:31 crc kubenswrapper[4689]: 
I1210 12:36:31.016401 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/03149d4d-fcc6-47bd-892e-91f14288acbf-config-data\") pod \"ceilometer-0\" (UID: \"03149d4d-fcc6-47bd-892e-91f14288acbf\") " pod="openstack/ceilometer-0" Dec 10 12:36:31 crc kubenswrapper[4689]: I1210 12:36:31.016429 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/03149d4d-fcc6-47bd-892e-91f14288acbf-run-httpd\") pod \"ceilometer-0\" (UID: \"03149d4d-fcc6-47bd-892e-91f14288acbf\") " pod="openstack/ceilometer-0" Dec 10 12:36:31 crc kubenswrapper[4689]: I1210 12:36:31.016467 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/03149d4d-fcc6-47bd-892e-91f14288acbf-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"03149d4d-fcc6-47bd-892e-91f14288acbf\") " pod="openstack/ceilometer-0" Dec 10 12:36:31 crc kubenswrapper[4689]: I1210 12:36:31.016491 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/03149d4d-fcc6-47bd-892e-91f14288acbf-scripts\") pod \"ceilometer-0\" (UID: \"03149d4d-fcc6-47bd-892e-91f14288acbf\") " pod="openstack/ceilometer-0" Dec 10 12:36:31 crc kubenswrapper[4689]: I1210 12:36:31.016543 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/03149d4d-fcc6-47bd-892e-91f14288acbf-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"03149d4d-fcc6-47bd-892e-91f14288acbf\") " pod="openstack/ceilometer-0" Dec 10 12:36:31 crc kubenswrapper[4689]: I1210 12:36:31.016591 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/03149d4d-fcc6-47bd-892e-91f14288acbf-log-httpd\") pod \"ceilometer-0\" (UID: \"03149d4d-fcc6-47bd-892e-91f14288acbf\") " pod="openstack/ceilometer-0" Dec 10 12:36:31 crc kubenswrapper[4689]: I1210 12:36:31.017074 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/03149d4d-fcc6-47bd-892e-91f14288acbf-log-httpd\") pod \"ceilometer-0\" (UID: \"03149d4d-fcc6-47bd-892e-91f14288acbf\") " pod="openstack/ceilometer-0" Dec 10 12:36:31 crc kubenswrapper[4689]: I1210 12:36:31.017469 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/03149d4d-fcc6-47bd-892e-91f14288acbf-run-httpd\") pod \"ceilometer-0\" (UID: \"03149d4d-fcc6-47bd-892e-91f14288acbf\") " pod="openstack/ceilometer-0" Dec 10 12:36:31 crc kubenswrapper[4689]: I1210 12:36:31.020595 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/03149d4d-fcc6-47bd-892e-91f14288acbf-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"03149d4d-fcc6-47bd-892e-91f14288acbf\") " pod="openstack/ceilometer-0" Dec 10 12:36:31 crc kubenswrapper[4689]: I1210 12:36:31.020663 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/03149d4d-fcc6-47bd-892e-91f14288acbf-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"03149d4d-fcc6-47bd-892e-91f14288acbf\") " pod="openstack/ceilometer-0" Dec 10 12:36:31 crc kubenswrapper[4689]: I1210 12:36:31.023700 4689 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/03149d4d-fcc6-47bd-892e-91f14288acbf-scripts\") pod \"ceilometer-0\" (UID: \"03149d4d-fcc6-47bd-892e-91f14288acbf\") " pod="openstack/ceilometer-0" Dec 10 12:36:31 crc kubenswrapper[4689]: I1210 12:36:31.038963 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/03149d4d-fcc6-47bd-892e-91f14288acbf-config-data\") pod \"ceilometer-0\" (UID: \"03149d4d-fcc6-47bd-892e-91f14288acbf\") " pod="openstack/ceilometer-0" Dec 10 12:36:31 crc kubenswrapper[4689]: I1210 12:36:31.048630 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xw28w\" (UniqueName: \"kubernetes.io/projected/03149d4d-fcc6-47bd-892e-91f14288acbf-kube-api-access-xw28w\") pod \"ceilometer-0\" (UID: \"03149d4d-fcc6-47bd-892e-91f14288acbf\") " pod="openstack/ceilometer-0" Dec 10 12:36:31 crc kubenswrapper[4689]: I1210 12:36:31.179217 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 10 12:36:31 crc kubenswrapper[4689]: I1210 12:36:31.627076 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 10 12:36:31 crc kubenswrapper[4689]: W1210 12:36:31.642493 4689 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod03149d4d_fcc6_47bd_892e_91f14288acbf.slice/crio-3e35a7ed967253d1cd6befa1f65438339530613e9665c4782ac813164de148fc WatchSource:0}: Error finding container 3e35a7ed967253d1cd6befa1f65438339530613e9665c4782ac813164de148fc: Status 404 returned error can't find the container with id 3e35a7ed967253d1cd6befa1f65438339530613e9665c4782ac813164de148fc Dec 10 12:36:32 crc kubenswrapper[4689]: I1210 12:36:32.513336 4689 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d23c5773-27cf-4f04-9e99-4c47b1131134" path="/var/lib/kubelet/pods/d23c5773-27cf-4f04-9e99-4c47b1131134/volumes" Dec 10 12:36:32 crc kubenswrapper[4689]: I1210 12:36:32.549056 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"03149d4d-fcc6-47bd-892e-91f14288acbf","Type":"ContainerStarted","Data":"3e35a7ed967253d1cd6befa1f65438339530613e9665c4782ac813164de148fc"} Dec 10 12:36:32 crc kubenswrapper[4689]: I1210 12:36:32.565256 4689 generic.go:334] "Generic (PLEG): container finished" podID="aac151fb-e17c-4bd3-b2c9-bd1ed2e01c7c" containerID="f3fc097cd14dd750789d612bb5c1f1ac70f37fcc67df585c29527ae53325bc07" exitCode=0 Dec 10 12:36:32 crc kubenswrapper[4689]: I1210 12:36:32.565324 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"aac151fb-e17c-4bd3-b2c9-bd1ed2e01c7c","Type":"ContainerDied","Data":"f3fc097cd14dd750789d612bb5c1f1ac70f37fcc67df585c29527ae53325bc07"} Dec 10 12:36:32 crc kubenswrapper[4689]: I1210 12:36:32.581436 4689 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 10 12:36:33 crc kubenswrapper[4689]: I1210 12:36:33.581750 4689 generic.go:334] "Generic (PLEG): container finished" podID="cf5e0487-a380-424f-aa29-f815b50550db" containerID="eba3388b6217b672df9bdd08c3a6c98eb3af7448cf48f3d2ee833c4519993a83" exitCode=0 Dec 10 12:36:33 crc kubenswrapper[4689]: I1210 12:36:33.581960 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" 
event={"ID":"cf5e0487-a380-424f-aa29-f815b50550db","Type":"ContainerDied","Data":"eba3388b6217b672df9bdd08c3a6c98eb3af7448cf48f3d2ee833c4519993a83"} Dec 10 12:36:34 crc kubenswrapper[4689]: I1210 12:36:34.928112 4689 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Dec 10 12:36:34 crc kubenswrapper[4689]: I1210 12:36:34.986147 4689 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Dec 10 12:36:35 crc kubenswrapper[4689]: I1210 12:36:35.034775 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aac151fb-e17c-4bd3-b2c9-bd1ed2e01c7c-combined-ca-bundle\") pod \"aac151fb-e17c-4bd3-b2c9-bd1ed2e01c7c\" (UID: \"aac151fb-e17c-4bd3-b2c9-bd1ed2e01c7c\") " Dec 10 12:36:35 crc kubenswrapper[4689]: I1210 12:36:35.034842 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/aac151fb-e17c-4bd3-b2c9-bd1ed2e01c7c-httpd-run\") pod \"aac151fb-e17c-4bd3-b2c9-bd1ed2e01c7c\" (UID: \"aac151fb-e17c-4bd3-b2c9-bd1ed2e01c7c\") " Dec 10 12:36:35 crc kubenswrapper[4689]: I1210 12:36:35.034957 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xkwvg\" (UniqueName: \"kubernetes.io/projected/aac151fb-e17c-4bd3-b2c9-bd1ed2e01c7c-kube-api-access-xkwvg\") pod \"aac151fb-e17c-4bd3-b2c9-bd1ed2e01c7c\" (UID: \"aac151fb-e17c-4bd3-b2c9-bd1ed2e01c7c\") " Dec 10 12:36:35 crc kubenswrapper[4689]: I1210 12:36:35.035003 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/aac151fb-e17c-4bd3-b2c9-bd1ed2e01c7c-logs\") pod \"aac151fb-e17c-4bd3-b2c9-bd1ed2e01c7c\" (UID: \"aac151fb-e17c-4bd3-b2c9-bd1ed2e01c7c\") " Dec 10 12:36:35 crc kubenswrapper[4689]: I1210 12:36:35.035031 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"aac151fb-e17c-4bd3-b2c9-bd1ed2e01c7c\" (UID: \"aac151fb-e17c-4bd3-b2c9-bd1ed2e01c7c\") " Dec 10 12:36:35 crc kubenswrapper[4689]: I1210 12:36:35.035057 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/aac151fb-e17c-4bd3-b2c9-bd1ed2e01c7c-public-tls-certs\") pod \"aac151fb-e17c-4bd3-b2c9-bd1ed2e01c7c\" (UID: \"aac151fb-e17c-4bd3-b2c9-bd1ed2e01c7c\") " Dec 10 12:36:35 crc kubenswrapper[4689]: I1210 12:36:35.035102 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/aac151fb-e17c-4bd3-b2c9-bd1ed2e01c7c-scripts\") pod \"aac151fb-e17c-4bd3-b2c9-bd1ed2e01c7c\" (UID: \"aac151fb-e17c-4bd3-b2c9-bd1ed2e01c7c\") " Dec 10 12:36:35 crc kubenswrapper[4689]: I1210 12:36:35.035138 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aac151fb-e17c-4bd3-b2c9-bd1ed2e01c7c-config-data\") pod \"aac151fb-e17c-4bd3-b2c9-bd1ed2e01c7c\" (UID: \"aac151fb-e17c-4bd3-b2c9-bd1ed2e01c7c\") " Dec 10 12:36:35 crc kubenswrapper[4689]: I1210 12:36:35.035355 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/aac151fb-e17c-4bd3-b2c9-bd1ed2e01c7c-httpd-run" (OuterVolumeSpecName: "httpd-run") pod 
"aac151fb-e17c-4bd3-b2c9-bd1ed2e01c7c" (UID: "aac151fb-e17c-4bd3-b2c9-bd1ed2e01c7c"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 10 12:36:35 crc kubenswrapper[4689]: I1210 12:36:35.035612 4689 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/aac151fb-e17c-4bd3-b2c9-bd1ed2e01c7c-httpd-run\") on node \"crc\" DevicePath \"\"" Dec 10 12:36:35 crc kubenswrapper[4689]: I1210 12:36:35.037355 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/aac151fb-e17c-4bd3-b2c9-bd1ed2e01c7c-logs" (OuterVolumeSpecName: "logs") pod "aac151fb-e17c-4bd3-b2c9-bd1ed2e01c7c" (UID: "aac151fb-e17c-4bd3-b2c9-bd1ed2e01c7c"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 10 12:36:35 crc kubenswrapper[4689]: I1210 12:36:35.041593 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aac151fb-e17c-4bd3-b2c9-bd1ed2e01c7c-scripts" (OuterVolumeSpecName: "scripts") pod "aac151fb-e17c-4bd3-b2c9-bd1ed2e01c7c" (UID: "aac151fb-e17c-4bd3-b2c9-bd1ed2e01c7c"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 12:36:35 crc kubenswrapper[4689]: I1210 12:36:35.050382 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aac151fb-e17c-4bd3-b2c9-bd1ed2e01c7c-kube-api-access-xkwvg" (OuterVolumeSpecName: "kube-api-access-xkwvg") pod "aac151fb-e17c-4bd3-b2c9-bd1ed2e01c7c" (UID: "aac151fb-e17c-4bd3-b2c9-bd1ed2e01c7c"). InnerVolumeSpecName "kube-api-access-xkwvg". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 12:36:35 crc kubenswrapper[4689]: I1210 12:36:35.052615 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage08-crc" (OuterVolumeSpecName: "glance") pod "aac151fb-e17c-4bd3-b2c9-bd1ed2e01c7c" (UID: "aac151fb-e17c-4bd3-b2c9-bd1ed2e01c7c"). InnerVolumeSpecName "local-storage08-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Dec 10 12:36:35 crc kubenswrapper[4689]: I1210 12:36:35.096728 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aac151fb-e17c-4bd3-b2c9-bd1ed2e01c7c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "aac151fb-e17c-4bd3-b2c9-bd1ed2e01c7c" (UID: "aac151fb-e17c-4bd3-b2c9-bd1ed2e01c7c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 12:36:35 crc kubenswrapper[4689]: I1210 12:36:35.105801 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aac151fb-e17c-4bd3-b2c9-bd1ed2e01c7c-config-data" (OuterVolumeSpecName: "config-data") pod "aac151fb-e17c-4bd3-b2c9-bd1ed2e01c7c" (UID: "aac151fb-e17c-4bd3-b2c9-bd1ed2e01c7c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 12:36:35 crc kubenswrapper[4689]: I1210 12:36:35.130186 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aac151fb-e17c-4bd3-b2c9-bd1ed2e01c7c-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "aac151fb-e17c-4bd3-b2c9-bd1ed2e01c7c" (UID: "aac151fb-e17c-4bd3-b2c9-bd1ed2e01c7c"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 12:36:35 crc kubenswrapper[4689]: I1210 12:36:35.136819 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/cf5e0487-a380-424f-aa29-f815b50550db-httpd-run\") pod \"cf5e0487-a380-424f-aa29-f815b50550db\" (UID: \"cf5e0487-a380-424f-aa29-f815b50550db\") " Dec 10 12:36:35 crc kubenswrapper[4689]: I1210 12:36:35.136867 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cf5e0487-a380-424f-aa29-f815b50550db-config-data\") pod \"cf5e0487-a380-424f-aa29-f815b50550db\" (UID: \"cf5e0487-a380-424f-aa29-f815b50550db\") " Dec 10 12:36:35 crc kubenswrapper[4689]: I1210 12:36:35.136897 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cf5e0487-a380-424f-aa29-f815b50550db-combined-ca-bundle\") pod \"cf5e0487-a380-424f-aa29-f815b50550db\" (UID: \"cf5e0487-a380-424f-aa29-f815b50550db\") " Dec 10 12:36:35 crc kubenswrapper[4689]: I1210 12:36:35.137162 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cf5e0487-a380-424f-aa29-f815b50550db-logs\") pod \"cf5e0487-a380-424f-aa29-f815b50550db\" (UID: \"cf5e0487-a380-424f-aa29-f815b50550db\") " Dec 10 12:36:35 crc kubenswrapper[4689]: I1210 12:36:35.137215 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-44fkc\" (UniqueName: \"kubernetes.io/projected/cf5e0487-a380-424f-aa29-f815b50550db-kube-api-access-44fkc\") pod \"cf5e0487-a380-424f-aa29-f815b50550db\" (UID: \"cf5e0487-a380-424f-aa29-f815b50550db\") " Dec 10 12:36:35 crc kubenswrapper[4689]: I1210 12:36:35.137240 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"cf5e0487-a380-424f-aa29-f815b50550db\" (UID: \"cf5e0487-a380-424f-aa29-f815b50550db\") " Dec 10 12:36:35 crc kubenswrapper[4689]: I1210 12:36:35.137261 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cf5e0487-a380-424f-aa29-f815b50550db-scripts\") pod \"cf5e0487-a380-424f-aa29-f815b50550db\" (UID: \"cf5e0487-a380-424f-aa29-f815b50550db\") " Dec 10 12:36:35 crc kubenswrapper[4689]: I1210 12:36:35.137314 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/cf5e0487-a380-424f-aa29-f815b50550db-internal-tls-certs\") pod \"cf5e0487-a380-424f-aa29-f815b50550db\" (UID: \"cf5e0487-a380-424f-aa29-f815b50550db\") " Dec 10 12:36:35 crc kubenswrapper[4689]: I1210 12:36:35.137640 4689 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xkwvg\" (UniqueName: \"kubernetes.io/projected/aac151fb-e17c-4bd3-b2c9-bd1ed2e01c7c-kube-api-access-xkwvg\") on node \"crc\" DevicePath \"\"" Dec 10 12:36:35 crc kubenswrapper[4689]: I1210 12:36:35.137651 4689 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/aac151fb-e17c-4bd3-b2c9-bd1ed2e01c7c-logs\") on node \"crc\" DevicePath \"\"" Dec 10 12:36:35 crc kubenswrapper[4689]: I1210 12:36:35.137668 4689 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage08-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" " Dec 10 12:36:35 crc kubenswrapper[4689]: I1210 12:36:35.137678 4689 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/aac151fb-e17c-4bd3-b2c9-bd1ed2e01c7c-public-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 10 12:36:35 crc kubenswrapper[4689]: I1210 12:36:35.137687 4689 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/aac151fb-e17c-4bd3-b2c9-bd1ed2e01c7c-scripts\") on node \"crc\" DevicePath \"\"" Dec 10 12:36:35 crc kubenswrapper[4689]: I1210 12:36:35.137696 4689 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aac151fb-e17c-4bd3-b2c9-bd1ed2e01c7c-config-data\") on node \"crc\" DevicePath \"\"" Dec 10 12:36:35 crc kubenswrapper[4689]: I1210 12:36:35.137706 4689 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aac151fb-e17c-4bd3-b2c9-bd1ed2e01c7c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 10 12:36:35 crc kubenswrapper[4689]: I1210 12:36:35.138304 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cf5e0487-a380-424f-aa29-f815b50550db-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "cf5e0487-a380-424f-aa29-f815b50550db" (UID: "cf5e0487-a380-424f-aa29-f815b50550db"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 10 12:36:35 crc kubenswrapper[4689]: I1210 12:36:35.138472 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cf5e0487-a380-424f-aa29-f815b50550db-logs" (OuterVolumeSpecName: "logs") pod "cf5e0487-a380-424f-aa29-f815b50550db" (UID: "cf5e0487-a380-424f-aa29-f815b50550db"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 10 12:36:35 crc kubenswrapper[4689]: I1210 12:36:35.142768 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage05-crc" (OuterVolumeSpecName: "glance") pod "cf5e0487-a380-424f-aa29-f815b50550db" (UID: "cf5e0487-a380-424f-aa29-f815b50550db"). InnerVolumeSpecName "local-storage05-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Dec 10 12:36:35 crc kubenswrapper[4689]: I1210 12:36:35.143812 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cf5e0487-a380-424f-aa29-f815b50550db-scripts" (OuterVolumeSpecName: "scripts") pod "cf5e0487-a380-424f-aa29-f815b50550db" (UID: "cf5e0487-a380-424f-aa29-f815b50550db"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 12:36:35 crc kubenswrapper[4689]: I1210 12:36:35.144044 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cf5e0487-a380-424f-aa29-f815b50550db-kube-api-access-44fkc" (OuterVolumeSpecName: "kube-api-access-44fkc") pod "cf5e0487-a380-424f-aa29-f815b50550db" (UID: "cf5e0487-a380-424f-aa29-f815b50550db"). InnerVolumeSpecName "kube-api-access-44fkc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 12:36:35 crc kubenswrapper[4689]: I1210 12:36:35.158403 4689 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage08-crc" (UniqueName: "kubernetes.io/local-volume/local-storage08-crc") on node "crc" Dec 10 12:36:35 crc kubenswrapper[4689]: I1210 12:36:35.187213 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cf5e0487-a380-424f-aa29-f815b50550db-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "cf5e0487-a380-424f-aa29-f815b50550db" (UID: "cf5e0487-a380-424f-aa29-f815b50550db"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 12:36:35 crc kubenswrapper[4689]: I1210 12:36:35.224185 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cf5e0487-a380-424f-aa29-f815b50550db-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "cf5e0487-a380-424f-aa29-f815b50550db" (UID: "cf5e0487-a380-424f-aa29-f815b50550db"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 12:36:35 crc kubenswrapper[4689]: I1210 12:36:35.227740 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cf5e0487-a380-424f-aa29-f815b50550db-config-data" (OuterVolumeSpecName: "config-data") pod "cf5e0487-a380-424f-aa29-f815b50550db" (UID: "cf5e0487-a380-424f-aa29-f815b50550db"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 12:36:35 crc kubenswrapper[4689]: I1210 12:36:35.239578 4689 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cf5e0487-a380-424f-aa29-f815b50550db-logs\") on node \"crc\" DevicePath \"\"" Dec 10 12:36:35 crc kubenswrapper[4689]: I1210 12:36:35.239738 4689 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-44fkc\" (UniqueName: \"kubernetes.io/projected/cf5e0487-a380-424f-aa29-f815b50550db-kube-api-access-44fkc\") on node \"crc\" DevicePath \"\"" Dec 10 12:36:35 crc kubenswrapper[4689]: I1210 12:36:35.239848 4689 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") on node \"crc\" " Dec 10 12:36:35 crc kubenswrapper[4689]: I1210 12:36:35.239933 4689 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cf5e0487-a380-424f-aa29-f815b50550db-scripts\") on node \"crc\" DevicePath \"\"" Dec 10 12:36:35 crc kubenswrapper[4689]: I1210 12:36:35.240034 4689 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/cf5e0487-a380-424f-aa29-f815b50550db-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 10 12:36:35 crc kubenswrapper[4689]: I1210 12:36:35.240146 4689 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/cf5e0487-a380-424f-aa29-f815b50550db-httpd-run\") on node \"crc\" DevicePath \"\"" Dec 10 12:36:35 crc kubenswrapper[4689]: I1210 12:36:35.240238 4689 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cf5e0487-a380-424f-aa29-f815b50550db-config-data\") on node \"crc\" DevicePath \"\"" Dec 10 12:36:35 crc kubenswrapper[4689]: I1210 12:36:35.240340 4689 reconciler_common.go:293] "Volume detached for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cf5e0487-a380-424f-aa29-f815b50550db-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 10 12:36:35 crc kubenswrapper[4689]: I1210 12:36:35.240428 4689 reconciler_common.go:293] "Volume detached for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" DevicePath \"\"" Dec 10 12:36:35 crc kubenswrapper[4689]: I1210 12:36:35.265918 4689 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage05-crc" (UniqueName: "kubernetes.io/local-volume/local-storage05-crc") on node "crc" Dec 10 12:36:35 crc kubenswrapper[4689]: I1210 12:36:35.342063 4689 reconciler_common.go:293] "Volume detached for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") on node \"crc\" DevicePath \"\"" Dec 10 12:36:35 crc kubenswrapper[4689]: I1210 12:36:35.499138 4689 scope.go:117] "RemoveContainer" containerID="7ccfc65bb3226c03c022b28191c809b15870a9f345d62487e40f648b04f7f62f" Dec 10 12:36:35 crc kubenswrapper[4689]: I1210 12:36:35.634479 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"cf5e0487-a380-424f-aa29-f815b50550db","Type":"ContainerDied","Data":"7609f65d481dc88fc595f1dbf6ba25da9a6220b5fff5ba5dcf82ea74a51fe895"} Dec 10 12:36:35 crc kubenswrapper[4689]: I1210 12:36:35.634516 4689 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Dec 10 12:36:35 crc kubenswrapper[4689]: I1210 12:36:35.634554 4689 scope.go:117] "RemoveContainer" containerID="eba3388b6217b672df9bdd08c3a6c98eb3af7448cf48f3d2ee833c4519993a83" Dec 10 12:36:35 crc kubenswrapper[4689]: I1210 12:36:35.639653 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-0" event={"ID":"7841c545-a50f-4add-8f2f-0ab8310938af","Type":"ContainerStarted","Data":"065d1806af2bf1b709c33614d153aa94f3a4fda803ecff2ddf7848b27cc02a44"} Dec 10 12:36:35 crc kubenswrapper[4689]: I1210 12:36:35.639838 4689 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ironic-inspector-0" podUID="7841c545-a50f-4add-8f2f-0ab8310938af" containerName="inspector-pxe-init" containerID="cri-o://065d1806af2bf1b709c33614d153aa94f3a4fda803ecff2ddf7848b27cc02a44" gracePeriod=60 Dec 10 12:36:35 crc kubenswrapper[4689]: I1210 12:36:35.652678 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-conductor-0" event={"ID":"23e46f1d-5919-4baa-aeef-1364104b63fb","Type":"ContainerStarted","Data":"aa27da8c4e7a78e649c0e949521a043b34fd5672b0595aacffd2fa4d17540789"} Dec 10 12:36:35 crc kubenswrapper[4689]: I1210 12:36:35.655293 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"03149d4d-fcc6-47bd-892e-91f14288acbf","Type":"ContainerStarted","Data":"aa058ab1f1b1468372258e467c964141b3f1a98509967654ade82227147f357f"} Dec 10 12:36:35 crc kubenswrapper[4689]: I1210 12:36:35.656823 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"aac151fb-e17c-4bd3-b2c9-bd1ed2e01c7c","Type":"ContainerDied","Data":"db046a134769a99000c0fc3914d4138d965fd8c037c767e9f7fe2ebd49508bb3"} Dec 10 12:36:35 crc kubenswrapper[4689]: I1210 12:36:35.656899 4689 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Dec 10 12:36:35 crc kubenswrapper[4689]: I1210 12:36:35.732988 4689 scope.go:117] "RemoveContainer" containerID="2a55af2cef7a7e1d0e6e21d2022cf74765424cde599a0aa0a053d82425f86759" Dec 10 12:36:35 crc kubenswrapper[4689]: I1210 12:36:35.735397 4689 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Dec 10 12:36:35 crc kubenswrapper[4689]: I1210 12:36:35.744922 4689 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Dec 10 12:36:35 crc kubenswrapper[4689]: I1210 12:36:35.752862 4689 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 10 12:36:35 crc kubenswrapper[4689]: I1210 12:36:35.762697 4689 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 10 12:36:35 crc kubenswrapper[4689]: I1210 12:36:35.774127 4689 scope.go:117] "RemoveContainer" containerID="f3fc097cd14dd750789d612bb5c1f1ac70f37fcc67df585c29527ae53325bc07" Dec 10 12:36:35 crc kubenswrapper[4689]: I1210 12:36:35.797409 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 10 12:36:35 crc kubenswrapper[4689]: E1210 12:36:35.797877 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cf5e0487-a380-424f-aa29-f815b50550db" containerName="glance-log" Dec 10 12:36:35 crc kubenswrapper[4689]: I1210 12:36:35.797894 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="cf5e0487-a380-424f-aa29-f815b50550db" containerName="glance-log" Dec 10 12:36:35 crc kubenswrapper[4689]: E1210 12:36:35.797919 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cf5e0487-a380-424f-aa29-f815b50550db" containerName="glance-httpd" Dec 10 12:36:35 crc kubenswrapper[4689]: I1210 12:36:35.797926 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="cf5e0487-a380-424f-aa29-f815b50550db" containerName="glance-httpd" Dec 10 12:36:35 crc kubenswrapper[4689]: E1210 12:36:35.797938 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aac151fb-e17c-4bd3-b2c9-bd1ed2e01c7c" containerName="glance-log" Dec 10 12:36:35 crc kubenswrapper[4689]: I1210 12:36:35.797944 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="aac151fb-e17c-4bd3-b2c9-bd1ed2e01c7c" containerName="glance-log" Dec 10 12:36:35 crc kubenswrapper[4689]: E1210 12:36:35.797956 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aac151fb-e17c-4bd3-b2c9-bd1ed2e01c7c" containerName="glance-httpd" Dec 10 12:36:35 crc kubenswrapper[4689]: I1210 12:36:35.797962 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="aac151fb-e17c-4bd3-b2c9-bd1ed2e01c7c" containerName="glance-httpd" Dec 10 12:36:35 crc kubenswrapper[4689]: I1210 12:36:35.798146 4689 memory_manager.go:354] "RemoveStaleState removing state" podUID="aac151fb-e17c-4bd3-b2c9-bd1ed2e01c7c" containerName="glance-httpd" Dec 10 12:36:35 crc kubenswrapper[4689]: I1210 12:36:35.798173 4689 memory_manager.go:354] "RemoveStaleState removing state" podUID="cf5e0487-a380-424f-aa29-f815b50550db" containerName="glance-httpd" Dec 10 12:36:35 crc kubenswrapper[4689]: I1210 12:36:35.798186 4689 memory_manager.go:354] "RemoveStaleState removing state" podUID="cf5e0487-a380-424f-aa29-f815b50550db" containerName="glance-log" Dec 10 12:36:35 crc kubenswrapper[4689]: I1210 12:36:35.798200 4689 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="aac151fb-e17c-4bd3-b2c9-bd1ed2e01c7c" containerName="glance-log" Dec 10 12:36:35 crc kubenswrapper[4689]: I1210 12:36:35.799117 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Dec 10 12:36:35 crc kubenswrapper[4689]: I1210 12:36:35.803224 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Dec 10 12:36:35 crc kubenswrapper[4689]: I1210 12:36:35.803246 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-drvxw" Dec 10 12:36:35 crc kubenswrapper[4689]: I1210 12:36:35.803520 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Dec 10 12:36:35 crc kubenswrapper[4689]: I1210 12:36:35.803651 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Dec 10 12:36:35 crc kubenswrapper[4689]: I1210 12:36:35.816405 4689 scope.go:117] "RemoveContainer" containerID="04acef4caa91113cb7fa3a016eba65086b09c90d1343a0fae707547215a21338" Dec 10 12:36:35 crc kubenswrapper[4689]: I1210 12:36:35.821168 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Dec 10 12:36:35 crc kubenswrapper[4689]: I1210 12:36:35.823209 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Dec 10 12:36:35 crc kubenswrapper[4689]: I1210 12:36:35.827391 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Dec 10 12:36:35 crc kubenswrapper[4689]: I1210 12:36:35.827558 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Dec 10 12:36:35 crc kubenswrapper[4689]: I1210 12:36:35.852956 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 10 12:36:35 crc kubenswrapper[4689]: I1210 12:36:35.897669 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Dec 10 12:36:35 crc kubenswrapper[4689]: I1210 12:36:35.959883 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-internal-api-0\" (UID: \"3783c348-e04a-4246-ae21-d47d6bae3467\") " pod="openstack/glance-default-internal-api-0" Dec 10 12:36:35 crc kubenswrapper[4689]: I1210 12:36:35.959935 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3783c348-e04a-4246-ae21-d47d6bae3467-logs\") pod \"glance-default-internal-api-0\" (UID: \"3783c348-e04a-4246-ae21-d47d6bae3467\") " pod="openstack/glance-default-internal-api-0" Dec 10 12:36:35 crc kubenswrapper[4689]: I1210 12:36:35.959962 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ce240e41-0473-47c3-8349-854caa2baad2-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"ce240e41-0473-47c3-8349-854caa2baad2\") " pod="openstack/glance-default-external-api-0" Dec 10 12:36:35 crc kubenswrapper[4689]: I1210 12:36:35.959992 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: 
\"kubernetes.io/empty-dir/3783c348-e04a-4246-ae21-d47d6bae3467-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"3783c348-e04a-4246-ae21-d47d6bae3467\") " pod="openstack/glance-default-internal-api-0" Dec 10 12:36:35 crc kubenswrapper[4689]: I1210 12:36:35.960007 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3783c348-e04a-4246-ae21-d47d6bae3467-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"3783c348-e04a-4246-ae21-d47d6bae3467\") " pod="openstack/glance-default-internal-api-0" Dec 10 12:36:35 crc kubenswrapper[4689]: I1210 12:36:35.960025 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3783c348-e04a-4246-ae21-d47d6bae3467-scripts\") pod \"glance-default-internal-api-0\" (UID: \"3783c348-e04a-4246-ae21-d47d6bae3467\") " pod="openstack/glance-default-internal-api-0" Dec 10 12:36:35 crc kubenswrapper[4689]: I1210 12:36:35.960049 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ce240e41-0473-47c3-8349-854caa2baad2-logs\") pod \"glance-default-external-api-0\" (UID: \"ce240e41-0473-47c3-8349-854caa2baad2\") " pod="openstack/glance-default-external-api-0" Dec 10 12:36:35 crc kubenswrapper[4689]: I1210 12:36:35.966213 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ce240e41-0473-47c3-8349-854caa2baad2-config-data\") pod \"glance-default-external-api-0\" (UID: \"ce240e41-0473-47c3-8349-854caa2baad2\") " pod="openstack/glance-default-external-api-0" Dec 10 12:36:35 crc kubenswrapper[4689]: I1210 12:36:35.966263 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hbr7s\" (UniqueName: \"kubernetes.io/projected/3783c348-e04a-4246-ae21-d47d6bae3467-kube-api-access-hbr7s\") pod \"glance-default-internal-api-0\" (UID: \"3783c348-e04a-4246-ae21-d47d6bae3467\") " pod="openstack/glance-default-internal-api-0" Dec 10 12:36:35 crc kubenswrapper[4689]: I1210 12:36:35.966294 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ce240e41-0473-47c3-8349-854caa2baad2-scripts\") pod \"glance-default-external-api-0\" (UID: \"ce240e41-0473-47c3-8349-854caa2baad2\") " pod="openstack/glance-default-external-api-0" Dec 10 12:36:35 crc kubenswrapper[4689]: I1210 12:36:35.966343 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3783c348-e04a-4246-ae21-d47d6bae3467-config-data\") pod \"glance-default-internal-api-0\" (UID: \"3783c348-e04a-4246-ae21-d47d6bae3467\") " pod="openstack/glance-default-internal-api-0" Dec 10 12:36:35 crc kubenswrapper[4689]: I1210 12:36:35.966373 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3783c348-e04a-4246-ae21-d47d6bae3467-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"3783c348-e04a-4246-ae21-d47d6bae3467\") " pod="openstack/glance-default-internal-api-0" Dec 10 12:36:35 crc kubenswrapper[4689]: I1210 12:36:35.966406 4689 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/ce240e41-0473-47c3-8349-854caa2baad2-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"ce240e41-0473-47c3-8349-854caa2baad2\") " pod="openstack/glance-default-external-api-0" Dec 10 12:36:35 crc kubenswrapper[4689]: I1210 12:36:35.966422 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ce240e41-0473-47c3-8349-854caa2baad2-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"ce240e41-0473-47c3-8349-854caa2baad2\") " pod="openstack/glance-default-external-api-0" Dec 10 12:36:35 crc kubenswrapper[4689]: I1210 12:36:35.966479 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-external-api-0\" (UID: \"ce240e41-0473-47c3-8349-854caa2baad2\") " pod="openstack/glance-default-external-api-0" Dec 10 12:36:35 crc kubenswrapper[4689]: I1210 12:36:35.966505 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lk456\" (UniqueName: \"kubernetes.io/projected/ce240e41-0473-47c3-8349-854caa2baad2-kube-api-access-lk456\") pod \"glance-default-external-api-0\" (UID: \"ce240e41-0473-47c3-8349-854caa2baad2\") " pod="openstack/glance-default-external-api-0" Dec 10 12:36:36 crc kubenswrapper[4689]: I1210 12:36:36.071115 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/ce240e41-0473-47c3-8349-854caa2baad2-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"ce240e41-0473-47c3-8349-854caa2baad2\") " pod="openstack/glance-default-external-api-0" Dec 10 12:36:36 crc kubenswrapper[4689]: I1210 12:36:36.071158 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ce240e41-0473-47c3-8349-854caa2baad2-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"ce240e41-0473-47c3-8349-854caa2baad2\") " pod="openstack/glance-default-external-api-0" Dec 10 12:36:36 crc kubenswrapper[4689]: I1210 12:36:36.071195 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-external-api-0\" (UID: \"ce240e41-0473-47c3-8349-854caa2baad2\") " pod="openstack/glance-default-external-api-0" Dec 10 12:36:36 crc kubenswrapper[4689]: I1210 12:36:36.071215 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lk456\" (UniqueName: \"kubernetes.io/projected/ce240e41-0473-47c3-8349-854caa2baad2-kube-api-access-lk456\") pod \"glance-default-external-api-0\" (UID: \"ce240e41-0473-47c3-8349-854caa2baad2\") " pod="openstack/glance-default-external-api-0" Dec 10 12:36:36 crc kubenswrapper[4689]: I1210 12:36:36.071253 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-internal-api-0\" (UID: \"3783c348-e04a-4246-ae21-d47d6bae3467\") " pod="openstack/glance-default-internal-api-0" Dec 10 12:36:36 crc kubenswrapper[4689]: I1210 12:36:36.071276 4689 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3783c348-e04a-4246-ae21-d47d6bae3467-logs\") pod \"glance-default-internal-api-0\" (UID: \"3783c348-e04a-4246-ae21-d47d6bae3467\") " pod="openstack/glance-default-internal-api-0" Dec 10 12:36:36 crc kubenswrapper[4689]: I1210 12:36:36.071296 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/3783c348-e04a-4246-ae21-d47d6bae3467-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"3783c348-e04a-4246-ae21-d47d6bae3467\") " pod="openstack/glance-default-internal-api-0" Dec 10 12:36:36 crc kubenswrapper[4689]: I1210 12:36:36.071312 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3783c348-e04a-4246-ae21-d47d6bae3467-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"3783c348-e04a-4246-ae21-d47d6bae3467\") " pod="openstack/glance-default-internal-api-0" Dec 10 12:36:36 crc kubenswrapper[4689]: I1210 12:36:36.071327 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ce240e41-0473-47c3-8349-854caa2baad2-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"ce240e41-0473-47c3-8349-854caa2baad2\") " pod="openstack/glance-default-external-api-0" Dec 10 12:36:36 crc kubenswrapper[4689]: I1210 12:36:36.071527 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3783c348-e04a-4246-ae21-d47d6bae3467-scripts\") pod \"glance-default-internal-api-0\" (UID: \"3783c348-e04a-4246-ae21-d47d6bae3467\") " pod="openstack/glance-default-internal-api-0" Dec 10 12:36:36 crc kubenswrapper[4689]: I1210 12:36:36.071551 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ce240e41-0473-47c3-8349-854caa2baad2-logs\") pod \"glance-default-external-api-0\" (UID: \"ce240e41-0473-47c3-8349-854caa2baad2\") " pod="openstack/glance-default-external-api-0" Dec 10 12:36:36 crc kubenswrapper[4689]: I1210 12:36:36.071624 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ce240e41-0473-47c3-8349-854caa2baad2-config-data\") pod \"glance-default-external-api-0\" (UID: \"ce240e41-0473-47c3-8349-854caa2baad2\") " pod="openstack/glance-default-external-api-0" Dec 10 12:36:36 crc kubenswrapper[4689]: I1210 12:36:36.071644 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hbr7s\" (UniqueName: \"kubernetes.io/projected/3783c348-e04a-4246-ae21-d47d6bae3467-kube-api-access-hbr7s\") pod \"glance-default-internal-api-0\" (UID: \"3783c348-e04a-4246-ae21-d47d6bae3467\") " pod="openstack/glance-default-internal-api-0" Dec 10 12:36:36 crc kubenswrapper[4689]: I1210 12:36:36.071680 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ce240e41-0473-47c3-8349-854caa2baad2-scripts\") pod \"glance-default-external-api-0\" (UID: \"ce240e41-0473-47c3-8349-854caa2baad2\") " pod="openstack/glance-default-external-api-0" Dec 10 12:36:36 crc kubenswrapper[4689]: I1210 12:36:36.071709 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/3783c348-e04a-4246-ae21-d47d6bae3467-config-data\") pod \"glance-default-internal-api-0\" (UID: \"3783c348-e04a-4246-ae21-d47d6bae3467\") " pod="openstack/glance-default-internal-api-0" Dec 10 12:36:36 crc kubenswrapper[4689]: I1210 12:36:36.071729 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3783c348-e04a-4246-ae21-d47d6bae3467-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"3783c348-e04a-4246-ae21-d47d6bae3467\") " pod="openstack/glance-default-internal-api-0" Dec 10 12:36:36 crc kubenswrapper[4689]: I1210 12:36:36.072935 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/ce240e41-0473-47c3-8349-854caa2baad2-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"ce240e41-0473-47c3-8349-854caa2baad2\") " pod="openstack/glance-default-external-api-0" Dec 10 12:36:36 crc kubenswrapper[4689]: I1210 12:36:36.080427 4689 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-internal-api-0\" (UID: \"3783c348-e04a-4246-ae21-d47d6bae3467\") device mount path \"/mnt/openstack/pv05\"" pod="openstack/glance-default-internal-api-0" Dec 10 12:36:36 crc kubenswrapper[4689]: I1210 12:36:36.080492 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ce240e41-0473-47c3-8349-854caa2baad2-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"ce240e41-0473-47c3-8349-854caa2baad2\") " pod="openstack/glance-default-external-api-0" Dec 10 12:36:36 crc kubenswrapper[4689]: I1210 12:36:36.080501 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3783c348-e04a-4246-ae21-d47d6bae3467-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"3783c348-e04a-4246-ae21-d47d6bae3467\") " pod="openstack/glance-default-internal-api-0" Dec 10 12:36:36 crc kubenswrapper[4689]: I1210 12:36:36.080731 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ce240e41-0473-47c3-8349-854caa2baad2-logs\") pod \"glance-default-external-api-0\" (UID: \"ce240e41-0473-47c3-8349-854caa2baad2\") " pod="openstack/glance-default-external-api-0" Dec 10 12:36:36 crc kubenswrapper[4689]: I1210 12:36:36.080859 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3783c348-e04a-4246-ae21-d47d6bae3467-logs\") pod \"glance-default-internal-api-0\" (UID: \"3783c348-e04a-4246-ae21-d47d6bae3467\") " pod="openstack/glance-default-internal-api-0" Dec 10 12:36:36 crc kubenswrapper[4689]: I1210 12:36:36.081486 4689 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-external-api-0\" (UID: \"ce240e41-0473-47c3-8349-854caa2baad2\") device mount path \"/mnt/openstack/pv08\"" pod="openstack/glance-default-external-api-0" Dec 10 12:36:36 crc kubenswrapper[4689]: I1210 12:36:36.081535 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/3783c348-e04a-4246-ae21-d47d6bae3467-httpd-run\") pod 
\"glance-default-internal-api-0\" (UID: \"3783c348-e04a-4246-ae21-d47d6bae3467\") " pod="openstack/glance-default-internal-api-0" Dec 10 12:36:36 crc kubenswrapper[4689]: I1210 12:36:36.081556 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ce240e41-0473-47c3-8349-854caa2baad2-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"ce240e41-0473-47c3-8349-854caa2baad2\") " pod="openstack/glance-default-external-api-0" Dec 10 12:36:36 crc kubenswrapper[4689]: I1210 12:36:36.084625 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3783c348-e04a-4246-ae21-d47d6bae3467-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"3783c348-e04a-4246-ae21-d47d6bae3467\") " pod="openstack/glance-default-internal-api-0" Dec 10 12:36:36 crc kubenswrapper[4689]: I1210 12:36:36.092319 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3783c348-e04a-4246-ae21-d47d6bae3467-scripts\") pod \"glance-default-internal-api-0\" (UID: \"3783c348-e04a-4246-ae21-d47d6bae3467\") " pod="openstack/glance-default-internal-api-0" Dec 10 12:36:36 crc kubenswrapper[4689]: I1210 12:36:36.092513 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ce240e41-0473-47c3-8349-854caa2baad2-scripts\") pod \"glance-default-external-api-0\" (UID: \"ce240e41-0473-47c3-8349-854caa2baad2\") " pod="openstack/glance-default-external-api-0" Dec 10 12:36:36 crc kubenswrapper[4689]: I1210 12:36:36.098509 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3783c348-e04a-4246-ae21-d47d6bae3467-config-data\") pod \"glance-default-internal-api-0\" (UID: \"3783c348-e04a-4246-ae21-d47d6bae3467\") " pod="openstack/glance-default-internal-api-0" Dec 10 12:36:36 crc kubenswrapper[4689]: I1210 12:36:36.114166 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ce240e41-0473-47c3-8349-854caa2baad2-config-data\") pod \"glance-default-external-api-0\" (UID: \"ce240e41-0473-47c3-8349-854caa2baad2\") " pod="openstack/glance-default-external-api-0" Dec 10 12:36:36 crc kubenswrapper[4689]: I1210 12:36:36.115000 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hbr7s\" (UniqueName: \"kubernetes.io/projected/3783c348-e04a-4246-ae21-d47d6bae3467-kube-api-access-hbr7s\") pod \"glance-default-internal-api-0\" (UID: \"3783c348-e04a-4246-ae21-d47d6bae3467\") " pod="openstack/glance-default-internal-api-0" Dec 10 12:36:36 crc kubenswrapper[4689]: I1210 12:36:36.119713 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lk456\" (UniqueName: \"kubernetes.io/projected/ce240e41-0473-47c3-8349-854caa2baad2-kube-api-access-lk456\") pod \"glance-default-external-api-0\" (UID: \"ce240e41-0473-47c3-8349-854caa2baad2\") " pod="openstack/glance-default-external-api-0" Dec 10 12:36:36 crc kubenswrapper[4689]: I1210 12:36:36.141704 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-internal-api-0\" (UID: \"3783c348-e04a-4246-ae21-d47d6bae3467\") " pod="openstack/glance-default-internal-api-0" Dec 10 12:36:36 crc 
kubenswrapper[4689]: I1210 12:36:36.164490 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Dec 10 12:36:36 crc kubenswrapper[4689]: I1210 12:36:36.172420 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-external-api-0\" (UID: \"ce240e41-0473-47c3-8349-854caa2baad2\") " pod="openstack/glance-default-external-api-0" Dec 10 12:36:36 crc kubenswrapper[4689]: I1210 12:36:36.189944 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Dec 10 12:36:36 crc kubenswrapper[4689]: I1210 12:36:36.223544 4689 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-inspector-0" Dec 10 12:36:36 crc kubenswrapper[4689]: I1210 12:36:36.381584 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7841c545-a50f-4add-8f2f-0ab8310938af-combined-ca-bundle\") pod \"7841c545-a50f-4add-8f2f-0ab8310938af\" (UID: \"7841c545-a50f-4add-8f2f-0ab8310938af\") " Dec 10 12:36:36 crc kubenswrapper[4689]: I1210 12:36:36.381910 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hm4nf\" (UniqueName: \"kubernetes.io/projected/7841c545-a50f-4add-8f2f-0ab8310938af-kube-api-access-hm4nf\") pod \"7841c545-a50f-4add-8f2f-0ab8310938af\" (UID: \"7841c545-a50f-4add-8f2f-0ab8310938af\") " Dec 10 12:36:36 crc kubenswrapper[4689]: I1210 12:36:36.382031 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/7841c545-a50f-4add-8f2f-0ab8310938af-config\") pod \"7841c545-a50f-4add-8f2f-0ab8310938af\" (UID: \"7841c545-a50f-4add-8f2f-0ab8310938af\") " Dec 10 12:36:36 crc kubenswrapper[4689]: I1210 12:36:36.382095 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-ironic-inspector-dhcp-hostsdir\" (UniqueName: \"kubernetes.io/empty-dir/7841c545-a50f-4add-8f2f-0ab8310938af-var-lib-ironic-inspector-dhcp-hostsdir\") pod \"7841c545-a50f-4add-8f2f-0ab8310938af\" (UID: \"7841c545-a50f-4add-8f2f-0ab8310938af\") " Dec 10 12:36:36 crc kubenswrapper[4689]: I1210 12:36:36.382118 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-ironic\" (UniqueName: \"kubernetes.io/empty-dir/7841c545-a50f-4add-8f2f-0ab8310938af-var-lib-ironic\") pod \"7841c545-a50f-4add-8f2f-0ab8310938af\" (UID: \"7841c545-a50f-4add-8f2f-0ab8310938af\") " Dec 10 12:36:36 crc kubenswrapper[4689]: I1210 12:36:36.382207 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/7841c545-a50f-4add-8f2f-0ab8310938af-etc-podinfo\") pod \"7841c545-a50f-4add-8f2f-0ab8310938af\" (UID: \"7841c545-a50f-4add-8f2f-0ab8310938af\") " Dec 10 12:36:36 crc kubenswrapper[4689]: I1210 12:36:36.382258 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7841c545-a50f-4add-8f2f-0ab8310938af-scripts\") pod \"7841c545-a50f-4add-8f2f-0ab8310938af\" (UID: \"7841c545-a50f-4add-8f2f-0ab8310938af\") " Dec 10 12:36:36 crc kubenswrapper[4689]: I1210 12:36:36.393217 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/7841c545-a50f-4add-8f2f-0ab8310938af-var-lib-ironic-inspector-dhcp-hostsdir" (OuterVolumeSpecName: "var-lib-ironic-inspector-dhcp-hostsdir") pod "7841c545-a50f-4add-8f2f-0ab8310938af" (UID: "7841c545-a50f-4add-8f2f-0ab8310938af"). InnerVolumeSpecName "var-lib-ironic-inspector-dhcp-hostsdir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 10 12:36:36 crc kubenswrapper[4689]: I1210 12:36:36.394045 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7841c545-a50f-4add-8f2f-0ab8310938af-var-lib-ironic" (OuterVolumeSpecName: "var-lib-ironic") pod "7841c545-a50f-4add-8f2f-0ab8310938af" (UID: "7841c545-a50f-4add-8f2f-0ab8310938af"). InnerVolumeSpecName "var-lib-ironic". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 10 12:36:36 crc kubenswrapper[4689]: I1210 12:36:36.394158 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7841c545-a50f-4add-8f2f-0ab8310938af-scripts" (OuterVolumeSpecName: "scripts") pod "7841c545-a50f-4add-8f2f-0ab8310938af" (UID: "7841c545-a50f-4add-8f2f-0ab8310938af"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 12:36:36 crc kubenswrapper[4689]: I1210 12:36:36.398191 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7841c545-a50f-4add-8f2f-0ab8310938af-config" (OuterVolumeSpecName: "config") pod "7841c545-a50f-4add-8f2f-0ab8310938af" (UID: "7841c545-a50f-4add-8f2f-0ab8310938af"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 12:36:36 crc kubenswrapper[4689]: I1210 12:36:36.402089 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/7841c545-a50f-4add-8f2f-0ab8310938af-etc-podinfo" (OuterVolumeSpecName: "etc-podinfo") pod "7841c545-a50f-4add-8f2f-0ab8310938af" (UID: "7841c545-a50f-4add-8f2f-0ab8310938af"). InnerVolumeSpecName "etc-podinfo". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Dec 10 12:36:36 crc kubenswrapper[4689]: I1210 12:36:36.406124 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7841c545-a50f-4add-8f2f-0ab8310938af-kube-api-access-hm4nf" (OuterVolumeSpecName: "kube-api-access-hm4nf") pod "7841c545-a50f-4add-8f2f-0ab8310938af" (UID: "7841c545-a50f-4add-8f2f-0ab8310938af"). InnerVolumeSpecName "kube-api-access-hm4nf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 12:36:36 crc kubenswrapper[4689]: I1210 12:36:36.484480 4689 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/7841c545-a50f-4add-8f2f-0ab8310938af-config\") on node \"crc\" DevicePath \"\"" Dec 10 12:36:36 crc kubenswrapper[4689]: I1210 12:36:36.484510 4689 reconciler_common.go:293] "Volume detached for volume \"var-lib-ironic\" (UniqueName: \"kubernetes.io/empty-dir/7841c545-a50f-4add-8f2f-0ab8310938af-var-lib-ironic\") on node \"crc\" DevicePath \"\"" Dec 10 12:36:36 crc kubenswrapper[4689]: I1210 12:36:36.484541 4689 reconciler_common.go:293] "Volume detached for volume \"var-lib-ironic-inspector-dhcp-hostsdir\" (UniqueName: \"kubernetes.io/empty-dir/7841c545-a50f-4add-8f2f-0ab8310938af-var-lib-ironic-inspector-dhcp-hostsdir\") on node \"crc\" DevicePath \"\"" Dec 10 12:36:36 crc kubenswrapper[4689]: I1210 12:36:36.484554 4689 reconciler_common.go:293] "Volume detached for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/7841c545-a50f-4add-8f2f-0ab8310938af-etc-podinfo\") on node \"crc\" DevicePath \"\"" Dec 10 12:36:36 crc kubenswrapper[4689]: I1210 12:36:36.484564 4689 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7841c545-a50f-4add-8f2f-0ab8310938af-scripts\") on node \"crc\" DevicePath \"\"" Dec 10 12:36:36 crc kubenswrapper[4689]: I1210 12:36:36.484574 4689 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hm4nf\" (UniqueName: \"kubernetes.io/projected/7841c545-a50f-4add-8f2f-0ab8310938af-kube-api-access-hm4nf\") on node \"crc\" DevicePath \"\"" Dec 10 12:36:36 crc kubenswrapper[4689]: I1210 12:36:36.494109 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7841c545-a50f-4add-8f2f-0ab8310938af-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7841c545-a50f-4add-8f2f-0ab8310938af" (UID: "7841c545-a50f-4add-8f2f-0ab8310938af"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 12:36:36 crc kubenswrapper[4689]: I1210 12:36:36.519351 4689 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aac151fb-e17c-4bd3-b2c9-bd1ed2e01c7c" path="/var/lib/kubelet/pods/aac151fb-e17c-4bd3-b2c9-bd1ed2e01c7c/volumes" Dec 10 12:36:36 crc kubenswrapper[4689]: I1210 12:36:36.520201 4689 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cf5e0487-a380-424f-aa29-f815b50550db" path="/var/lib/kubelet/pods/cf5e0487-a380-424f-aa29-f815b50550db/volumes" Dec 10 12:36:36 crc kubenswrapper[4689]: I1210 12:36:36.587876 4689 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7841c545-a50f-4add-8f2f-0ab8310938af-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 10 12:36:36 crc kubenswrapper[4689]: I1210 12:36:36.670437 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"03149d4d-fcc6-47bd-892e-91f14288acbf","Type":"ContainerStarted","Data":"e73aefdfa6147479d3d152cfbcbc25d66f9d70b2ea78a1c46b16292cef8ccf42"} Dec 10 12:36:36 crc kubenswrapper[4689]: I1210 12:36:36.670849 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"03149d4d-fcc6-47bd-892e-91f14288acbf","Type":"ContainerStarted","Data":"cdd90699e48de80cf56fce7610ec11d926c7f6f3a4877aa78fc5f17dc66f403d"} Dec 10 12:36:36 crc kubenswrapper[4689]: I1210 12:36:36.679175 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-neutron-agent-6f4566d7bf-hkx2g" event={"ID":"234f8267-1974-4f9e-9d13-8a239ff2660c","Type":"ContainerStarted","Data":"c11a1a13efab467aa67b28b4281f9d67c14d4b62783930080b5969b338b1cec6"} Dec 10 12:36:36 crc kubenswrapper[4689]: I1210 12:36:36.680386 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ironic-neutron-agent-6f4566d7bf-hkx2g" Dec 10 12:36:36 crc kubenswrapper[4689]: I1210 12:36:36.695570 4689 generic.go:334] "Generic (PLEG): container finished" podID="7841c545-a50f-4add-8f2f-0ab8310938af" containerID="065d1806af2bf1b709c33614d153aa94f3a4fda803ecff2ddf7848b27cc02a44" exitCode=0 Dec 10 12:36:36 crc kubenswrapper[4689]: I1210 12:36:36.695614 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-0" event={"ID":"7841c545-a50f-4add-8f2f-0ab8310938af","Type":"ContainerDied","Data":"065d1806af2bf1b709c33614d153aa94f3a4fda803ecff2ddf7848b27cc02a44"} Dec 10 12:36:36 crc kubenswrapper[4689]: I1210 12:36:36.695641 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-0" event={"ID":"7841c545-a50f-4add-8f2f-0ab8310938af","Type":"ContainerDied","Data":"2989803a245d67c08a5770a96f5b1cdbaf6a640445decdb771abaedc15e4ebc6"} Dec 10 12:36:36 crc kubenswrapper[4689]: I1210 12:36:36.695658 4689 scope.go:117] "RemoveContainer" containerID="065d1806af2bf1b709c33614d153aa94f3a4fda803ecff2ddf7848b27cc02a44" Dec 10 12:36:36 crc kubenswrapper[4689]: I1210 12:36:36.695814 4689 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ironic-inspector-0" Dec 10 12:36:36 crc kubenswrapper[4689]: I1210 12:36:36.723478 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 10 12:36:36 crc kubenswrapper[4689]: I1210 12:36:36.740162 4689 scope.go:117] "RemoveContainer" containerID="f5b3f5d7e0db2e3809ac1a54515b44c30d5d3c90881fbc6488396050dd103d85" Dec 10 12:36:36 crc kubenswrapper[4689]: I1210 12:36:36.831705 4689 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ironic-inspector-0"] Dec 10 12:36:36 crc kubenswrapper[4689]: I1210 12:36:36.845462 4689 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ironic-inspector-0"] Dec 10 12:36:36 crc kubenswrapper[4689]: I1210 12:36:36.854027 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ironic-inspector-0"] Dec 10 12:36:36 crc kubenswrapper[4689]: E1210 12:36:36.854498 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7841c545-a50f-4add-8f2f-0ab8310938af" containerName="inspector-pxe-init" Dec 10 12:36:36 crc kubenswrapper[4689]: I1210 12:36:36.854517 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="7841c545-a50f-4add-8f2f-0ab8310938af" containerName="inspector-pxe-init" Dec 10 12:36:36 crc kubenswrapper[4689]: E1210 12:36:36.854535 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7841c545-a50f-4add-8f2f-0ab8310938af" containerName="ironic-python-agent-init" Dec 10 12:36:36 crc kubenswrapper[4689]: I1210 12:36:36.854541 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="7841c545-a50f-4add-8f2f-0ab8310938af" containerName="ironic-python-agent-init" Dec 10 12:36:36 crc kubenswrapper[4689]: I1210 12:36:36.854742 4689 memory_manager.go:354] "RemoveStaleState removing state" podUID="7841c545-a50f-4add-8f2f-0ab8310938af" containerName="inspector-pxe-init" Dec 10 12:36:36 crc kubenswrapper[4689]: I1210 12:36:36.864668 4689 scope.go:117] "RemoveContainer" containerID="065d1806af2bf1b709c33614d153aa94f3a4fda803ecff2ddf7848b27cc02a44" Dec 10 12:36:36 crc kubenswrapper[4689]: E1210 12:36:36.865115 4689 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"065d1806af2bf1b709c33614d153aa94f3a4fda803ecff2ddf7848b27cc02a44\": container with ID starting with 065d1806af2bf1b709c33614d153aa94f3a4fda803ecff2ddf7848b27cc02a44 not found: ID does not exist" containerID="065d1806af2bf1b709c33614d153aa94f3a4fda803ecff2ddf7848b27cc02a44" Dec 10 12:36:36 crc kubenswrapper[4689]: I1210 12:36:36.865158 4689 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"065d1806af2bf1b709c33614d153aa94f3a4fda803ecff2ddf7848b27cc02a44"} err="failed to get container status \"065d1806af2bf1b709c33614d153aa94f3a4fda803ecff2ddf7848b27cc02a44\": rpc error: code = NotFound desc = could not find container \"065d1806af2bf1b709c33614d153aa94f3a4fda803ecff2ddf7848b27cc02a44\": container with ID starting with 065d1806af2bf1b709c33614d153aa94f3a4fda803ecff2ddf7848b27cc02a44 not found: ID does not exist" Dec 10 12:36:36 crc kubenswrapper[4689]: I1210 12:36:36.865186 4689 scope.go:117] "RemoveContainer" containerID="f5b3f5d7e0db2e3809ac1a54515b44c30d5d3c90881fbc6488396050dd103d85" Dec 10 12:36:36 crc kubenswrapper[4689]: E1210 12:36:36.865526 4689 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"f5b3f5d7e0db2e3809ac1a54515b44c30d5d3c90881fbc6488396050dd103d85\": container with ID starting with f5b3f5d7e0db2e3809ac1a54515b44c30d5d3c90881fbc6488396050dd103d85 not found: ID does not exist" containerID="f5b3f5d7e0db2e3809ac1a54515b44c30d5d3c90881fbc6488396050dd103d85" Dec 10 12:36:36 crc kubenswrapper[4689]: I1210 12:36:36.865548 4689 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f5b3f5d7e0db2e3809ac1a54515b44c30d5d3c90881fbc6488396050dd103d85"} err="failed to get container status \"f5b3f5d7e0db2e3809ac1a54515b44c30d5d3c90881fbc6488396050dd103d85\": rpc error: code = NotFound desc = could not find container \"f5b3f5d7e0db2e3809ac1a54515b44c30d5d3c90881fbc6488396050dd103d85\": container with ID starting with f5b3f5d7e0db2e3809ac1a54515b44c30d5d3c90881fbc6488396050dd103d85 not found: ID does not exist" Dec 10 12:36:36 crc kubenswrapper[4689]: I1210 12:36:36.868585 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-inspector-0"] Dec 10 12:36:36 crc kubenswrapper[4689]: I1210 12:36:36.868703 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-inspector-0" Dec 10 12:36:36 crc kubenswrapper[4689]: I1210 12:36:36.881483 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ironic-inspector-internal-svc" Dec 10 12:36:36 crc kubenswrapper[4689]: I1210 12:36:36.881732 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ironic-inspector-config-data" Dec 10 12:36:36 crc kubenswrapper[4689]: I1210 12:36:36.882057 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ironic-inspector-scripts" Dec 10 12:36:36 crc kubenswrapper[4689]: I1210 12:36:36.882225 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ironic-inspector-public-svc" Dec 10 12:36:36 crc kubenswrapper[4689]: I1210 12:36:36.944920 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Dec 10 12:36:36 crc kubenswrapper[4689]: I1210 12:36:36.998983 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-ironic\" (UniqueName: \"kubernetes.io/empty-dir/5a770426-b384-4fc7-acc0-fa42ff536a9b-var-lib-ironic\") pod \"ironic-inspector-0\" (UID: \"5a770426-b384-4fc7-acc0-fa42ff536a9b\") " pod="openstack/ironic-inspector-0" Dec 10 12:36:36 crc kubenswrapper[4689]: I1210 12:36:36.999061 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/5a770426-b384-4fc7-acc0-fa42ff536a9b-etc-podinfo\") pod \"ironic-inspector-0\" (UID: \"5a770426-b384-4fc7-acc0-fa42ff536a9b\") " pod="openstack/ironic-inspector-0" Dec 10 12:36:36 crc kubenswrapper[4689]: I1210 12:36:36.999085 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5a770426-b384-4fc7-acc0-fa42ff536a9b-internal-tls-certs\") pod \"ironic-inspector-0\" (UID: \"5a770426-b384-4fc7-acc0-fa42ff536a9b\") " pod="openstack/ironic-inspector-0" Dec 10 12:36:36 crc kubenswrapper[4689]: I1210 12:36:36.999168 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/5a770426-b384-4fc7-acc0-fa42ff536a9b-config\") pod \"ironic-inspector-0\" (UID: 
\"5a770426-b384-4fc7-acc0-fa42ff536a9b\") " pod="openstack/ironic-inspector-0" Dec 10 12:36:36 crc kubenswrapper[4689]: I1210 12:36:36.999205 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-ironic-inspector-dhcp-hostsdir\" (UniqueName: \"kubernetes.io/empty-dir/5a770426-b384-4fc7-acc0-fa42ff536a9b-var-lib-ironic-inspector-dhcp-hostsdir\") pod \"ironic-inspector-0\" (UID: \"5a770426-b384-4fc7-acc0-fa42ff536a9b\") " pod="openstack/ironic-inspector-0" Dec 10 12:36:36 crc kubenswrapper[4689]: I1210 12:36:36.999295 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5a770426-b384-4fc7-acc0-fa42ff536a9b-combined-ca-bundle\") pod \"ironic-inspector-0\" (UID: \"5a770426-b384-4fc7-acc0-fa42ff536a9b\") " pod="openstack/ironic-inspector-0" Dec 10 12:36:36 crc kubenswrapper[4689]: I1210 12:36:36.999324 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5a770426-b384-4fc7-acc0-fa42ff536a9b-scripts\") pod \"ironic-inspector-0\" (UID: \"5a770426-b384-4fc7-acc0-fa42ff536a9b\") " pod="openstack/ironic-inspector-0" Dec 10 12:36:36 crc kubenswrapper[4689]: I1210 12:36:36.999360 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bvr4w\" (UniqueName: \"kubernetes.io/projected/5a770426-b384-4fc7-acc0-fa42ff536a9b-kube-api-access-bvr4w\") pod \"ironic-inspector-0\" (UID: \"5a770426-b384-4fc7-acc0-fa42ff536a9b\") " pod="openstack/ironic-inspector-0" Dec 10 12:36:36 crc kubenswrapper[4689]: I1210 12:36:36.999395 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5a770426-b384-4fc7-acc0-fa42ff536a9b-public-tls-certs\") pod \"ironic-inspector-0\" (UID: \"5a770426-b384-4fc7-acc0-fa42ff536a9b\") " pod="openstack/ironic-inspector-0" Dec 10 12:36:37 crc kubenswrapper[4689]: I1210 12:36:37.100923 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5a770426-b384-4fc7-acc0-fa42ff536a9b-combined-ca-bundle\") pod \"ironic-inspector-0\" (UID: \"5a770426-b384-4fc7-acc0-fa42ff536a9b\") " pod="openstack/ironic-inspector-0" Dec 10 12:36:37 crc kubenswrapper[4689]: I1210 12:36:37.101055 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5a770426-b384-4fc7-acc0-fa42ff536a9b-scripts\") pod \"ironic-inspector-0\" (UID: \"5a770426-b384-4fc7-acc0-fa42ff536a9b\") " pod="openstack/ironic-inspector-0" Dec 10 12:36:37 crc kubenswrapper[4689]: I1210 12:36:37.101104 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bvr4w\" (UniqueName: \"kubernetes.io/projected/5a770426-b384-4fc7-acc0-fa42ff536a9b-kube-api-access-bvr4w\") pod \"ironic-inspector-0\" (UID: \"5a770426-b384-4fc7-acc0-fa42ff536a9b\") " pod="openstack/ironic-inspector-0" Dec 10 12:36:37 crc kubenswrapper[4689]: I1210 12:36:37.101143 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5a770426-b384-4fc7-acc0-fa42ff536a9b-public-tls-certs\") pod \"ironic-inspector-0\" (UID: \"5a770426-b384-4fc7-acc0-fa42ff536a9b\") " pod="openstack/ironic-inspector-0" Dec 
10 12:36:37 crc kubenswrapper[4689]: I1210 12:36:37.101203 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-ironic\" (UniqueName: \"kubernetes.io/empty-dir/5a770426-b384-4fc7-acc0-fa42ff536a9b-var-lib-ironic\") pod \"ironic-inspector-0\" (UID: \"5a770426-b384-4fc7-acc0-fa42ff536a9b\") " pod="openstack/ironic-inspector-0" Dec 10 12:36:37 crc kubenswrapper[4689]: I1210 12:36:37.101243 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/5a770426-b384-4fc7-acc0-fa42ff536a9b-etc-podinfo\") pod \"ironic-inspector-0\" (UID: \"5a770426-b384-4fc7-acc0-fa42ff536a9b\") " pod="openstack/ironic-inspector-0" Dec 10 12:36:37 crc kubenswrapper[4689]: I1210 12:36:37.101269 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5a770426-b384-4fc7-acc0-fa42ff536a9b-internal-tls-certs\") pod \"ironic-inspector-0\" (UID: \"5a770426-b384-4fc7-acc0-fa42ff536a9b\") " pod="openstack/ironic-inspector-0" Dec 10 12:36:37 crc kubenswrapper[4689]: I1210 12:36:37.101307 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/5a770426-b384-4fc7-acc0-fa42ff536a9b-config\") pod \"ironic-inspector-0\" (UID: \"5a770426-b384-4fc7-acc0-fa42ff536a9b\") " pod="openstack/ironic-inspector-0" Dec 10 12:36:37 crc kubenswrapper[4689]: I1210 12:36:37.101337 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-ironic-inspector-dhcp-hostsdir\" (UniqueName: \"kubernetes.io/empty-dir/5a770426-b384-4fc7-acc0-fa42ff536a9b-var-lib-ironic-inspector-dhcp-hostsdir\") pod \"ironic-inspector-0\" (UID: \"5a770426-b384-4fc7-acc0-fa42ff536a9b\") " pod="openstack/ironic-inspector-0" Dec 10 12:36:37 crc kubenswrapper[4689]: I1210 12:36:37.101830 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-ironic-inspector-dhcp-hostsdir\" (UniqueName: \"kubernetes.io/empty-dir/5a770426-b384-4fc7-acc0-fa42ff536a9b-var-lib-ironic-inspector-dhcp-hostsdir\") pod \"ironic-inspector-0\" (UID: \"5a770426-b384-4fc7-acc0-fa42ff536a9b\") " pod="openstack/ironic-inspector-0" Dec 10 12:36:37 crc kubenswrapper[4689]: I1210 12:36:37.104448 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-ironic\" (UniqueName: \"kubernetes.io/empty-dir/5a770426-b384-4fc7-acc0-fa42ff536a9b-var-lib-ironic\") pod \"ironic-inspector-0\" (UID: \"5a770426-b384-4fc7-acc0-fa42ff536a9b\") " pod="openstack/ironic-inspector-0" Dec 10 12:36:37 crc kubenswrapper[4689]: I1210 12:36:37.108454 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5a770426-b384-4fc7-acc0-fa42ff536a9b-internal-tls-certs\") pod \"ironic-inspector-0\" (UID: \"5a770426-b384-4fc7-acc0-fa42ff536a9b\") " pod="openstack/ironic-inspector-0" Dec 10 12:36:37 crc kubenswrapper[4689]: I1210 12:36:37.108964 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5a770426-b384-4fc7-acc0-fa42ff536a9b-combined-ca-bundle\") pod \"ironic-inspector-0\" (UID: \"5a770426-b384-4fc7-acc0-fa42ff536a9b\") " pod="openstack/ironic-inspector-0" Dec 10 12:36:37 crc kubenswrapper[4689]: I1210 12:36:37.109301 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-podinfo\" (UniqueName: 
\"kubernetes.io/downward-api/5a770426-b384-4fc7-acc0-fa42ff536a9b-etc-podinfo\") pod \"ironic-inspector-0\" (UID: \"5a770426-b384-4fc7-acc0-fa42ff536a9b\") " pod="openstack/ironic-inspector-0" Dec 10 12:36:37 crc kubenswrapper[4689]: I1210 12:36:37.110342 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5a770426-b384-4fc7-acc0-fa42ff536a9b-public-tls-certs\") pod \"ironic-inspector-0\" (UID: \"5a770426-b384-4fc7-acc0-fa42ff536a9b\") " pod="openstack/ironic-inspector-0" Dec 10 12:36:37 crc kubenswrapper[4689]: I1210 12:36:37.113925 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/5a770426-b384-4fc7-acc0-fa42ff536a9b-config\") pod \"ironic-inspector-0\" (UID: \"5a770426-b384-4fc7-acc0-fa42ff536a9b\") " pod="openstack/ironic-inspector-0" Dec 10 12:36:37 crc kubenswrapper[4689]: I1210 12:36:37.114112 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5a770426-b384-4fc7-acc0-fa42ff536a9b-scripts\") pod \"ironic-inspector-0\" (UID: \"5a770426-b384-4fc7-acc0-fa42ff536a9b\") " pod="openstack/ironic-inspector-0" Dec 10 12:36:37 crc kubenswrapper[4689]: I1210 12:36:37.123441 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bvr4w\" (UniqueName: \"kubernetes.io/projected/5a770426-b384-4fc7-acc0-fa42ff536a9b-kube-api-access-bvr4w\") pod \"ironic-inspector-0\" (UID: \"5a770426-b384-4fc7-acc0-fa42ff536a9b\") " pod="openstack/ironic-inspector-0" Dec 10 12:36:37 crc kubenswrapper[4689]: I1210 12:36:37.196918 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-inspector-0" Dec 10 12:36:37 crc kubenswrapper[4689]: I1210 12:36:37.710888 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"3783c348-e04a-4246-ae21-d47d6bae3467","Type":"ContainerStarted","Data":"03f96ade42cde638deb82d8c21f7902b7c0f8e91ef399e96a5649b4afc2ed47b"} Dec 10 12:36:37 crc kubenswrapper[4689]: I1210 12:36:37.711275 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"3783c348-e04a-4246-ae21-d47d6bae3467","Type":"ContainerStarted","Data":"b2b65810bc765a95e26033d27a7625909c7719bc0a059bb5afe011ac85626ad2"} Dec 10 12:36:37 crc kubenswrapper[4689]: I1210 12:36:37.716270 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"ce240e41-0473-47c3-8349-854caa2baad2","Type":"ContainerStarted","Data":"3b73a01394ffa73b0ab152440bdc765677ae961bfd2ad04c1dd47e10feb865e3"} Dec 10 12:36:37 crc kubenswrapper[4689]: I1210 12:36:37.802257 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-inspector-0"] Dec 10 12:36:38 crc kubenswrapper[4689]: I1210 12:36:38.507705 4689 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7841c545-a50f-4add-8f2f-0ab8310938af" path="/var/lib/kubelet/pods/7841c545-a50f-4add-8f2f-0ab8310938af/volumes" Dec 10 12:36:38 crc kubenswrapper[4689]: I1210 12:36:38.725246 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"ce240e41-0473-47c3-8349-854caa2baad2","Type":"ContainerStarted","Data":"c4f79cabf731432da28e8d76a8188c689f789a6d2327e691117c4d090a962611"} Dec 10 12:36:38 crc kubenswrapper[4689]: I1210 12:36:38.725286 4689 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"ce240e41-0473-47c3-8349-854caa2baad2","Type":"ContainerStarted","Data":"2d97c3716379fe3dd33d6146a949db65dbd55a933792676f5454dd89229dd74c"} Dec 10 12:36:38 crc kubenswrapper[4689]: I1210 12:36:38.727369 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"03149d4d-fcc6-47bd-892e-91f14288acbf","Type":"ContainerStarted","Data":"ff3505e6359fbfb2067b9abf2e8ceaa6722949496c690ca95d417a6821e50988"} Dec 10 12:36:38 crc kubenswrapper[4689]: I1210 12:36:38.727505 4689 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="03149d4d-fcc6-47bd-892e-91f14288acbf" containerName="ceilometer-central-agent" containerID="cri-o://aa058ab1f1b1468372258e467c964141b3f1a98509967654ade82227147f357f" gracePeriod=30 Dec 10 12:36:38 crc kubenswrapper[4689]: I1210 12:36:38.727767 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Dec 10 12:36:38 crc kubenswrapper[4689]: I1210 12:36:38.727824 4689 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="03149d4d-fcc6-47bd-892e-91f14288acbf" containerName="proxy-httpd" containerID="cri-o://ff3505e6359fbfb2067b9abf2e8ceaa6722949496c690ca95d417a6821e50988" gracePeriod=30 Dec 10 12:36:38 crc kubenswrapper[4689]: I1210 12:36:38.727870 4689 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="03149d4d-fcc6-47bd-892e-91f14288acbf" containerName="sg-core" containerID="cri-o://e73aefdfa6147479d3d152cfbcbc25d66f9d70b2ea78a1c46b16292cef8ccf42" gracePeriod=30 Dec 10 12:36:38 crc kubenswrapper[4689]: I1210 12:36:38.727902 4689 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="03149d4d-fcc6-47bd-892e-91f14288acbf" containerName="ceilometer-notification-agent" containerID="cri-o://cdd90699e48de80cf56fce7610ec11d926c7f6f3a4877aa78fc5f17dc66f403d" gracePeriod=30 Dec 10 12:36:38 crc kubenswrapper[4689]: I1210 12:36:38.743619 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"3783c348-e04a-4246-ae21-d47d6bae3467","Type":"ContainerStarted","Data":"c9664d227747d146aa61b75b83abe204f6c163be72c090de3e143ae34bf9db2a"} Dec 10 12:36:38 crc kubenswrapper[4689]: I1210 12:36:38.756170 4689 generic.go:334] "Generic (PLEG): container finished" podID="5a770426-b384-4fc7-acc0-fa42ff536a9b" containerID="c99caff0ec91827d37881bb3668e1ea13ccc303f229d46d535e99d7a8a1e1c62" exitCode=0 Dec 10 12:36:38 crc kubenswrapper[4689]: I1210 12:36:38.756222 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-0" event={"ID":"5a770426-b384-4fc7-acc0-fa42ff536a9b","Type":"ContainerDied","Data":"c99caff0ec91827d37881bb3668e1ea13ccc303f229d46d535e99d7a8a1e1c62"} Dec 10 12:36:38 crc kubenswrapper[4689]: I1210 12:36:38.756246 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-0" event={"ID":"5a770426-b384-4fc7-acc0-fa42ff536a9b","Type":"ContainerStarted","Data":"2dda790a512421e7dd53e698742045fb9b077edd1b8b733dd715dd64ad9d4d27"} Dec 10 12:36:38 crc kubenswrapper[4689]: I1210 12:36:38.759614 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=3.75960293 podStartE2EDuration="3.75960293s" podCreationTimestamp="2025-12-10 12:36:35 +0000 
UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 12:36:38.759438295 +0000 UTC m=+1266.547519433" watchObservedRunningTime="2025-12-10 12:36:38.75960293 +0000 UTC m=+1266.547684068" Dec 10 12:36:38 crc kubenswrapper[4689]: I1210 12:36:38.806696 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.337062027 podStartE2EDuration="8.806680553s" podCreationTimestamp="2025-12-10 12:36:30 +0000 UTC" firstStartedPulling="2025-12-10 12:36:31.644751882 +0000 UTC m=+1259.432833020" lastFinishedPulling="2025-12-10 12:36:38.114370418 +0000 UTC m=+1265.902451546" observedRunningTime="2025-12-10 12:36:38.801546417 +0000 UTC m=+1266.589627555" watchObservedRunningTime="2025-12-10 12:36:38.806680553 +0000 UTC m=+1266.594761691" Dec 10 12:36:38 crc kubenswrapper[4689]: I1210 12:36:38.906812 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=3.906793217 podStartE2EDuration="3.906793217s" podCreationTimestamp="2025-12-10 12:36:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 12:36:38.854297729 +0000 UTC m=+1266.642378867" watchObservedRunningTime="2025-12-10 12:36:38.906793217 +0000 UTC m=+1266.694874355" Dec 10 12:36:39 crc kubenswrapper[4689]: I1210 12:36:39.787744 4689 generic.go:334] "Generic (PLEG): container finished" podID="5a770426-b384-4fc7-acc0-fa42ff536a9b" containerID="50373d45b5af679214fa940006089fc4d761acfe7ec27842b91f4a38bba5fa3e" exitCode=0 Dec 10 12:36:39 crc kubenswrapper[4689]: I1210 12:36:39.788293 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-0" event={"ID":"5a770426-b384-4fc7-acc0-fa42ff536a9b","Type":"ContainerDied","Data":"50373d45b5af679214fa940006089fc4d761acfe7ec27842b91f4a38bba5fa3e"} Dec 10 12:36:39 crc kubenswrapper[4689]: I1210 12:36:39.793381 4689 generic.go:334] "Generic (PLEG): container finished" podID="03149d4d-fcc6-47bd-892e-91f14288acbf" containerID="ff3505e6359fbfb2067b9abf2e8ceaa6722949496c690ca95d417a6821e50988" exitCode=0 Dec 10 12:36:39 crc kubenswrapper[4689]: I1210 12:36:39.793411 4689 generic.go:334] "Generic (PLEG): container finished" podID="03149d4d-fcc6-47bd-892e-91f14288acbf" containerID="e73aefdfa6147479d3d152cfbcbc25d66f9d70b2ea78a1c46b16292cef8ccf42" exitCode=2 Dec 10 12:36:39 crc kubenswrapper[4689]: I1210 12:36:39.793420 4689 generic.go:334] "Generic (PLEG): container finished" podID="03149d4d-fcc6-47bd-892e-91f14288acbf" containerID="cdd90699e48de80cf56fce7610ec11d926c7f6f3a4877aa78fc5f17dc66f403d" exitCode=0 Dec 10 12:36:39 crc kubenswrapper[4689]: I1210 12:36:39.793613 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"03149d4d-fcc6-47bd-892e-91f14288acbf","Type":"ContainerDied","Data":"ff3505e6359fbfb2067b9abf2e8ceaa6722949496c690ca95d417a6821e50988"} Dec 10 12:36:39 crc kubenswrapper[4689]: I1210 12:36:39.793641 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"03149d4d-fcc6-47bd-892e-91f14288acbf","Type":"ContainerDied","Data":"e73aefdfa6147479d3d152cfbcbc25d66f9d70b2ea78a1c46b16292cef8ccf42"} Dec 10 12:36:39 crc kubenswrapper[4689]: I1210 12:36:39.793651 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"03149d4d-fcc6-47bd-892e-91f14288acbf","Type":"ContainerDied","Data":"cdd90699e48de80cf56fce7610ec11d926c7f6f3a4877aa78fc5f17dc66f403d"} Dec 10 12:36:40 crc kubenswrapper[4689]: I1210 12:36:40.809429 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-0" event={"ID":"5a770426-b384-4fc7-acc0-fa42ff536a9b","Type":"ContainerStarted","Data":"4f8d61ba14744661d73e865548e5ca5f4f763aae1476e79823ddd9abde1b5495"} Dec 10 12:36:40 crc kubenswrapper[4689]: I1210 12:36:40.957812 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ironic-neutron-agent-6f4566d7bf-hkx2g" Dec 10 12:36:41 crc kubenswrapper[4689]: I1210 12:36:41.831365 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-0" event={"ID":"5a770426-b384-4fc7-acc0-fa42ff536a9b","Type":"ContainerStarted","Data":"fad6bc304e3bc63d91cdad1291ecabc90ec8b160bb33f1e7cb3fc6ef8ff1b91c"} Dec 10 12:36:41 crc kubenswrapper[4689]: I1210 12:36:41.831677 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-0" event={"ID":"5a770426-b384-4fc7-acc0-fa42ff536a9b","Type":"ContainerStarted","Data":"96db0bda37fd4aa54c86f3262749e911ffe942ff6454c4e312d6fb27a1ce2d6a"} Dec 10 12:36:42 crc kubenswrapper[4689]: I1210 12:36:42.842457 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-0" event={"ID":"5a770426-b384-4fc7-acc0-fa42ff536a9b","Type":"ContainerStarted","Data":"1b8ea8cc69f130739531952a35206671fa8172b07b6e18444f5bdd1bef90963c"} Dec 10 12:36:42 crc kubenswrapper[4689]: I1210 12:36:42.843959 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ironic-inspector-0" Dec 10 12:36:42 crc kubenswrapper[4689]: I1210 12:36:42.872576 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ironic-inspector-0" podStartSLOduration=6.872559129 podStartE2EDuration="6.872559129s" podCreationTimestamp="2025-12-10 12:36:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 12:36:42.868896769 +0000 UTC m=+1270.656977927" watchObservedRunningTime="2025-12-10 12:36:42.872559129 +0000 UTC m=+1270.660640257" Dec 10 12:36:43 crc kubenswrapper[4689]: I1210 12:36:43.863126 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-0" event={"ID":"5a770426-b384-4fc7-acc0-fa42ff536a9b","Type":"ContainerDied","Data":"96db0bda37fd4aa54c86f3262749e911ffe942ff6454c4e312d6fb27a1ce2d6a"} Dec 10 12:36:43 crc kubenswrapper[4689]: I1210 12:36:43.863606 4689 generic.go:334] "Generic (PLEG): container finished" podID="5a770426-b384-4fc7-acc0-fa42ff536a9b" containerID="96db0bda37fd4aa54c86f3262749e911ffe942ff6454c4e312d6fb27a1ce2d6a" exitCode=0 Dec 10 12:36:43 crc kubenswrapper[4689]: I1210 12:36:43.865703 4689 scope.go:117] "RemoveContainer" containerID="96db0bda37fd4aa54c86f3262749e911ffe942ff6454c4e312d6fb27a1ce2d6a" Dec 10 12:36:44 crc kubenswrapper[4689]: I1210 12:36:44.881915 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-0" event={"ID":"5a770426-b384-4fc7-acc0-fa42ff536a9b","Type":"ContainerStarted","Data":"0d2bfb42888ef654c19089071408f308b195c617df024cf1bbe1e92e783d80fb"} Dec 10 12:36:45 crc kubenswrapper[4689]: I1210 12:36:45.893055 4689 generic.go:334] "Generic (PLEG): container finished" podID="03149d4d-fcc6-47bd-892e-91f14288acbf" 
containerID="aa058ab1f1b1468372258e467c964141b3f1a98509967654ade82227147f357f" exitCode=0 Dec 10 12:36:45 crc kubenswrapper[4689]: I1210 12:36:45.893240 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"03149d4d-fcc6-47bd-892e-91f14288acbf","Type":"ContainerDied","Data":"aa058ab1f1b1468372258e467c964141b3f1a98509967654ade82227147f357f"} Dec 10 12:36:46 crc kubenswrapper[4689]: I1210 12:36:46.165380 4689 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Dec 10 12:36:46 crc kubenswrapper[4689]: I1210 12:36:46.165662 4689 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Dec 10 12:36:46 crc kubenswrapper[4689]: I1210 12:36:46.191349 4689 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Dec 10 12:36:46 crc kubenswrapper[4689]: I1210 12:36:46.191399 4689 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Dec 10 12:36:46 crc kubenswrapper[4689]: I1210 12:36:46.212662 4689 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Dec 10 12:36:46 crc kubenswrapper[4689]: I1210 12:36:46.228635 4689 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Dec 10 12:36:46 crc kubenswrapper[4689]: I1210 12:36:46.232933 4689 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Dec 10 12:36:46 crc kubenswrapper[4689]: I1210 12:36:46.244885 4689 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Dec 10 12:36:46 crc kubenswrapper[4689]: I1210 12:36:46.276355 4689 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 10 12:36:46 crc kubenswrapper[4689]: I1210 12:36:46.402635 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/03149d4d-fcc6-47bd-892e-91f14288acbf-log-httpd\") pod \"03149d4d-fcc6-47bd-892e-91f14288acbf\" (UID: \"03149d4d-fcc6-47bd-892e-91f14288acbf\") " Dec 10 12:36:46 crc kubenswrapper[4689]: I1210 12:36:46.402687 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/03149d4d-fcc6-47bd-892e-91f14288acbf-combined-ca-bundle\") pod \"03149d4d-fcc6-47bd-892e-91f14288acbf\" (UID: \"03149d4d-fcc6-47bd-892e-91f14288acbf\") " Dec 10 12:36:46 crc kubenswrapper[4689]: I1210 12:36:46.402789 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/03149d4d-fcc6-47bd-892e-91f14288acbf-scripts\") pod \"03149d4d-fcc6-47bd-892e-91f14288acbf\" (UID: \"03149d4d-fcc6-47bd-892e-91f14288acbf\") " Dec 10 12:36:46 crc kubenswrapper[4689]: I1210 12:36:46.402900 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/03149d4d-fcc6-47bd-892e-91f14288acbf-sg-core-conf-yaml\") pod \"03149d4d-fcc6-47bd-892e-91f14288acbf\" (UID: \"03149d4d-fcc6-47bd-892e-91f14288acbf\") " Dec 10 12:36:46 crc kubenswrapper[4689]: I1210 12:36:46.402937 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xw28w\" (UniqueName: \"kubernetes.io/projected/03149d4d-fcc6-47bd-892e-91f14288acbf-kube-api-access-xw28w\") pod \"03149d4d-fcc6-47bd-892e-91f14288acbf\" (UID: \"03149d4d-fcc6-47bd-892e-91f14288acbf\") " Dec 10 12:36:46 crc kubenswrapper[4689]: I1210 12:36:46.403017 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/03149d4d-fcc6-47bd-892e-91f14288acbf-run-httpd\") pod \"03149d4d-fcc6-47bd-892e-91f14288acbf\" (UID: \"03149d4d-fcc6-47bd-892e-91f14288acbf\") " Dec 10 12:36:46 crc kubenswrapper[4689]: I1210 12:36:46.403075 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/03149d4d-fcc6-47bd-892e-91f14288acbf-config-data\") pod \"03149d4d-fcc6-47bd-892e-91f14288acbf\" (UID: \"03149d4d-fcc6-47bd-892e-91f14288acbf\") " Dec 10 12:36:46 crc kubenswrapper[4689]: I1210 12:36:46.403495 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/03149d4d-fcc6-47bd-892e-91f14288acbf-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "03149d4d-fcc6-47bd-892e-91f14288acbf" (UID: "03149d4d-fcc6-47bd-892e-91f14288acbf"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 10 12:36:46 crc kubenswrapper[4689]: I1210 12:36:46.403891 4689 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/03149d4d-fcc6-47bd-892e-91f14288acbf-run-httpd\") on node \"crc\" DevicePath \"\"" Dec 10 12:36:46 crc kubenswrapper[4689]: I1210 12:36:46.404151 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/03149d4d-fcc6-47bd-892e-91f14288acbf-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "03149d4d-fcc6-47bd-892e-91f14288acbf" (UID: "03149d4d-fcc6-47bd-892e-91f14288acbf"). 
InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 10 12:36:46 crc kubenswrapper[4689]: I1210 12:36:46.409133 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/03149d4d-fcc6-47bd-892e-91f14288acbf-scripts" (OuterVolumeSpecName: "scripts") pod "03149d4d-fcc6-47bd-892e-91f14288acbf" (UID: "03149d4d-fcc6-47bd-892e-91f14288acbf"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 12:36:46 crc kubenswrapper[4689]: I1210 12:36:46.421231 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/03149d4d-fcc6-47bd-892e-91f14288acbf-kube-api-access-xw28w" (OuterVolumeSpecName: "kube-api-access-xw28w") pod "03149d4d-fcc6-47bd-892e-91f14288acbf" (UID: "03149d4d-fcc6-47bd-892e-91f14288acbf"). InnerVolumeSpecName "kube-api-access-xw28w". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 12:36:46 crc kubenswrapper[4689]: I1210 12:36:46.453626 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/03149d4d-fcc6-47bd-892e-91f14288acbf-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "03149d4d-fcc6-47bd-892e-91f14288acbf" (UID: "03149d4d-fcc6-47bd-892e-91f14288acbf"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 12:36:46 crc kubenswrapper[4689]: I1210 12:36:46.496580 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/03149d4d-fcc6-47bd-892e-91f14288acbf-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "03149d4d-fcc6-47bd-892e-91f14288acbf" (UID: "03149d4d-fcc6-47bd-892e-91f14288acbf"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 12:36:46 crc kubenswrapper[4689]: I1210 12:36:46.505365 4689 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/03149d4d-fcc6-47bd-892e-91f14288acbf-scripts\") on node \"crc\" DevicePath \"\"" Dec 10 12:36:46 crc kubenswrapper[4689]: I1210 12:36:46.505394 4689 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/03149d4d-fcc6-47bd-892e-91f14288acbf-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Dec 10 12:36:46 crc kubenswrapper[4689]: I1210 12:36:46.505411 4689 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xw28w\" (UniqueName: \"kubernetes.io/projected/03149d4d-fcc6-47bd-892e-91f14288acbf-kube-api-access-xw28w\") on node \"crc\" DevicePath \"\"" Dec 10 12:36:46 crc kubenswrapper[4689]: I1210 12:36:46.505422 4689 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/03149d4d-fcc6-47bd-892e-91f14288acbf-log-httpd\") on node \"crc\" DevicePath \"\"" Dec 10 12:36:46 crc kubenswrapper[4689]: I1210 12:36:46.505435 4689 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/03149d4d-fcc6-47bd-892e-91f14288acbf-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 10 12:36:46 crc kubenswrapper[4689]: I1210 12:36:46.514658 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/03149d4d-fcc6-47bd-892e-91f14288acbf-config-data" (OuterVolumeSpecName: "config-data") pod "03149d4d-fcc6-47bd-892e-91f14288acbf" (UID: "03149d4d-fcc6-47bd-892e-91f14288acbf"). 
InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 12:36:46 crc kubenswrapper[4689]: I1210 12:36:46.607226 4689 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/03149d4d-fcc6-47bd-892e-91f14288acbf-config-data\") on node \"crc\" DevicePath \"\"" Dec 10 12:36:46 crc kubenswrapper[4689]: I1210 12:36:46.906809 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"03149d4d-fcc6-47bd-892e-91f14288acbf","Type":"ContainerDied","Data":"3e35a7ed967253d1cd6befa1f65438339530613e9665c4782ac813164de148fc"} Dec 10 12:36:46 crc kubenswrapper[4689]: I1210 12:36:46.906884 4689 scope.go:117] "RemoveContainer" containerID="ff3505e6359fbfb2067b9abf2e8ceaa6722949496c690ca95d417a6821e50988" Dec 10 12:36:46 crc kubenswrapper[4689]: I1210 12:36:46.907444 4689 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 10 12:36:46 crc kubenswrapper[4689]: I1210 12:36:46.908249 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Dec 10 12:36:46 crc kubenswrapper[4689]: I1210 12:36:46.908282 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Dec 10 12:36:46 crc kubenswrapper[4689]: I1210 12:36:46.908324 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Dec 10 12:36:46 crc kubenswrapper[4689]: I1210 12:36:46.908339 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Dec 10 12:36:46 crc kubenswrapper[4689]: I1210 12:36:46.944130 4689 scope.go:117] "RemoveContainer" containerID="e73aefdfa6147479d3d152cfbcbc25d66f9d70b2ea78a1c46b16292cef8ccf42" Dec 10 12:36:46 crc kubenswrapper[4689]: I1210 12:36:46.981470 4689 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 10 12:36:46 crc kubenswrapper[4689]: I1210 12:36:46.987747 4689 scope.go:117] "RemoveContainer" containerID="cdd90699e48de80cf56fce7610ec11d926c7f6f3a4877aa78fc5f17dc66f403d" Dec 10 12:36:46 crc kubenswrapper[4689]: I1210 12:36:46.989838 4689 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Dec 10 12:36:47 crc kubenswrapper[4689]: I1210 12:36:47.008087 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Dec 10 12:36:47 crc kubenswrapper[4689]: E1210 12:36:47.008522 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="03149d4d-fcc6-47bd-892e-91f14288acbf" containerName="sg-core" Dec 10 12:36:47 crc kubenswrapper[4689]: I1210 12:36:47.008534 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="03149d4d-fcc6-47bd-892e-91f14288acbf" containerName="sg-core" Dec 10 12:36:47 crc kubenswrapper[4689]: E1210 12:36:47.008559 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="03149d4d-fcc6-47bd-892e-91f14288acbf" containerName="ceilometer-notification-agent" Dec 10 12:36:47 crc kubenswrapper[4689]: I1210 12:36:47.008565 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="03149d4d-fcc6-47bd-892e-91f14288acbf" containerName="ceilometer-notification-agent" Dec 10 12:36:47 crc kubenswrapper[4689]: E1210 12:36:47.008583 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="03149d4d-fcc6-47bd-892e-91f14288acbf" containerName="proxy-httpd" Dec 10 12:36:47 crc kubenswrapper[4689]: 
I1210 12:36:47.008589 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="03149d4d-fcc6-47bd-892e-91f14288acbf" containerName="proxy-httpd" Dec 10 12:36:47 crc kubenswrapper[4689]: E1210 12:36:47.008605 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="03149d4d-fcc6-47bd-892e-91f14288acbf" containerName="ceilometer-central-agent" Dec 10 12:36:47 crc kubenswrapper[4689]: I1210 12:36:47.008610 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="03149d4d-fcc6-47bd-892e-91f14288acbf" containerName="ceilometer-central-agent" Dec 10 12:36:47 crc kubenswrapper[4689]: I1210 12:36:47.008786 4689 memory_manager.go:354] "RemoveStaleState removing state" podUID="03149d4d-fcc6-47bd-892e-91f14288acbf" containerName="sg-core" Dec 10 12:36:47 crc kubenswrapper[4689]: I1210 12:36:47.008801 4689 memory_manager.go:354] "RemoveStaleState removing state" podUID="03149d4d-fcc6-47bd-892e-91f14288acbf" containerName="ceilometer-notification-agent" Dec 10 12:36:47 crc kubenswrapper[4689]: I1210 12:36:47.008812 4689 memory_manager.go:354] "RemoveStaleState removing state" podUID="03149d4d-fcc6-47bd-892e-91f14288acbf" containerName="proxy-httpd" Dec 10 12:36:47 crc kubenswrapper[4689]: I1210 12:36:47.008824 4689 memory_manager.go:354] "RemoveStaleState removing state" podUID="03149d4d-fcc6-47bd-892e-91f14288acbf" containerName="ceilometer-central-agent" Dec 10 12:36:47 crc kubenswrapper[4689]: I1210 12:36:47.010781 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 10 12:36:47 crc kubenswrapper[4689]: I1210 12:36:47.013076 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Dec 10 12:36:47 crc kubenswrapper[4689]: I1210 12:36:47.013088 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Dec 10 12:36:47 crc kubenswrapper[4689]: I1210 12:36:47.020872 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 10 12:36:47 crc kubenswrapper[4689]: I1210 12:36:47.033241 4689 scope.go:117] "RemoveContainer" containerID="aa058ab1f1b1468372258e467c964141b3f1a98509967654ade82227147f357f" Dec 10 12:36:47 crc kubenswrapper[4689]: I1210 12:36:47.138684 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/1e43da96-bc95-4f10-ad40-c4c5c32fac22-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"1e43da96-bc95-4f10-ad40-c4c5c32fac22\") " pod="openstack/ceilometer-0" Dec 10 12:36:47 crc kubenswrapper[4689]: I1210 12:36:47.138750 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1e43da96-bc95-4f10-ad40-c4c5c32fac22-config-data\") pod \"ceilometer-0\" (UID: \"1e43da96-bc95-4f10-ad40-c4c5c32fac22\") " pod="openstack/ceilometer-0" Dec 10 12:36:47 crc kubenswrapper[4689]: I1210 12:36:47.138797 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1e43da96-bc95-4f10-ad40-c4c5c32fac22-log-httpd\") pod \"ceilometer-0\" (UID: \"1e43da96-bc95-4f10-ad40-c4c5c32fac22\") " pod="openstack/ceilometer-0" Dec 10 12:36:47 crc kubenswrapper[4689]: I1210 12:36:47.138843 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/1e43da96-bc95-4f10-ad40-c4c5c32fac22-run-httpd\") pod \"ceilometer-0\" (UID: \"1e43da96-bc95-4f10-ad40-c4c5c32fac22\") " pod="openstack/ceilometer-0" Dec 10 12:36:47 crc kubenswrapper[4689]: I1210 12:36:47.138941 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1e43da96-bc95-4f10-ad40-c4c5c32fac22-scripts\") pod \"ceilometer-0\" (UID: \"1e43da96-bc95-4f10-ad40-c4c5c32fac22\") " pod="openstack/ceilometer-0" Dec 10 12:36:47 crc kubenswrapper[4689]: I1210 12:36:47.139004 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cspbn\" (UniqueName: \"kubernetes.io/projected/1e43da96-bc95-4f10-ad40-c4c5c32fac22-kube-api-access-cspbn\") pod \"ceilometer-0\" (UID: \"1e43da96-bc95-4f10-ad40-c4c5c32fac22\") " pod="openstack/ceilometer-0" Dec 10 12:36:47 crc kubenswrapper[4689]: I1210 12:36:47.139028 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1e43da96-bc95-4f10-ad40-c4c5c32fac22-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"1e43da96-bc95-4f10-ad40-c4c5c32fac22\") " pod="openstack/ceilometer-0" Dec 10 12:36:47 crc kubenswrapper[4689]: I1210 12:36:47.199722 4689 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ironic-inspector-0" Dec 10 12:36:47 crc kubenswrapper[4689]: I1210 12:36:47.199763 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ironic-inspector-0" Dec 10 12:36:47 crc kubenswrapper[4689]: I1210 12:36:47.199774 4689 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ironic-inspector-0" Dec 10 12:36:47 crc kubenswrapper[4689]: I1210 12:36:47.199785 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ironic-inspector-0" Dec 10 12:36:47 crc kubenswrapper[4689]: I1210 12:36:47.202126 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ironic-inspector-0" Dec 10 12:36:47 crc kubenswrapper[4689]: I1210 12:36:47.246079 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/1e43da96-bc95-4f10-ad40-c4c5c32fac22-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"1e43da96-bc95-4f10-ad40-c4c5c32fac22\") " pod="openstack/ceilometer-0" Dec 10 12:36:47 crc kubenswrapper[4689]: I1210 12:36:47.246463 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1e43da96-bc95-4f10-ad40-c4c5c32fac22-config-data\") pod \"ceilometer-0\" (UID: \"1e43da96-bc95-4f10-ad40-c4c5c32fac22\") " pod="openstack/ceilometer-0" Dec 10 12:36:47 crc kubenswrapper[4689]: I1210 12:36:47.246588 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1e43da96-bc95-4f10-ad40-c4c5c32fac22-log-httpd\") pod \"ceilometer-0\" (UID: \"1e43da96-bc95-4f10-ad40-c4c5c32fac22\") " pod="openstack/ceilometer-0" Dec 10 12:36:47 crc kubenswrapper[4689]: I1210 12:36:47.246723 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1e43da96-bc95-4f10-ad40-c4c5c32fac22-run-httpd\") pod \"ceilometer-0\" (UID: \"1e43da96-bc95-4f10-ad40-c4c5c32fac22\") " 
pod="openstack/ceilometer-0" Dec 10 12:36:47 crc kubenswrapper[4689]: I1210 12:36:47.246903 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1e43da96-bc95-4f10-ad40-c4c5c32fac22-scripts\") pod \"ceilometer-0\" (UID: \"1e43da96-bc95-4f10-ad40-c4c5c32fac22\") " pod="openstack/ceilometer-0" Dec 10 12:36:47 crc kubenswrapper[4689]: I1210 12:36:47.247048 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cspbn\" (UniqueName: \"kubernetes.io/projected/1e43da96-bc95-4f10-ad40-c4c5c32fac22-kube-api-access-cspbn\") pod \"ceilometer-0\" (UID: \"1e43da96-bc95-4f10-ad40-c4c5c32fac22\") " pod="openstack/ceilometer-0" Dec 10 12:36:47 crc kubenswrapper[4689]: I1210 12:36:47.247151 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1e43da96-bc95-4f10-ad40-c4c5c32fac22-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"1e43da96-bc95-4f10-ad40-c4c5c32fac22\") " pod="openstack/ceilometer-0" Dec 10 12:36:47 crc kubenswrapper[4689]: I1210 12:36:47.250231 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1e43da96-bc95-4f10-ad40-c4c5c32fac22-run-httpd\") pod \"ceilometer-0\" (UID: \"1e43da96-bc95-4f10-ad40-c4c5c32fac22\") " pod="openstack/ceilometer-0" Dec 10 12:36:47 crc kubenswrapper[4689]: I1210 12:36:47.251882 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1e43da96-bc95-4f10-ad40-c4c5c32fac22-log-httpd\") pod \"ceilometer-0\" (UID: \"1e43da96-bc95-4f10-ad40-c4c5c32fac22\") " pod="openstack/ceilometer-0" Dec 10 12:36:47 crc kubenswrapper[4689]: I1210 12:36:47.252276 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/1e43da96-bc95-4f10-ad40-c4c5c32fac22-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"1e43da96-bc95-4f10-ad40-c4c5c32fac22\") " pod="openstack/ceilometer-0" Dec 10 12:36:47 crc kubenswrapper[4689]: I1210 12:36:47.253577 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1e43da96-bc95-4f10-ad40-c4c5c32fac22-config-data\") pod \"ceilometer-0\" (UID: \"1e43da96-bc95-4f10-ad40-c4c5c32fac22\") " pod="openstack/ceilometer-0" Dec 10 12:36:47 crc kubenswrapper[4689]: I1210 12:36:47.254263 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1e43da96-bc95-4f10-ad40-c4c5c32fac22-scripts\") pod \"ceilometer-0\" (UID: \"1e43da96-bc95-4f10-ad40-c4c5c32fac22\") " pod="openstack/ceilometer-0" Dec 10 12:36:47 crc kubenswrapper[4689]: I1210 12:36:47.255856 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1e43da96-bc95-4f10-ad40-c4c5c32fac22-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"1e43da96-bc95-4f10-ad40-c4c5c32fac22\") " pod="openstack/ceilometer-0" Dec 10 12:36:47 crc kubenswrapper[4689]: I1210 12:36:47.267711 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cspbn\" (UniqueName: \"kubernetes.io/projected/1e43da96-bc95-4f10-ad40-c4c5c32fac22-kube-api-access-cspbn\") pod \"ceilometer-0\" (UID: \"1e43da96-bc95-4f10-ad40-c4c5c32fac22\") " pod="openstack/ceilometer-0" Dec 10 12:36:47 crc 
kubenswrapper[4689]: I1210 12:36:47.324367 4689 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/ironic-inspector-0" podUID="5a770426-b384-4fc7-acc0-fa42ff536a9b" containerName="ironic-inspector" probeResult="failure" output="HTTP probe failed with statuscode: 502" Dec 10 12:36:47 crc kubenswrapper[4689]: I1210 12:36:47.324722 4689 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/ironic-inspector-0" podUID="5a770426-b384-4fc7-acc0-fa42ff536a9b" containerName="ironic-inspector-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 502" Dec 10 12:36:47 crc kubenswrapper[4689]: I1210 12:36:47.352219 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 10 12:36:47 crc kubenswrapper[4689]: I1210 12:36:47.899311 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 10 12:36:47 crc kubenswrapper[4689]: I1210 12:36:47.919225 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1e43da96-bc95-4f10-ad40-c4c5c32fac22","Type":"ContainerStarted","Data":"de99cf2e7b05f968bbecb80bc0f54f74ca4ac215ca1d3dee8277e2f86027bd6c"} Dec 10 12:36:48 crc kubenswrapper[4689]: I1210 12:36:48.513929 4689 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="03149d4d-fcc6-47bd-892e-91f14288acbf" path="/var/lib/kubelet/pods/03149d4d-fcc6-47bd-892e-91f14288acbf/volumes" Dec 10 12:36:49 crc kubenswrapper[4689]: I1210 12:36:49.064436 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Dec 10 12:36:49 crc kubenswrapper[4689]: I1210 12:36:49.064552 4689 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 10 12:36:49 crc kubenswrapper[4689]: I1210 12:36:49.150125 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Dec 10 12:36:49 crc kubenswrapper[4689]: I1210 12:36:49.385918 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Dec 10 12:36:49 crc kubenswrapper[4689]: I1210 12:36:49.386044 4689 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 10 12:36:49 crc kubenswrapper[4689]: I1210 12:36:49.389610 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Dec 10 12:36:51 crc kubenswrapper[4689]: I1210 12:36:51.021368 4689 generic.go:334] "Generic (PLEG): container finished" podID="5a770426-b384-4fc7-acc0-fa42ff536a9b" containerID="0d2bfb42888ef654c19089071408f308b195c617df024cf1bbe1e92e783d80fb" exitCode=0 Dec 10 12:36:51 crc kubenswrapper[4689]: I1210 12:36:51.021541 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-0" event={"ID":"5a770426-b384-4fc7-acc0-fa42ff536a9b","Type":"ContainerDied","Data":"0d2bfb42888ef654c19089071408f308b195c617df024cf1bbe1e92e783d80fb"} Dec 10 12:36:51 crc kubenswrapper[4689]: I1210 12:36:51.021909 4689 scope.go:117] "RemoveContainer" containerID="96db0bda37fd4aa54c86f3262749e911ffe942ff6454c4e312d6fb27a1ce2d6a" Dec 10 12:36:51 crc kubenswrapper[4689]: I1210 12:36:51.022679 4689 scope.go:117] "RemoveContainer" containerID="0d2bfb42888ef654c19089071408f308b195c617df024cf1bbe1e92e783d80fb" Dec 10 12:36:51 crc kubenswrapper[4689]: E1210 12:36:51.023028 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ironic-inspector\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ironic-inspector pod=ironic-inspector-0_openstack(5a770426-b384-4fc7-acc0-fa42ff536a9b)\"" pod="openstack/ironic-inspector-0" podUID="5a770426-b384-4fc7-acc0-fa42ff536a9b"
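
ironic-inspector-0's startup probes are returning HTTP 502 while its main container keeps exiting, so the kubelet parks the container in CrashLoopBackOff, starting at the logged back-off 10s. By kubelet's documented default behavior the delay doubles on each subsequent failed restart up to a five-minute cap; the toy model below reproduces that schedule (a sketch of the policy, not kubelet source):

    package main

    import (
        "fmt"
        "time"
    )

    // Toy model of the CrashLoopBackOff schedule: 10s initial delay,
    // doubling per failed restart, capped at 5m (the stock kubelet
    // defaults; this models the policy, it is not kubelet code).
    func crashLoopDelay(failedRestarts int) time.Duration {
        delay, ceiling := 10*time.Second, 5*time.Minute
        for i := 0; i < failedRestarts; i++ {
            delay *= 2
            if delay >= ceiling {
                return ceiling
            }
        }
        return delay
    }

    func main() {
        for r := 0; r <= 6; r++ {
            fmt.Printf("failed restarts: %d -> next back-off: %v\n", r, crashLoopDelay(r))
        }
    }

The back-off resets once a container has run cleanly for long enough (ten minutes by default), which is why short flaps like this one rarely reach the cap.
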
\"ironic-inspector\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ironic-inspector pod=ironic-inspector-0_openstack(5a770426-b384-4fc7-acc0-fa42ff536a9b)\"" pod="openstack/ironic-inspector-0" podUID="5a770426-b384-4fc7-acc0-fa42ff536a9b" Dec 10 12:36:52 crc kubenswrapper[4689]: I1210 12:36:52.035086 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1e43da96-bc95-4f10-ad40-c4c5c32fac22","Type":"ContainerStarted","Data":"503c2a5ed0fb7eb53e9fcebee1e175cfdfbbe5d6f7e530e3f4b51d690048b11d"} Dec 10 12:36:52 crc kubenswrapper[4689]: I1210 12:36:52.198558 4689 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/ironic-inspector-0" Dec 10 12:36:52 crc kubenswrapper[4689]: I1210 12:36:52.199513 4689 scope.go:117] "RemoveContainer" containerID="0d2bfb42888ef654c19089071408f308b195c617df024cf1bbe1e92e783d80fb" Dec 10 12:36:52 crc kubenswrapper[4689]: E1210 12:36:52.199819 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ironic-inspector\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ironic-inspector pod=ironic-inspector-0_openstack(5a770426-b384-4fc7-acc0-fa42ff536a9b)\"" pod="openstack/ironic-inspector-0" podUID="5a770426-b384-4fc7-acc0-fa42ff536a9b" Dec 10 12:36:52 crc kubenswrapper[4689]: I1210 12:36:52.938055 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-db-create-7fsv5"] Dec 10 12:36:52 crc kubenswrapper[4689]: I1210 12:36:52.939494 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-7fsv5" Dec 10 12:36:52 crc kubenswrapper[4689]: I1210 12:36:52.958196 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-7fsv5"] Dec 10 12:36:52 crc kubenswrapper[4689]: I1210 12:36:52.982093 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/eddd02fa-d59c-407a-80d9-6dfe1066ac88-operator-scripts\") pod \"nova-api-db-create-7fsv5\" (UID: \"eddd02fa-d59c-407a-80d9-6dfe1066ac88\") " pod="openstack/nova-api-db-create-7fsv5" Dec 10 12:36:52 crc kubenswrapper[4689]: I1210 12:36:52.982311 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v2snp\" (UniqueName: \"kubernetes.io/projected/eddd02fa-d59c-407a-80d9-6dfe1066ac88-kube-api-access-v2snp\") pod \"nova-api-db-create-7fsv5\" (UID: \"eddd02fa-d59c-407a-80d9-6dfe1066ac88\") " pod="openstack/nova-api-db-create-7fsv5" Dec 10 12:36:53 crc kubenswrapper[4689]: I1210 12:36:53.033033 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-db-create-wqhs6"] Dec 10 12:36:53 crc kubenswrapper[4689]: I1210 12:36:53.034453 4689 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-wqhs6" Dec 10 12:36:53 crc kubenswrapper[4689]: I1210 12:36:53.053716 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1e43da96-bc95-4f10-ad40-c4c5c32fac22","Type":"ContainerStarted","Data":"af4f19a38a9bfbfc03cfd8e96effa80638f0a87cd4b83256aadeaa165c4254b3"} Dec 10 12:36:53 crc kubenswrapper[4689]: I1210 12:36:53.055819 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-wqhs6"] Dec 10 12:36:53 crc kubenswrapper[4689]: I1210 12:36:53.084161 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/eddd02fa-d59c-407a-80d9-6dfe1066ac88-operator-scripts\") pod \"nova-api-db-create-7fsv5\" (UID: \"eddd02fa-d59c-407a-80d9-6dfe1066ac88\") " pod="openstack/nova-api-db-create-7fsv5" Dec 10 12:36:53 crc kubenswrapper[4689]: I1210 12:36:53.084221 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c4d9408c-062f-45ad-a393-da20c66d7d40-operator-scripts\") pod \"nova-cell0-db-create-wqhs6\" (UID: \"c4d9408c-062f-45ad-a393-da20c66d7d40\") " pod="openstack/nova-cell0-db-create-wqhs6" Dec 10 12:36:53 crc kubenswrapper[4689]: I1210 12:36:53.084299 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v2snp\" (UniqueName: \"kubernetes.io/projected/eddd02fa-d59c-407a-80d9-6dfe1066ac88-kube-api-access-v2snp\") pod \"nova-api-db-create-7fsv5\" (UID: \"eddd02fa-d59c-407a-80d9-6dfe1066ac88\") " pod="openstack/nova-api-db-create-7fsv5" Dec 10 12:36:53 crc kubenswrapper[4689]: I1210 12:36:53.084388 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j9nvz\" (UniqueName: \"kubernetes.io/projected/c4d9408c-062f-45ad-a393-da20c66d7d40-kube-api-access-j9nvz\") pod \"nova-cell0-db-create-wqhs6\" (UID: \"c4d9408c-062f-45ad-a393-da20c66d7d40\") " pod="openstack/nova-cell0-db-create-wqhs6" Dec 10 12:36:53 crc kubenswrapper[4689]: I1210 12:36:53.086336 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/eddd02fa-d59c-407a-80d9-6dfe1066ac88-operator-scripts\") pod \"nova-api-db-create-7fsv5\" (UID: \"eddd02fa-d59c-407a-80d9-6dfe1066ac88\") " pod="openstack/nova-api-db-create-7fsv5" Dec 10 12:36:53 crc kubenswrapper[4689]: I1210 12:36:53.102627 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v2snp\" (UniqueName: \"kubernetes.io/projected/eddd02fa-d59c-407a-80d9-6dfe1066ac88-kube-api-access-v2snp\") pod \"nova-api-db-create-7fsv5\" (UID: \"eddd02fa-d59c-407a-80d9-6dfe1066ac88\") " pod="openstack/nova-api-db-create-7fsv5" Dec 10 12:36:53 crc kubenswrapper[4689]: I1210 12:36:53.143360 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-00e7-account-create-update-bbfqf"] Dec 10 12:36:53 crc kubenswrapper[4689]: I1210 12:36:53.144541 4689 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-00e7-account-create-update-bbfqf" Dec 10 12:36:53 crc kubenswrapper[4689]: I1210 12:36:53.150010 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret" Dec 10 12:36:53 crc kubenswrapper[4689]: I1210 12:36:53.155432 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-00e7-account-create-update-bbfqf"] Dec 10 12:36:53 crc kubenswrapper[4689]: I1210 12:36:53.185630 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8js26\" (UniqueName: \"kubernetes.io/projected/7c5ed288-dbbc-4c61-bd51-9c4b43375ad5-kube-api-access-8js26\") pod \"nova-api-00e7-account-create-update-bbfqf\" (UID: \"7c5ed288-dbbc-4c61-bd51-9c4b43375ad5\") " pod="openstack/nova-api-00e7-account-create-update-bbfqf" Dec 10 12:36:53 crc kubenswrapper[4689]: I1210 12:36:53.185712 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j9nvz\" (UniqueName: \"kubernetes.io/projected/c4d9408c-062f-45ad-a393-da20c66d7d40-kube-api-access-j9nvz\") pod \"nova-cell0-db-create-wqhs6\" (UID: \"c4d9408c-062f-45ad-a393-da20c66d7d40\") " pod="openstack/nova-cell0-db-create-wqhs6" Dec 10 12:36:53 crc kubenswrapper[4689]: I1210 12:36:53.185820 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7c5ed288-dbbc-4c61-bd51-9c4b43375ad5-operator-scripts\") pod \"nova-api-00e7-account-create-update-bbfqf\" (UID: \"7c5ed288-dbbc-4c61-bd51-9c4b43375ad5\") " pod="openstack/nova-api-00e7-account-create-update-bbfqf" Dec 10 12:36:53 crc kubenswrapper[4689]: I1210 12:36:53.185865 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c4d9408c-062f-45ad-a393-da20c66d7d40-operator-scripts\") pod \"nova-cell0-db-create-wqhs6\" (UID: \"c4d9408c-062f-45ad-a393-da20c66d7d40\") " pod="openstack/nova-cell0-db-create-wqhs6" Dec 10 12:36:53 crc kubenswrapper[4689]: I1210 12:36:53.186722 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c4d9408c-062f-45ad-a393-da20c66d7d40-operator-scripts\") pod \"nova-cell0-db-create-wqhs6\" (UID: \"c4d9408c-062f-45ad-a393-da20c66d7d40\") " pod="openstack/nova-cell0-db-create-wqhs6" Dec 10 12:36:53 crc kubenswrapper[4689]: I1210 12:36:53.206078 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j9nvz\" (UniqueName: \"kubernetes.io/projected/c4d9408c-062f-45ad-a393-da20c66d7d40-kube-api-access-j9nvz\") pod \"nova-cell0-db-create-wqhs6\" (UID: \"c4d9408c-062f-45ad-a393-da20c66d7d40\") " pod="openstack/nova-cell0-db-create-wqhs6" Dec 10 12:36:53 crc kubenswrapper[4689]: I1210 12:36:53.253087 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-db-create-thck8"] Dec 10 12:36:53 crc kubenswrapper[4689]: I1210 12:36:53.254620 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-thck8" Dec 10 12:36:53 crc kubenswrapper[4689]: I1210 12:36:53.271031 4689 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-7fsv5" Dec 10 12:36:53 crc kubenswrapper[4689]: I1210 12:36:53.275146 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-thck8"] Dec 10 12:36:53 crc kubenswrapper[4689]: I1210 12:36:53.287373 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7c5ed288-dbbc-4c61-bd51-9c4b43375ad5-operator-scripts\") pod \"nova-api-00e7-account-create-update-bbfqf\" (UID: \"7c5ed288-dbbc-4c61-bd51-9c4b43375ad5\") " pod="openstack/nova-api-00e7-account-create-update-bbfqf" Dec 10 12:36:53 crc kubenswrapper[4689]: I1210 12:36:53.287425 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d2bh7\" (UniqueName: \"kubernetes.io/projected/271d3ee1-d2ae-41da-95bf-85a9c45dfad5-kube-api-access-d2bh7\") pod \"nova-cell1-db-create-thck8\" (UID: \"271d3ee1-d2ae-41da-95bf-85a9c45dfad5\") " pod="openstack/nova-cell1-db-create-thck8" Dec 10 12:36:53 crc kubenswrapper[4689]: I1210 12:36:53.287463 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/271d3ee1-d2ae-41da-95bf-85a9c45dfad5-operator-scripts\") pod \"nova-cell1-db-create-thck8\" (UID: \"271d3ee1-d2ae-41da-95bf-85a9c45dfad5\") " pod="openstack/nova-cell1-db-create-thck8" Dec 10 12:36:53 crc kubenswrapper[4689]: I1210 12:36:53.287517 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8js26\" (UniqueName: \"kubernetes.io/projected/7c5ed288-dbbc-4c61-bd51-9c4b43375ad5-kube-api-access-8js26\") pod \"nova-api-00e7-account-create-update-bbfqf\" (UID: \"7c5ed288-dbbc-4c61-bd51-9c4b43375ad5\") " pod="openstack/nova-api-00e7-account-create-update-bbfqf" Dec 10 12:36:53 crc kubenswrapper[4689]: I1210 12:36:53.288461 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7c5ed288-dbbc-4c61-bd51-9c4b43375ad5-operator-scripts\") pod \"nova-api-00e7-account-create-update-bbfqf\" (UID: \"7c5ed288-dbbc-4c61-bd51-9c4b43375ad5\") " pod="openstack/nova-api-00e7-account-create-update-bbfqf" Dec 10 12:36:53 crc kubenswrapper[4689]: I1210 12:36:53.303363 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8js26\" (UniqueName: \"kubernetes.io/projected/7c5ed288-dbbc-4c61-bd51-9c4b43375ad5-kube-api-access-8js26\") pod \"nova-api-00e7-account-create-update-bbfqf\" (UID: \"7c5ed288-dbbc-4c61-bd51-9c4b43375ad5\") " pod="openstack/nova-api-00e7-account-create-update-bbfqf" Dec 10 12:36:53 crc kubenswrapper[4689]: I1210 12:36:53.358329 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-35fe-account-create-update-9th5x"] Dec 10 12:36:53 crc kubenswrapper[4689]: I1210 12:36:53.358732 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-wqhs6" Dec 10 12:36:53 crc kubenswrapper[4689]: I1210 12:36:53.360444 4689 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-35fe-account-create-update-9th5x" Dec 10 12:36:53 crc kubenswrapper[4689]: I1210 12:36:53.364293 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-db-secret" Dec 10 12:36:53 crc kubenswrapper[4689]: I1210 12:36:53.366399 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-35fe-account-create-update-9th5x"] Dec 10 12:36:53 crc kubenswrapper[4689]: I1210 12:36:53.389285 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cfee700e-fe0c-4e0b-90f3-ee2741a787ba-operator-scripts\") pod \"nova-cell0-35fe-account-create-update-9th5x\" (UID: \"cfee700e-fe0c-4e0b-90f3-ee2741a787ba\") " pod="openstack/nova-cell0-35fe-account-create-update-9th5x" Dec 10 12:36:53 crc kubenswrapper[4689]: I1210 12:36:53.389424 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zq5gz\" (UniqueName: \"kubernetes.io/projected/cfee700e-fe0c-4e0b-90f3-ee2741a787ba-kube-api-access-zq5gz\") pod \"nova-cell0-35fe-account-create-update-9th5x\" (UID: \"cfee700e-fe0c-4e0b-90f3-ee2741a787ba\") " pod="openstack/nova-cell0-35fe-account-create-update-9th5x" Dec 10 12:36:53 crc kubenswrapper[4689]: I1210 12:36:53.389549 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d2bh7\" (UniqueName: \"kubernetes.io/projected/271d3ee1-d2ae-41da-95bf-85a9c45dfad5-kube-api-access-d2bh7\") pod \"nova-cell1-db-create-thck8\" (UID: \"271d3ee1-d2ae-41da-95bf-85a9c45dfad5\") " pod="openstack/nova-cell1-db-create-thck8" Dec 10 12:36:53 crc kubenswrapper[4689]: I1210 12:36:53.389610 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/271d3ee1-d2ae-41da-95bf-85a9c45dfad5-operator-scripts\") pod \"nova-cell1-db-create-thck8\" (UID: \"271d3ee1-d2ae-41da-95bf-85a9c45dfad5\") " pod="openstack/nova-cell1-db-create-thck8" Dec 10 12:36:53 crc kubenswrapper[4689]: I1210 12:36:53.390819 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/271d3ee1-d2ae-41da-95bf-85a9c45dfad5-operator-scripts\") pod \"nova-cell1-db-create-thck8\" (UID: \"271d3ee1-d2ae-41da-95bf-85a9c45dfad5\") " pod="openstack/nova-cell1-db-create-thck8" Dec 10 12:36:53 crc kubenswrapper[4689]: I1210 12:36:53.408942 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d2bh7\" (UniqueName: \"kubernetes.io/projected/271d3ee1-d2ae-41da-95bf-85a9c45dfad5-kube-api-access-d2bh7\") pod \"nova-cell1-db-create-thck8\" (UID: \"271d3ee1-d2ae-41da-95bf-85a9c45dfad5\") " pod="openstack/nova-cell1-db-create-thck8" Dec 10 12:36:53 crc kubenswrapper[4689]: I1210 12:36:53.491477 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zq5gz\" (UniqueName: \"kubernetes.io/projected/cfee700e-fe0c-4e0b-90f3-ee2741a787ba-kube-api-access-zq5gz\") pod \"nova-cell0-35fe-account-create-update-9th5x\" (UID: \"cfee700e-fe0c-4e0b-90f3-ee2741a787ba\") " pod="openstack/nova-cell0-35fe-account-create-update-9th5x" Dec 10 12:36:53 crc kubenswrapper[4689]: I1210 12:36:53.492040 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/cfee700e-fe0c-4e0b-90f3-ee2741a787ba-operator-scripts\") pod \"nova-cell0-35fe-account-create-update-9th5x\" (UID: \"cfee700e-fe0c-4e0b-90f3-ee2741a787ba\") " pod="openstack/nova-cell0-35fe-account-create-update-9th5x" Dec 10 12:36:53 crc kubenswrapper[4689]: I1210 12:36:53.492794 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cfee700e-fe0c-4e0b-90f3-ee2741a787ba-operator-scripts\") pod \"nova-cell0-35fe-account-create-update-9th5x\" (UID: \"cfee700e-fe0c-4e0b-90f3-ee2741a787ba\") " pod="openstack/nova-cell0-35fe-account-create-update-9th5x" Dec 10 12:36:53 crc kubenswrapper[4689]: I1210 12:36:53.495147 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-00e7-account-create-update-bbfqf" Dec 10 12:36:53 crc kubenswrapper[4689]: I1210 12:36:53.514280 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zq5gz\" (UniqueName: \"kubernetes.io/projected/cfee700e-fe0c-4e0b-90f3-ee2741a787ba-kube-api-access-zq5gz\") pod \"nova-cell0-35fe-account-create-update-9th5x\" (UID: \"cfee700e-fe0c-4e0b-90f3-ee2741a787ba\") " pod="openstack/nova-cell0-35fe-account-create-update-9th5x" Dec 10 12:36:53 crc kubenswrapper[4689]: I1210 12:36:53.545517 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-8ec7-account-create-update-bztj5"] Dec 10 12:36:53 crc kubenswrapper[4689]: I1210 12:36:53.546781 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-8ec7-account-create-update-bztj5" Dec 10 12:36:53 crc kubenswrapper[4689]: I1210 12:36:53.550118 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-db-secret" Dec 10 12:36:53 crc kubenswrapper[4689]: I1210 12:36:53.552851 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-8ec7-account-create-update-bztj5"] Dec 10 12:36:53 crc kubenswrapper[4689]: I1210 12:36:53.593844 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f0eca891-5410-41d4-a578-61f77c7f5978-operator-scripts\") pod \"nova-cell1-8ec7-account-create-update-bztj5\" (UID: \"f0eca891-5410-41d4-a578-61f77c7f5978\") " pod="openstack/nova-cell1-8ec7-account-create-update-bztj5" Dec 10 12:36:53 crc kubenswrapper[4689]: I1210 12:36:53.593975 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gklfd\" (UniqueName: \"kubernetes.io/projected/f0eca891-5410-41d4-a578-61f77c7f5978-kube-api-access-gklfd\") pod \"nova-cell1-8ec7-account-create-update-bztj5\" (UID: \"f0eca891-5410-41d4-a578-61f77c7f5978\") " pod="openstack/nova-cell1-8ec7-account-create-update-bztj5" Dec 10 12:36:53 crc kubenswrapper[4689]: I1210 12:36:53.695917 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f0eca891-5410-41d4-a578-61f77c7f5978-operator-scripts\") pod \"nova-cell1-8ec7-account-create-update-bztj5\" (UID: \"f0eca891-5410-41d4-a578-61f77c7f5978\") " pod="openstack/nova-cell1-8ec7-account-create-update-bztj5" Dec 10 12:36:53 crc kubenswrapper[4689]: I1210 12:36:53.696081 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gklfd\" (UniqueName: 
\"kubernetes.io/projected/f0eca891-5410-41d4-a578-61f77c7f5978-kube-api-access-gklfd\") pod \"nova-cell1-8ec7-account-create-update-bztj5\" (UID: \"f0eca891-5410-41d4-a578-61f77c7f5978\") " pod="openstack/nova-cell1-8ec7-account-create-update-bztj5" Dec 10 12:36:53 crc kubenswrapper[4689]: I1210 12:36:53.697206 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f0eca891-5410-41d4-a578-61f77c7f5978-operator-scripts\") pod \"nova-cell1-8ec7-account-create-update-bztj5\" (UID: \"f0eca891-5410-41d4-a578-61f77c7f5978\") " pod="openstack/nova-cell1-8ec7-account-create-update-bztj5" Dec 10 12:36:53 crc kubenswrapper[4689]: I1210 12:36:53.697330 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-thck8" Dec 10 12:36:53 crc kubenswrapper[4689]: I1210 12:36:53.715573 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-35fe-account-create-update-9th5x" Dec 10 12:36:53 crc kubenswrapper[4689]: I1210 12:36:53.726313 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gklfd\" (UniqueName: \"kubernetes.io/projected/f0eca891-5410-41d4-a578-61f77c7f5978-kube-api-access-gklfd\") pod \"nova-cell1-8ec7-account-create-update-bztj5\" (UID: \"f0eca891-5410-41d4-a578-61f77c7f5978\") " pod="openstack/nova-cell1-8ec7-account-create-update-bztj5" Dec 10 12:36:53 crc kubenswrapper[4689]: I1210 12:36:53.879391 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-8ec7-account-create-update-bztj5" Dec 10 12:36:53 crc kubenswrapper[4689]: I1210 12:36:53.939814 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-7fsv5"] Dec 10 12:36:53 crc kubenswrapper[4689]: I1210 12:36:53.964869 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-wqhs6"] Dec 10 12:36:54 crc kubenswrapper[4689]: I1210 12:36:54.052709 4689 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 10 12:36:54 crc kubenswrapper[4689]: I1210 12:36:54.098270 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-wqhs6" event={"ID":"c4d9408c-062f-45ad-a393-da20c66d7d40","Type":"ContainerStarted","Data":"250009b5b9c7531be8494c618f7f9af132762feaec4592126e8e292af481a4ea"} Dec 10 12:36:54 crc kubenswrapper[4689]: I1210 12:36:54.101518 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-7fsv5" event={"ID":"eddd02fa-d59c-407a-80d9-6dfe1066ac88","Type":"ContainerStarted","Data":"f06435225e3a3336d015252443501797764aa9994545e59e578a674226351071"} Dec 10 12:36:54 crc kubenswrapper[4689]: I1210 12:36:54.217485 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-00e7-account-create-update-bbfqf"] Dec 10 12:36:54 crc kubenswrapper[4689]: W1210 12:36:54.225473 4689 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7c5ed288_dbbc_4c61_bd51_9c4b43375ad5.slice/crio-6250353c6881bc53909c63d6dbe02f22998ae20e40b1fabd9e69128bf014710b WatchSource:0}: Error finding container 6250353c6881bc53909c63d6dbe02f22998ae20e40b1fabd9e69128bf014710b: Status 404 returned error can't find the container with id 6250353c6881bc53909c63d6dbe02f22998ae20e40b1fabd9e69128bf014710b Dec 10 12:36:54 crc kubenswrapper[4689]: I1210 12:36:54.344082 4689 
kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-35fe-account-create-update-9th5x"] Dec 10 12:36:54 crc kubenswrapper[4689]: W1210 12:36:54.349712 4689 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod271d3ee1_d2ae_41da_95bf_85a9c45dfad5.slice/crio-9064305662c233e985a1b562e3af01d23190b56df3a9934d411b4b6eed62b7b8 WatchSource:0}: Error finding container 9064305662c233e985a1b562e3af01d23190b56df3a9934d411b4b6eed62b7b8: Status 404 returned error can't find the container with id 9064305662c233e985a1b562e3af01d23190b56df3a9934d411b4b6eed62b7b8 Dec 10 12:36:54 crc kubenswrapper[4689]: I1210 12:36:54.350873 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-thck8"] Dec 10 12:36:54 crc kubenswrapper[4689]: W1210 12:36:54.380128 4689 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcfee700e_fe0c_4e0b_90f3_ee2741a787ba.slice/crio-9f60e41dd6228a7c9f14d1907dc3226dc7aa59a94aef1bee7081e9e737e3041e WatchSource:0}: Error finding container 9f60e41dd6228a7c9f14d1907dc3226dc7aa59a94aef1bee7081e9e737e3041e: Status 404 returned error can't find the container with id 9f60e41dd6228a7c9f14d1907dc3226dc7aa59a94aef1bee7081e9e737e3041e Dec 10 12:36:54 crc kubenswrapper[4689]: I1210 12:36:54.595916 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-8ec7-account-create-update-bztj5"] Dec 10 12:36:54 crc kubenswrapper[4689]: W1210 12:36:54.638378 4689 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf0eca891_5410_41d4_a578_61f77c7f5978.slice/crio-f47857a1b9002cab0bfee75dcb54c99770a6f2e02ee10a50b988297af145132c WatchSource:0}: Error finding container f47857a1b9002cab0bfee75dcb54c99770a6f2e02ee10a50b988297af145132c: Status 404 returned error can't find the container with id f47857a1b9002cab0bfee75dcb54c99770a6f2e02ee10a50b988297af145132c Dec 10 12:36:55 crc kubenswrapper[4689]: I1210 12:36:55.112786 4689 generic.go:334] "Generic (PLEG): container finished" podID="eddd02fa-d59c-407a-80d9-6dfe1066ac88" containerID="57810c6ae168d22a71f107ae566ffb55109fc6f930a8c429f1714a6405afbff2" exitCode=0 Dec 10 12:36:55 crc kubenswrapper[4689]: I1210 12:36:55.112896 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-7fsv5" event={"ID":"eddd02fa-d59c-407a-80d9-6dfe1066ac88","Type":"ContainerDied","Data":"57810c6ae168d22a71f107ae566ffb55109fc6f930a8c429f1714a6405afbff2"} Dec 10 12:36:55 crc kubenswrapper[4689]: I1210 12:36:55.116358 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-8ec7-account-create-update-bztj5" event={"ID":"f0eca891-5410-41d4-a578-61f77c7f5978","Type":"ContainerStarted","Data":"a1502c4cc82b2b45f333cab38f291ab541350233ba82944031ed0b75d522be6d"} Dec 10 12:36:55 crc kubenswrapper[4689]: I1210 12:36:55.116420 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-8ec7-account-create-update-bztj5" event={"ID":"f0eca891-5410-41d4-a578-61f77c7f5978","Type":"ContainerStarted","Data":"f47857a1b9002cab0bfee75dcb54c99770a6f2e02ee10a50b988297af145132c"} Dec 10 12:36:55 crc kubenswrapper[4689]: I1210 12:36:55.119093 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-35fe-account-create-update-9th5x" 
event={"ID":"cfee700e-fe0c-4e0b-90f3-ee2741a787ba","Type":"ContainerStarted","Data":"dfe87742bc67dd4d06c3db6d35f94285d0a41d12dcb2f091758ee524927c9902"} Dec 10 12:36:55 crc kubenswrapper[4689]: I1210 12:36:55.119142 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-35fe-account-create-update-9th5x" event={"ID":"cfee700e-fe0c-4e0b-90f3-ee2741a787ba","Type":"ContainerStarted","Data":"9f60e41dd6228a7c9f14d1907dc3226dc7aa59a94aef1bee7081e9e737e3041e"} Dec 10 12:36:55 crc kubenswrapper[4689]: I1210 12:36:55.123051 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-00e7-account-create-update-bbfqf" event={"ID":"7c5ed288-dbbc-4c61-bd51-9c4b43375ad5","Type":"ContainerStarted","Data":"7f7c9c7ff56b974a505028f8d4910abdef7b1ffd336c2242a335396661d88a9f"} Dec 10 12:36:55 crc kubenswrapper[4689]: I1210 12:36:55.123106 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-00e7-account-create-update-bbfqf" event={"ID":"7c5ed288-dbbc-4c61-bd51-9c4b43375ad5","Type":"ContainerStarted","Data":"6250353c6881bc53909c63d6dbe02f22998ae20e40b1fabd9e69128bf014710b"} Dec 10 12:36:55 crc kubenswrapper[4689]: I1210 12:36:55.133092 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1e43da96-bc95-4f10-ad40-c4c5c32fac22","Type":"ContainerStarted","Data":"5ca8c10afe545f71866da6691390357a2a03ddde69c17eca561304cf78c8bdd3"} Dec 10 12:36:55 crc kubenswrapper[4689]: I1210 12:36:55.138586 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-thck8" event={"ID":"271d3ee1-d2ae-41da-95bf-85a9c45dfad5","Type":"ContainerStarted","Data":"813d320b91fed1fb1d627b8792922d3200812224c046a96e9f30db6aa4ad4bac"} Dec 10 12:36:55 crc kubenswrapper[4689]: I1210 12:36:55.138615 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-thck8" event={"ID":"271d3ee1-d2ae-41da-95bf-85a9c45dfad5","Type":"ContainerStarted","Data":"9064305662c233e985a1b562e3af01d23190b56df3a9934d411b4b6eed62b7b8"} Dec 10 12:36:55 crc kubenswrapper[4689]: I1210 12:36:55.154678 4689 generic.go:334] "Generic (PLEG): container finished" podID="c4d9408c-062f-45ad-a393-da20c66d7d40" containerID="229c35ec4bf7af7fac173475a3b264bfe2e88214af207f1545af54ebdcbc9a20" exitCode=0 Dec 10 12:36:55 crc kubenswrapper[4689]: I1210 12:36:55.154741 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-wqhs6" event={"ID":"c4d9408c-062f-45ad-a393-da20c66d7d40","Type":"ContainerDied","Data":"229c35ec4bf7af7fac173475a3b264bfe2e88214af207f1545af54ebdcbc9a20"} Dec 10 12:36:55 crc kubenswrapper[4689]: I1210 12:36:55.156526 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-00e7-account-create-update-bbfqf" podStartSLOduration=2.156505599 podStartE2EDuration="2.156505599s" podCreationTimestamp="2025-12-10 12:36:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 12:36:55.141295724 +0000 UTC m=+1282.929376862" watchObservedRunningTime="2025-12-10 12:36:55.156505599 +0000 UTC m=+1282.944586737" Dec 10 12:36:55 crc kubenswrapper[4689]: I1210 12:36:55.164690 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-35fe-account-create-update-9th5x" podStartSLOduration=2.164667661 podStartE2EDuration="2.164667661s" podCreationTimestamp="2025-12-10 12:36:53 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 12:36:55.154389488 +0000 UTC m=+1282.942470626" watchObservedRunningTime="2025-12-10 12:36:55.164667661 +0000 UTC m=+1282.952748819" Dec 10 12:36:55 crc kubenswrapper[4689]: I1210 12:36:55.179729 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-8ec7-account-create-update-bztj5" podStartSLOduration=2.179708213 podStartE2EDuration="2.179708213s" podCreationTimestamp="2025-12-10 12:36:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 12:36:55.168426194 +0000 UTC m=+1282.956507332" watchObservedRunningTime="2025-12-10 12:36:55.179708213 +0000 UTC m=+1282.967789371" Dec 10 12:36:55 crc kubenswrapper[4689]: I1210 12:36:55.197778 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-db-create-thck8" podStartSLOduration=2.197754539 podStartE2EDuration="2.197754539s" podCreationTimestamp="2025-12-10 12:36:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 12:36:55.181530088 +0000 UTC m=+1282.969611226" watchObservedRunningTime="2025-12-10 12:36:55.197754539 +0000 UTC m=+1282.985835677" Dec 10 12:36:55 crc kubenswrapper[4689]: E1210 12:36:55.410791 4689 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7c5ed288_dbbc_4c61_bd51_9c4b43375ad5.slice/crio-7f7c9c7ff56b974a505028f8d4910abdef7b1ffd336c2242a335396661d88a9f.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod271d3ee1_d2ae_41da_95bf_85a9c45dfad5.slice/crio-conmon-813d320b91fed1fb1d627b8792922d3200812224c046a96e9f30db6aa4ad4bac.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7c5ed288_dbbc_4c61_bd51_9c4b43375ad5.slice/crio-conmon-7f7c9c7ff56b974a505028f8d4910abdef7b1ffd336c2242a335396661d88a9f.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf0eca891_5410_41d4_a578_61f77c7f5978.slice/crio-a1502c4cc82b2b45f333cab38f291ab541350233ba82944031ed0b75d522be6d.scope\": RecentStats: unable to find data in memory cache]" Dec 10 12:36:56 crc kubenswrapper[4689]: I1210 12:36:56.165463 4689 generic.go:334] "Generic (PLEG): container finished" podID="f0eca891-5410-41d4-a578-61f77c7f5978" containerID="a1502c4cc82b2b45f333cab38f291ab541350233ba82944031ed0b75d522be6d" exitCode=0 Dec 10 12:36:56 crc kubenswrapper[4689]: I1210 12:36:56.165512 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-8ec7-account-create-update-bztj5" event={"ID":"f0eca891-5410-41d4-a578-61f77c7f5978","Type":"ContainerDied","Data":"a1502c4cc82b2b45f333cab38f291ab541350233ba82944031ed0b75d522be6d"} Dec 10 12:36:56 crc kubenswrapper[4689]: I1210 12:36:56.167284 4689 generic.go:334] "Generic (PLEG): container finished" podID="cfee700e-fe0c-4e0b-90f3-ee2741a787ba" containerID="dfe87742bc67dd4d06c3db6d35f94285d0a41d12dcb2f091758ee524927c9902" exitCode=0 Dec 10 12:36:56 crc kubenswrapper[4689]: I1210 12:36:56.167364 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/nova-cell0-35fe-account-create-update-9th5x" event={"ID":"cfee700e-fe0c-4e0b-90f3-ee2741a787ba","Type":"ContainerDied","Data":"dfe87742bc67dd4d06c3db6d35f94285d0a41d12dcb2f091758ee524927c9902"} Dec 10 12:36:56 crc kubenswrapper[4689]: I1210 12:36:56.168612 4689 generic.go:334] "Generic (PLEG): container finished" podID="7c5ed288-dbbc-4c61-bd51-9c4b43375ad5" containerID="7f7c9c7ff56b974a505028f8d4910abdef7b1ffd336c2242a335396661d88a9f" exitCode=0 Dec 10 12:36:56 crc kubenswrapper[4689]: I1210 12:36:56.168638 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-00e7-account-create-update-bbfqf" event={"ID":"7c5ed288-dbbc-4c61-bd51-9c4b43375ad5","Type":"ContainerDied","Data":"7f7c9c7ff56b974a505028f8d4910abdef7b1ffd336c2242a335396661d88a9f"} Dec 10 12:36:56 crc kubenswrapper[4689]: I1210 12:36:56.169819 4689 generic.go:334] "Generic (PLEG): container finished" podID="271d3ee1-d2ae-41da-95bf-85a9c45dfad5" containerID="813d320b91fed1fb1d627b8792922d3200812224c046a96e9f30db6aa4ad4bac" exitCode=0 Dec 10 12:36:56 crc kubenswrapper[4689]: I1210 12:36:56.169895 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-thck8" event={"ID":"271d3ee1-d2ae-41da-95bf-85a9c45dfad5","Type":"ContainerDied","Data":"813d320b91fed1fb1d627b8792922d3200812224c046a96e9f30db6aa4ad4bac"} Dec 10 12:36:56 crc kubenswrapper[4689]: I1210 12:36:56.589148 4689 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-7fsv5" Dec 10 12:36:56 crc kubenswrapper[4689]: I1210 12:36:56.593636 4689 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-wqhs6" Dec 10 12:36:56 crc kubenswrapper[4689]: I1210 12:36:56.657654 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v2snp\" (UniqueName: \"kubernetes.io/projected/eddd02fa-d59c-407a-80d9-6dfe1066ac88-kube-api-access-v2snp\") pod \"eddd02fa-d59c-407a-80d9-6dfe1066ac88\" (UID: \"eddd02fa-d59c-407a-80d9-6dfe1066ac88\") " Dec 10 12:36:56 crc kubenswrapper[4689]: I1210 12:36:56.657735 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c4d9408c-062f-45ad-a393-da20c66d7d40-operator-scripts\") pod \"c4d9408c-062f-45ad-a393-da20c66d7d40\" (UID: \"c4d9408c-062f-45ad-a393-da20c66d7d40\") " Dec 10 12:36:56 crc kubenswrapper[4689]: I1210 12:36:56.657870 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j9nvz\" (UniqueName: \"kubernetes.io/projected/c4d9408c-062f-45ad-a393-da20c66d7d40-kube-api-access-j9nvz\") pod \"c4d9408c-062f-45ad-a393-da20c66d7d40\" (UID: \"c4d9408c-062f-45ad-a393-da20c66d7d40\") " Dec 10 12:36:56 crc kubenswrapper[4689]: I1210 12:36:56.657936 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/eddd02fa-d59c-407a-80d9-6dfe1066ac88-operator-scripts\") pod \"eddd02fa-d59c-407a-80d9-6dfe1066ac88\" (UID: \"eddd02fa-d59c-407a-80d9-6dfe1066ac88\") " Dec 10 12:36:56 crc kubenswrapper[4689]: I1210 12:36:56.658212 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c4d9408c-062f-45ad-a393-da20c66d7d40-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "c4d9408c-062f-45ad-a393-da20c66d7d40" (UID: "c4d9408c-062f-45ad-a393-da20c66d7d40"). 
InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 12:36:56 crc kubenswrapper[4689]: I1210 12:36:56.658505 4689 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c4d9408c-062f-45ad-a393-da20c66d7d40-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 10 12:36:56 crc kubenswrapper[4689]: I1210 12:36:56.659757 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eddd02fa-d59c-407a-80d9-6dfe1066ac88-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "eddd02fa-d59c-407a-80d9-6dfe1066ac88" (UID: "eddd02fa-d59c-407a-80d9-6dfe1066ac88"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 12:36:56 crc kubenswrapper[4689]: I1210 12:36:56.664383 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c4d9408c-062f-45ad-a393-da20c66d7d40-kube-api-access-j9nvz" (OuterVolumeSpecName: "kube-api-access-j9nvz") pod "c4d9408c-062f-45ad-a393-da20c66d7d40" (UID: "c4d9408c-062f-45ad-a393-da20c66d7d40"). InnerVolumeSpecName "kube-api-access-j9nvz". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 12:36:56 crc kubenswrapper[4689]: I1210 12:36:56.664523 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eddd02fa-d59c-407a-80d9-6dfe1066ac88-kube-api-access-v2snp" (OuterVolumeSpecName: "kube-api-access-v2snp") pod "eddd02fa-d59c-407a-80d9-6dfe1066ac88" (UID: "eddd02fa-d59c-407a-80d9-6dfe1066ac88"). InnerVolumeSpecName "kube-api-access-v2snp". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 12:36:56 crc kubenswrapper[4689]: I1210 12:36:56.760788 4689 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v2snp\" (UniqueName: \"kubernetes.io/projected/eddd02fa-d59c-407a-80d9-6dfe1066ac88-kube-api-access-v2snp\") on node \"crc\" DevicePath \"\"" Dec 10 12:36:56 crc kubenswrapper[4689]: I1210 12:36:56.760824 4689 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j9nvz\" (UniqueName: \"kubernetes.io/projected/c4d9408c-062f-45ad-a393-da20c66d7d40-kube-api-access-j9nvz\") on node \"crc\" DevicePath \"\"" Dec 10 12:36:56 crc kubenswrapper[4689]: I1210 12:36:56.760834 4689 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/eddd02fa-d59c-407a-80d9-6dfe1066ac88-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 10 12:36:57 crc kubenswrapper[4689]: I1210 12:36:57.179742 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1e43da96-bc95-4f10-ad40-c4c5c32fac22","Type":"ContainerStarted","Data":"9660f24b1d1ac3ebe1cafbeb22414c267cca70546fb1a05bdb449b107b85b5f9"} Dec 10 12:36:57 crc kubenswrapper[4689]: I1210 12:36:57.179975 4689 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="1e43da96-bc95-4f10-ad40-c4c5c32fac22" containerName="ceilometer-central-agent" containerID="cri-o://503c2a5ed0fb7eb53e9fcebee1e175cfdfbbe5d6f7e530e3f4b51d690048b11d" gracePeriod=30 Dec 10 12:36:57 crc kubenswrapper[4689]: I1210 12:36:57.180075 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Dec 10 12:36:57 crc kubenswrapper[4689]: I1210 12:36:57.180430 4689 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openstack/ceilometer-0" podUID="1e43da96-bc95-4f10-ad40-c4c5c32fac22" containerName="proxy-httpd" containerID="cri-o://9660f24b1d1ac3ebe1cafbeb22414c267cca70546fb1a05bdb449b107b85b5f9" gracePeriod=30 Dec 10 12:36:57 crc kubenswrapper[4689]: I1210 12:36:57.180481 4689 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="1e43da96-bc95-4f10-ad40-c4c5c32fac22" containerName="sg-core" containerID="cri-o://5ca8c10afe545f71866da6691390357a2a03ddde69c17eca561304cf78c8bdd3" gracePeriod=30 Dec 10 12:36:57 crc kubenswrapper[4689]: I1210 12:36:57.180515 4689 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="1e43da96-bc95-4f10-ad40-c4c5c32fac22" containerName="ceilometer-notification-agent" containerID="cri-o://af4f19a38a9bfbfc03cfd8e96effa80638f0a87cd4b83256aadeaa165c4254b3" gracePeriod=30 Dec 10 12:36:57 crc kubenswrapper[4689]: I1210 12:36:57.186466 4689 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-wqhs6" Dec 10 12:36:57 crc kubenswrapper[4689]: I1210 12:36:57.186994 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-wqhs6" event={"ID":"c4d9408c-062f-45ad-a393-da20c66d7d40","Type":"ContainerDied","Data":"250009b5b9c7531be8494c618f7f9af132762feaec4592126e8e292af481a4ea"} Dec 10 12:36:57 crc kubenswrapper[4689]: I1210 12:36:57.187029 4689 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="250009b5b9c7531be8494c618f7f9af132762feaec4592126e8e292af481a4ea" Dec 10 12:36:57 crc kubenswrapper[4689]: I1210 12:36:57.188977 4689 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-7fsv5" Dec 10 12:36:57 crc kubenswrapper[4689]: I1210 12:36:57.191156 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-7fsv5" event={"ID":"eddd02fa-d59c-407a-80d9-6dfe1066ac88","Type":"ContainerDied","Data":"f06435225e3a3336d015252443501797764aa9994545e59e578a674226351071"} Dec 10 12:36:57 crc kubenswrapper[4689]: I1210 12:36:57.191229 4689 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f06435225e3a3336d015252443501797764aa9994545e59e578a674226351071" Dec 10 12:36:57 crc kubenswrapper[4689]: I1210 12:36:57.198163 4689 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ironic-inspector-0" Dec 10 12:36:57 crc kubenswrapper[4689]: I1210 12:36:57.199450 4689 scope.go:117] "RemoveContainer" containerID="0d2bfb42888ef654c19089071408f308b195c617df024cf1bbe1e92e783d80fb" Dec 10 12:36:57 crc kubenswrapper[4689]: E1210 12:36:57.199901 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ironic-inspector\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ironic-inspector pod=ironic-inspector-0_openstack(5a770426-b384-4fc7-acc0-fa42ff536a9b)\"" pod="openstack/ironic-inspector-0" podUID="5a770426-b384-4fc7-acc0-fa42ff536a9b" Dec 10 12:36:57 crc kubenswrapper[4689]: I1210 12:36:57.204458 4689 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/ironic-inspector-0" podUID="5a770426-b384-4fc7-acc0-fa42ff536a9b" containerName="ironic-inspector-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Dec 10 12:36:57 crc kubenswrapper[4689]: I1210 12:36:57.630787 4689 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-00e7-account-create-update-bbfqf" Dec 10 12:36:57 crc kubenswrapper[4689]: I1210 12:36:57.630808 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=3.5658824129999998 podStartE2EDuration="11.630790823s" podCreationTimestamp="2025-12-10 12:36:46 +0000 UTC" firstStartedPulling="2025-12-10 12:36:47.892870807 +0000 UTC m=+1275.680951955" lastFinishedPulling="2025-12-10 12:36:55.957779237 +0000 UTC m=+1283.745860365" observedRunningTime="2025-12-10 12:36:57.211926134 +0000 UTC m=+1285.000007272" watchObservedRunningTime="2025-12-10 12:36:57.630790823 +0000 UTC m=+1285.418871951" Dec 10 12:36:57 crc kubenswrapper[4689]: I1210 12:36:57.728158 4689 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-35fe-account-create-update-9th5x" Dec 10 12:36:57 crc kubenswrapper[4689]: I1210 12:36:57.763085 4689 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-thck8" Dec 10 12:36:57 crc kubenswrapper[4689]: I1210 12:36:57.774981 4689 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-8ec7-account-create-update-bztj5" Dec 10 12:36:57 crc kubenswrapper[4689]: I1210 12:36:57.778706 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7c5ed288-dbbc-4c61-bd51-9c4b43375ad5-operator-scripts\") pod \"7c5ed288-dbbc-4c61-bd51-9c4b43375ad5\" (UID: \"7c5ed288-dbbc-4c61-bd51-9c4b43375ad5\") " Dec 10 12:36:57 crc kubenswrapper[4689]: I1210 12:36:57.778854 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8js26\" (UniqueName: \"kubernetes.io/projected/7c5ed288-dbbc-4c61-bd51-9c4b43375ad5-kube-api-access-8js26\") pod \"7c5ed288-dbbc-4c61-bd51-9c4b43375ad5\" (UID: \"7c5ed288-dbbc-4c61-bd51-9c4b43375ad5\") " Dec 10 12:36:57 crc kubenswrapper[4689]: I1210 12:36:57.779427 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7c5ed288-dbbc-4c61-bd51-9c4b43375ad5-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "7c5ed288-dbbc-4c61-bd51-9c4b43375ad5" (UID: "7c5ed288-dbbc-4c61-bd51-9c4b43375ad5"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 12:36:57 crc kubenswrapper[4689]: I1210 12:36:57.786932 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7c5ed288-dbbc-4c61-bd51-9c4b43375ad5-kube-api-access-8js26" (OuterVolumeSpecName: "kube-api-access-8js26") pod "7c5ed288-dbbc-4c61-bd51-9c4b43375ad5" (UID: "7c5ed288-dbbc-4c61-bd51-9c4b43375ad5"). InnerVolumeSpecName "kube-api-access-8js26". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 12:36:57 crc kubenswrapper[4689]: I1210 12:36:57.880796 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f0eca891-5410-41d4-a578-61f77c7f5978-operator-scripts\") pod \"f0eca891-5410-41d4-a578-61f77c7f5978\" (UID: \"f0eca891-5410-41d4-a578-61f77c7f5978\") " Dec 10 12:36:57 crc kubenswrapper[4689]: I1210 12:36:57.880866 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d2bh7\" (UniqueName: \"kubernetes.io/projected/271d3ee1-d2ae-41da-95bf-85a9c45dfad5-kube-api-access-d2bh7\") pod \"271d3ee1-d2ae-41da-95bf-85a9c45dfad5\" (UID: \"271d3ee1-d2ae-41da-95bf-85a9c45dfad5\") " Dec 10 12:36:57 crc kubenswrapper[4689]: I1210 12:36:57.880915 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zq5gz\" (UniqueName: \"kubernetes.io/projected/cfee700e-fe0c-4e0b-90f3-ee2741a787ba-kube-api-access-zq5gz\") pod \"cfee700e-fe0c-4e0b-90f3-ee2741a787ba\" (UID: \"cfee700e-fe0c-4e0b-90f3-ee2741a787ba\") " Dec 10 12:36:57 crc kubenswrapper[4689]: I1210 12:36:57.880971 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gklfd\" (UniqueName: \"kubernetes.io/projected/f0eca891-5410-41d4-a578-61f77c7f5978-kube-api-access-gklfd\") pod \"f0eca891-5410-41d4-a578-61f77c7f5978\" (UID: \"f0eca891-5410-41d4-a578-61f77c7f5978\") " Dec 10 12:36:57 crc kubenswrapper[4689]: I1210 12:36:57.881026 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cfee700e-fe0c-4e0b-90f3-ee2741a787ba-operator-scripts\") pod \"cfee700e-fe0c-4e0b-90f3-ee2741a787ba\" (UID: \"cfee700e-fe0c-4e0b-90f3-ee2741a787ba\") " Dec 10 12:36:57 crc kubenswrapper[4689]: I1210 12:36:57.881104 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/271d3ee1-d2ae-41da-95bf-85a9c45dfad5-operator-scripts\") pod \"271d3ee1-d2ae-41da-95bf-85a9c45dfad5\" (UID: \"271d3ee1-d2ae-41da-95bf-85a9c45dfad5\") " Dec 10 12:36:57 crc kubenswrapper[4689]: I1210 12:36:57.881450 4689 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8js26\" (UniqueName: \"kubernetes.io/projected/7c5ed288-dbbc-4c61-bd51-9c4b43375ad5-kube-api-access-8js26\") on node \"crc\" DevicePath \"\"" Dec 10 12:36:57 crc kubenswrapper[4689]: I1210 12:36:57.881465 4689 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7c5ed288-dbbc-4c61-bd51-9c4b43375ad5-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 10 12:36:57 crc kubenswrapper[4689]: I1210 12:36:57.882064 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/271d3ee1-d2ae-41da-95bf-85a9c45dfad5-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "271d3ee1-d2ae-41da-95bf-85a9c45dfad5" (UID: "271d3ee1-d2ae-41da-95bf-85a9c45dfad5"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 12:36:57 crc kubenswrapper[4689]: I1210 12:36:57.882198 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f0eca891-5410-41d4-a578-61f77c7f5978-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "f0eca891-5410-41d4-a578-61f77c7f5978" (UID: "f0eca891-5410-41d4-a578-61f77c7f5978"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 12:36:57 crc kubenswrapper[4689]: I1210 12:36:57.882922 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cfee700e-fe0c-4e0b-90f3-ee2741a787ba-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "cfee700e-fe0c-4e0b-90f3-ee2741a787ba" (UID: "cfee700e-fe0c-4e0b-90f3-ee2741a787ba"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 12:36:57 crc kubenswrapper[4689]: I1210 12:36:57.885684 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f0eca891-5410-41d4-a578-61f77c7f5978-kube-api-access-gklfd" (OuterVolumeSpecName: "kube-api-access-gklfd") pod "f0eca891-5410-41d4-a578-61f77c7f5978" (UID: "f0eca891-5410-41d4-a578-61f77c7f5978"). InnerVolumeSpecName "kube-api-access-gklfd". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 12:36:57 crc kubenswrapper[4689]: I1210 12:36:57.886568 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cfee700e-fe0c-4e0b-90f3-ee2741a787ba-kube-api-access-zq5gz" (OuterVolumeSpecName: "kube-api-access-zq5gz") pod "cfee700e-fe0c-4e0b-90f3-ee2741a787ba" (UID: "cfee700e-fe0c-4e0b-90f3-ee2741a787ba"). InnerVolumeSpecName "kube-api-access-zq5gz". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 12:36:57 crc kubenswrapper[4689]: I1210 12:36:57.886903 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/271d3ee1-d2ae-41da-95bf-85a9c45dfad5-kube-api-access-d2bh7" (OuterVolumeSpecName: "kube-api-access-d2bh7") pod "271d3ee1-d2ae-41da-95bf-85a9c45dfad5" (UID: "271d3ee1-d2ae-41da-95bf-85a9c45dfad5"). InnerVolumeSpecName "kube-api-access-d2bh7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 12:36:57 crc kubenswrapper[4689]: I1210 12:36:57.983047 4689 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/271d3ee1-d2ae-41da-95bf-85a9c45dfad5-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 10 12:36:57 crc kubenswrapper[4689]: I1210 12:36:57.983249 4689 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f0eca891-5410-41d4-a578-61f77c7f5978-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 10 12:36:57 crc kubenswrapper[4689]: I1210 12:36:57.983309 4689 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d2bh7\" (UniqueName: \"kubernetes.io/projected/271d3ee1-d2ae-41da-95bf-85a9c45dfad5-kube-api-access-d2bh7\") on node \"crc\" DevicePath \"\"" Dec 10 12:36:57 crc kubenswrapper[4689]: I1210 12:36:57.983362 4689 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zq5gz\" (UniqueName: \"kubernetes.io/projected/cfee700e-fe0c-4e0b-90f3-ee2741a787ba-kube-api-access-zq5gz\") on node \"crc\" DevicePath \"\"" Dec 10 12:36:57 crc kubenswrapper[4689]: I1210 12:36:57.983412 4689 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gklfd\" (UniqueName: \"kubernetes.io/projected/f0eca891-5410-41d4-a578-61f77c7f5978-kube-api-access-gklfd\") on node \"crc\" DevicePath \"\"" Dec 10 12:36:57 crc kubenswrapper[4689]: I1210 12:36:57.983499 4689 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cfee700e-fe0c-4e0b-90f3-ee2741a787ba-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 10 12:36:58 crc kubenswrapper[4689]: I1210 12:36:58.198790 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-8ec7-account-create-update-bztj5" event={"ID":"f0eca891-5410-41d4-a578-61f77c7f5978","Type":"ContainerDied","Data":"f47857a1b9002cab0bfee75dcb54c99770a6f2e02ee10a50b988297af145132c"} Dec 10 12:36:58 crc kubenswrapper[4689]: I1210 12:36:58.199073 4689 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f47857a1b9002cab0bfee75dcb54c99770a6f2e02ee10a50b988297af145132c" Dec 10 12:36:58 crc kubenswrapper[4689]: I1210 12:36:58.199129 4689 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-8ec7-account-create-update-bztj5" Dec 10 12:36:58 crc kubenswrapper[4689]: I1210 12:36:58.206411 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-35fe-account-create-update-9th5x" event={"ID":"cfee700e-fe0c-4e0b-90f3-ee2741a787ba","Type":"ContainerDied","Data":"9f60e41dd6228a7c9f14d1907dc3226dc7aa59a94aef1bee7081e9e737e3041e"} Dec 10 12:36:58 crc kubenswrapper[4689]: I1210 12:36:58.206448 4689 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9f60e41dd6228a7c9f14d1907dc3226dc7aa59a94aef1bee7081e9e737e3041e" Dec 10 12:36:58 crc kubenswrapper[4689]: I1210 12:36:58.206419 4689 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-35fe-account-create-update-9th5x" Dec 10 12:36:58 crc kubenswrapper[4689]: I1210 12:36:58.208040 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-00e7-account-create-update-bbfqf" event={"ID":"7c5ed288-dbbc-4c61-bd51-9c4b43375ad5","Type":"ContainerDied","Data":"6250353c6881bc53909c63d6dbe02f22998ae20e40b1fabd9e69128bf014710b"} Dec 10 12:36:58 crc kubenswrapper[4689]: I1210 12:36:58.208081 4689 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6250353c6881bc53909c63d6dbe02f22998ae20e40b1fabd9e69128bf014710b" Dec 10 12:36:58 crc kubenswrapper[4689]: I1210 12:36:58.208170 4689 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-00e7-account-create-update-bbfqf" Dec 10 12:36:58 crc kubenswrapper[4689]: I1210 12:36:58.218210 4689 generic.go:334] "Generic (PLEG): container finished" podID="1e43da96-bc95-4f10-ad40-c4c5c32fac22" containerID="9660f24b1d1ac3ebe1cafbeb22414c267cca70546fb1a05bdb449b107b85b5f9" exitCode=0 Dec 10 12:36:58 crc kubenswrapper[4689]: I1210 12:36:58.218247 4689 generic.go:334] "Generic (PLEG): container finished" podID="1e43da96-bc95-4f10-ad40-c4c5c32fac22" containerID="5ca8c10afe545f71866da6691390357a2a03ddde69c17eca561304cf78c8bdd3" exitCode=2 Dec 10 12:36:58 crc kubenswrapper[4689]: I1210 12:36:58.218259 4689 generic.go:334] "Generic (PLEG): container finished" podID="1e43da96-bc95-4f10-ad40-c4c5c32fac22" containerID="af4f19a38a9bfbfc03cfd8e96effa80638f0a87cd4b83256aadeaa165c4254b3" exitCode=0 Dec 10 12:36:58 crc kubenswrapper[4689]: I1210 12:36:58.218301 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1e43da96-bc95-4f10-ad40-c4c5c32fac22","Type":"ContainerDied","Data":"9660f24b1d1ac3ebe1cafbeb22414c267cca70546fb1a05bdb449b107b85b5f9"} Dec 10 12:36:58 crc kubenswrapper[4689]: I1210 12:36:58.218331 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1e43da96-bc95-4f10-ad40-c4c5c32fac22","Type":"ContainerDied","Data":"5ca8c10afe545f71866da6691390357a2a03ddde69c17eca561304cf78c8bdd3"} Dec 10 12:36:58 crc kubenswrapper[4689]: I1210 12:36:58.218344 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1e43da96-bc95-4f10-ad40-c4c5c32fac22","Type":"ContainerDied","Data":"af4f19a38a9bfbfc03cfd8e96effa80638f0a87cd4b83256aadeaa165c4254b3"} Dec 10 12:36:58 crc kubenswrapper[4689]: I1210 12:36:58.221224 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-thck8" event={"ID":"271d3ee1-d2ae-41da-95bf-85a9c45dfad5","Type":"ContainerDied","Data":"9064305662c233e985a1b562e3af01d23190b56df3a9934d411b4b6eed62b7b8"} Dec 10 12:36:58 crc kubenswrapper[4689]: I1210 12:36:58.221265 4689 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9064305662c233e985a1b562e3af01d23190b56df3a9934d411b4b6eed62b7b8" Dec 10 12:36:58 crc kubenswrapper[4689]: I1210 12:36:58.221322 4689 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-thck8" Dec 10 12:37:03 crc kubenswrapper[4689]: I1210 12:37:03.251890 4689 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 10 12:37:03 crc kubenswrapper[4689]: I1210 12:37:03.273864 4689 generic.go:334] "Generic (PLEG): container finished" podID="1e43da96-bc95-4f10-ad40-c4c5c32fac22" containerID="503c2a5ed0fb7eb53e9fcebee1e175cfdfbbe5d6f7e530e3f4b51d690048b11d" exitCode=0 Dec 10 12:37:03 crc kubenswrapper[4689]: I1210 12:37:03.273916 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1e43da96-bc95-4f10-ad40-c4c5c32fac22","Type":"ContainerDied","Data":"503c2a5ed0fb7eb53e9fcebee1e175cfdfbbe5d6f7e530e3f4b51d690048b11d"} Dec 10 12:37:03 crc kubenswrapper[4689]: I1210 12:37:03.273950 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1e43da96-bc95-4f10-ad40-c4c5c32fac22","Type":"ContainerDied","Data":"de99cf2e7b05f968bbecb80bc0f54f74ca4ac215ca1d3dee8277e2f86027bd6c"} Dec 10 12:37:03 crc kubenswrapper[4689]: I1210 12:37:03.273988 4689 scope.go:117] "RemoveContainer" containerID="9660f24b1d1ac3ebe1cafbeb22414c267cca70546fb1a05bdb449b107b85b5f9" Dec 10 12:37:03 crc kubenswrapper[4689]: I1210 12:37:03.274153 4689 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 10 12:37:03 crc kubenswrapper[4689]: I1210 12:37:03.313075 4689 scope.go:117] "RemoveContainer" containerID="5ca8c10afe545f71866da6691390357a2a03ddde69c17eca561304cf78c8bdd3" Dec 10 12:37:03 crc kubenswrapper[4689]: I1210 12:37:03.335346 4689 scope.go:117] "RemoveContainer" containerID="af4f19a38a9bfbfc03cfd8e96effa80638f0a87cd4b83256aadeaa165c4254b3" Dec 10 12:37:03 crc kubenswrapper[4689]: I1210 12:37:03.369494 4689 scope.go:117] "RemoveContainer" containerID="503c2a5ed0fb7eb53e9fcebee1e175cfdfbbe5d6f7e530e3f4b51d690048b11d" Dec 10 12:37:03 crc kubenswrapper[4689]: I1210 12:37:03.392281 4689 scope.go:117] "RemoveContainer" containerID="9660f24b1d1ac3ebe1cafbeb22414c267cca70546fb1a05bdb449b107b85b5f9" Dec 10 12:37:03 crc kubenswrapper[4689]: E1210 12:37:03.392716 4689 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9660f24b1d1ac3ebe1cafbeb22414c267cca70546fb1a05bdb449b107b85b5f9\": container with ID starting with 9660f24b1d1ac3ebe1cafbeb22414c267cca70546fb1a05bdb449b107b85b5f9 not found: ID does not exist" containerID="9660f24b1d1ac3ebe1cafbeb22414c267cca70546fb1a05bdb449b107b85b5f9" Dec 10 12:37:03 crc kubenswrapper[4689]: I1210 12:37:03.392754 4689 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9660f24b1d1ac3ebe1cafbeb22414c267cca70546fb1a05bdb449b107b85b5f9"} err="failed to get container status \"9660f24b1d1ac3ebe1cafbeb22414c267cca70546fb1a05bdb449b107b85b5f9\": rpc error: code = NotFound desc = could not find container \"9660f24b1d1ac3ebe1cafbeb22414c267cca70546fb1a05bdb449b107b85b5f9\": container with ID starting with 9660f24b1d1ac3ebe1cafbeb22414c267cca70546fb1a05bdb449b107b85b5f9 not found: ID does not exist" Dec 10 12:37:03 crc kubenswrapper[4689]: I1210 12:37:03.392781 4689 scope.go:117] "RemoveContainer" containerID="5ca8c10afe545f71866da6691390357a2a03ddde69c17eca561304cf78c8bdd3" Dec 10 12:37:03 crc kubenswrapper[4689]: I1210 12:37:03.392799 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1e43da96-bc95-4f10-ad40-c4c5c32fac22-combined-ca-bundle\") pod \"1e43da96-bc95-4f10-ad40-c4c5c32fac22\" (UID: 
\"1e43da96-bc95-4f10-ad40-c4c5c32fac22\") " Dec 10 12:37:03 crc kubenswrapper[4689]: I1210 12:37:03.392872 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1e43da96-bc95-4f10-ad40-c4c5c32fac22-config-data\") pod \"1e43da96-bc95-4f10-ad40-c4c5c32fac22\" (UID: \"1e43da96-bc95-4f10-ad40-c4c5c32fac22\") " Dec 10 12:37:03 crc kubenswrapper[4689]: I1210 12:37:03.392924 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1e43da96-bc95-4f10-ad40-c4c5c32fac22-scripts\") pod \"1e43da96-bc95-4f10-ad40-c4c5c32fac22\" (UID: \"1e43da96-bc95-4f10-ad40-c4c5c32fac22\") " Dec 10 12:37:03 crc kubenswrapper[4689]: I1210 12:37:03.393029 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/1e43da96-bc95-4f10-ad40-c4c5c32fac22-sg-core-conf-yaml\") pod \"1e43da96-bc95-4f10-ad40-c4c5c32fac22\" (UID: \"1e43da96-bc95-4f10-ad40-c4c5c32fac22\") " Dec 10 12:37:03 crc kubenswrapper[4689]: I1210 12:37:03.393094 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1e43da96-bc95-4f10-ad40-c4c5c32fac22-run-httpd\") pod \"1e43da96-bc95-4f10-ad40-c4c5c32fac22\" (UID: \"1e43da96-bc95-4f10-ad40-c4c5c32fac22\") " Dec 10 12:37:03 crc kubenswrapper[4689]: I1210 12:37:03.393119 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cspbn\" (UniqueName: \"kubernetes.io/projected/1e43da96-bc95-4f10-ad40-c4c5c32fac22-kube-api-access-cspbn\") pod \"1e43da96-bc95-4f10-ad40-c4c5c32fac22\" (UID: \"1e43da96-bc95-4f10-ad40-c4c5c32fac22\") " Dec 10 12:37:03 crc kubenswrapper[4689]: I1210 12:37:03.393192 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1e43da96-bc95-4f10-ad40-c4c5c32fac22-log-httpd\") pod \"1e43da96-bc95-4f10-ad40-c4c5c32fac22\" (UID: \"1e43da96-bc95-4f10-ad40-c4c5c32fac22\") " Dec 10 12:37:03 crc kubenswrapper[4689]: E1210 12:37:03.393510 4689 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5ca8c10afe545f71866da6691390357a2a03ddde69c17eca561304cf78c8bdd3\": container with ID starting with 5ca8c10afe545f71866da6691390357a2a03ddde69c17eca561304cf78c8bdd3 not found: ID does not exist" containerID="5ca8c10afe545f71866da6691390357a2a03ddde69c17eca561304cf78c8bdd3" Dec 10 12:37:03 crc kubenswrapper[4689]: I1210 12:37:03.393554 4689 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5ca8c10afe545f71866da6691390357a2a03ddde69c17eca561304cf78c8bdd3"} err="failed to get container status \"5ca8c10afe545f71866da6691390357a2a03ddde69c17eca561304cf78c8bdd3\": rpc error: code = NotFound desc = could not find container \"5ca8c10afe545f71866da6691390357a2a03ddde69c17eca561304cf78c8bdd3\": container with ID starting with 5ca8c10afe545f71866da6691390357a2a03ddde69c17eca561304cf78c8bdd3 not found: ID does not exist" Dec 10 12:37:03 crc kubenswrapper[4689]: I1210 12:37:03.393579 4689 scope.go:117] "RemoveContainer" containerID="af4f19a38a9bfbfc03cfd8e96effa80638f0a87cd4b83256aadeaa165c4254b3" Dec 10 12:37:03 crc kubenswrapper[4689]: I1210 12:37:03.394196 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/1e43da96-bc95-4f10-ad40-c4c5c32fac22-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "1e43da96-bc95-4f10-ad40-c4c5c32fac22" (UID: "1e43da96-bc95-4f10-ad40-c4c5c32fac22"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 10 12:37:03 crc kubenswrapper[4689]: E1210 12:37:03.394280 4689 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"af4f19a38a9bfbfc03cfd8e96effa80638f0a87cd4b83256aadeaa165c4254b3\": container with ID starting with af4f19a38a9bfbfc03cfd8e96effa80638f0a87cd4b83256aadeaa165c4254b3 not found: ID does not exist" containerID="af4f19a38a9bfbfc03cfd8e96effa80638f0a87cd4b83256aadeaa165c4254b3" Dec 10 12:37:03 crc kubenswrapper[4689]: I1210 12:37:03.394306 4689 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"af4f19a38a9bfbfc03cfd8e96effa80638f0a87cd4b83256aadeaa165c4254b3"} err="failed to get container status \"af4f19a38a9bfbfc03cfd8e96effa80638f0a87cd4b83256aadeaa165c4254b3\": rpc error: code = NotFound desc = could not find container \"af4f19a38a9bfbfc03cfd8e96effa80638f0a87cd4b83256aadeaa165c4254b3\": container with ID starting with af4f19a38a9bfbfc03cfd8e96effa80638f0a87cd4b83256aadeaa165c4254b3 not found: ID does not exist" Dec 10 12:37:03 crc kubenswrapper[4689]: I1210 12:37:03.394324 4689 scope.go:117] "RemoveContainer" containerID="503c2a5ed0fb7eb53e9fcebee1e175cfdfbbe5d6f7e530e3f4b51d690048b11d" Dec 10 12:37:03 crc kubenswrapper[4689]: I1210 12:37:03.394431 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1e43da96-bc95-4f10-ad40-c4c5c32fac22-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "1e43da96-bc95-4f10-ad40-c4c5c32fac22" (UID: "1e43da96-bc95-4f10-ad40-c4c5c32fac22"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 10 12:37:03 crc kubenswrapper[4689]: E1210 12:37:03.395245 4689 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"503c2a5ed0fb7eb53e9fcebee1e175cfdfbbe5d6f7e530e3f4b51d690048b11d\": container with ID starting with 503c2a5ed0fb7eb53e9fcebee1e175cfdfbbe5d6f7e530e3f4b51d690048b11d not found: ID does not exist" containerID="503c2a5ed0fb7eb53e9fcebee1e175cfdfbbe5d6f7e530e3f4b51d690048b11d" Dec 10 12:37:03 crc kubenswrapper[4689]: I1210 12:37:03.395291 4689 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"503c2a5ed0fb7eb53e9fcebee1e175cfdfbbe5d6f7e530e3f4b51d690048b11d"} err="failed to get container status \"503c2a5ed0fb7eb53e9fcebee1e175cfdfbbe5d6f7e530e3f4b51d690048b11d\": rpc error: code = NotFound desc = could not find container \"503c2a5ed0fb7eb53e9fcebee1e175cfdfbbe5d6f7e530e3f4b51d690048b11d\": container with ID starting with 503c2a5ed0fb7eb53e9fcebee1e175cfdfbbe5d6f7e530e3f4b51d690048b11d not found: ID does not exist" Dec 10 12:37:03 crc kubenswrapper[4689]: I1210 12:37:03.398570 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1e43da96-bc95-4f10-ad40-c4c5c32fac22-scripts" (OuterVolumeSpecName: "scripts") pod "1e43da96-bc95-4f10-ad40-c4c5c32fac22" (UID: "1e43da96-bc95-4f10-ad40-c4c5c32fac22"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 12:37:03 crc kubenswrapper[4689]: I1210 12:37:03.413439 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1e43da96-bc95-4f10-ad40-c4c5c32fac22-kube-api-access-cspbn" (OuterVolumeSpecName: "kube-api-access-cspbn") pod "1e43da96-bc95-4f10-ad40-c4c5c32fac22" (UID: "1e43da96-bc95-4f10-ad40-c4c5c32fac22"). InnerVolumeSpecName "kube-api-access-cspbn". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 12:37:03 crc kubenswrapper[4689]: I1210 12:37:03.431666 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1e43da96-bc95-4f10-ad40-c4c5c32fac22-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "1e43da96-bc95-4f10-ad40-c4c5c32fac22" (UID: "1e43da96-bc95-4f10-ad40-c4c5c32fac22"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 12:37:03 crc kubenswrapper[4689]: I1210 12:37:03.497611 4689 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cspbn\" (UniqueName: \"kubernetes.io/projected/1e43da96-bc95-4f10-ad40-c4c5c32fac22-kube-api-access-cspbn\") on node \"crc\" DevicePath \"\"" Dec 10 12:37:03 crc kubenswrapper[4689]: I1210 12:37:03.497646 4689 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1e43da96-bc95-4f10-ad40-c4c5c32fac22-log-httpd\") on node \"crc\" DevicePath \"\"" Dec 10 12:37:03 crc kubenswrapper[4689]: I1210 12:37:03.497664 4689 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1e43da96-bc95-4f10-ad40-c4c5c32fac22-scripts\") on node \"crc\" DevicePath \"\"" Dec 10 12:37:03 crc kubenswrapper[4689]: I1210 12:37:03.497674 4689 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/1e43da96-bc95-4f10-ad40-c4c5c32fac22-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Dec 10 12:37:03 crc kubenswrapper[4689]: I1210 12:37:03.497682 4689 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1e43da96-bc95-4f10-ad40-c4c5c32fac22-run-httpd\") on node \"crc\" DevicePath \"\"" Dec 10 12:37:03 crc kubenswrapper[4689]: I1210 12:37:03.504335 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1e43da96-bc95-4f10-ad40-c4c5c32fac22-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1e43da96-bc95-4f10-ad40-c4c5c32fac22" (UID: "1e43da96-bc95-4f10-ad40-c4c5c32fac22"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 12:37:03 crc kubenswrapper[4689]: I1210 12:37:03.529443 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1e43da96-bc95-4f10-ad40-c4c5c32fac22-config-data" (OuterVolumeSpecName: "config-data") pod "1e43da96-bc95-4f10-ad40-c4c5c32fac22" (UID: "1e43da96-bc95-4f10-ad40-c4c5c32fac22"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 12:37:03 crc kubenswrapper[4689]: I1210 12:37:03.598921 4689 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1e43da96-bc95-4f10-ad40-c4c5c32fac22-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 10 12:37:03 crc kubenswrapper[4689]: I1210 12:37:03.599201 4689 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1e43da96-bc95-4f10-ad40-c4c5c32fac22-config-data\") on node \"crc\" DevicePath \"\"" Dec 10 12:37:03 crc kubenswrapper[4689]: I1210 12:37:03.637501 4689 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 10 12:37:03 crc kubenswrapper[4689]: I1210 12:37:03.660011 4689 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Dec 10 12:37:03 crc kubenswrapper[4689]: I1210 12:37:03.669353 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Dec 10 12:37:03 crc kubenswrapper[4689]: E1210 12:37:03.670526 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="271d3ee1-d2ae-41da-95bf-85a9c45dfad5" containerName="mariadb-database-create" Dec 10 12:37:03 crc kubenswrapper[4689]: I1210 12:37:03.670615 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="271d3ee1-d2ae-41da-95bf-85a9c45dfad5" containerName="mariadb-database-create" Dec 10 12:37:03 crc kubenswrapper[4689]: E1210 12:37:03.670725 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cfee700e-fe0c-4e0b-90f3-ee2741a787ba" containerName="mariadb-account-create-update" Dec 10 12:37:03 crc kubenswrapper[4689]: I1210 12:37:03.670827 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="cfee700e-fe0c-4e0b-90f3-ee2741a787ba" containerName="mariadb-account-create-update" Dec 10 12:37:03 crc kubenswrapper[4689]: E1210 12:37:03.670911 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1e43da96-bc95-4f10-ad40-c4c5c32fac22" containerName="ceilometer-notification-agent" Dec 10 12:37:03 crc kubenswrapper[4689]: I1210 12:37:03.671022 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="1e43da96-bc95-4f10-ad40-c4c5c32fac22" containerName="ceilometer-notification-agent" Dec 10 12:37:03 crc kubenswrapper[4689]: E1210 12:37:03.671130 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7c5ed288-dbbc-4c61-bd51-9c4b43375ad5" containerName="mariadb-account-create-update" Dec 10 12:37:03 crc kubenswrapper[4689]: I1210 12:37:03.671187 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="7c5ed288-dbbc-4c61-bd51-9c4b43375ad5" containerName="mariadb-account-create-update" Dec 10 12:37:03 crc kubenswrapper[4689]: E1210 12:37:03.671245 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1e43da96-bc95-4f10-ad40-c4c5c32fac22" containerName="sg-core" Dec 10 12:37:03 crc kubenswrapper[4689]: I1210 12:37:03.671313 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="1e43da96-bc95-4f10-ad40-c4c5c32fac22" containerName="sg-core" Dec 10 12:37:03 crc kubenswrapper[4689]: E1210 12:37:03.671378 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1e43da96-bc95-4f10-ad40-c4c5c32fac22" containerName="proxy-httpd" Dec 10 12:37:03 crc kubenswrapper[4689]: I1210 12:37:03.671435 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="1e43da96-bc95-4f10-ad40-c4c5c32fac22" containerName="proxy-httpd" Dec 10 12:37:03 crc kubenswrapper[4689]: E1210 12:37:03.671500 4689 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f0eca891-5410-41d4-a578-61f77c7f5978" containerName="mariadb-account-create-update" Dec 10 12:37:03 crc kubenswrapper[4689]: I1210 12:37:03.684232 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="f0eca891-5410-41d4-a578-61f77c7f5978" containerName="mariadb-account-create-update" Dec 10 12:37:03 crc kubenswrapper[4689]: E1210 12:37:03.684317 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1e43da96-bc95-4f10-ad40-c4c5c32fac22" containerName="ceilometer-central-agent" Dec 10 12:37:03 crc kubenswrapper[4689]: I1210 12:37:03.684333 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="1e43da96-bc95-4f10-ad40-c4c5c32fac22" containerName="ceilometer-central-agent" Dec 10 12:37:03 crc kubenswrapper[4689]: E1210 12:37:03.684347 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c4d9408c-062f-45ad-a393-da20c66d7d40" containerName="mariadb-database-create" Dec 10 12:37:03 crc kubenswrapper[4689]: I1210 12:37:03.684354 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="c4d9408c-062f-45ad-a393-da20c66d7d40" containerName="mariadb-database-create" Dec 10 12:37:03 crc kubenswrapper[4689]: E1210 12:37:03.684374 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eddd02fa-d59c-407a-80d9-6dfe1066ac88" containerName="mariadb-database-create" Dec 10 12:37:03 crc kubenswrapper[4689]: I1210 12:37:03.684380 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="eddd02fa-d59c-407a-80d9-6dfe1066ac88" containerName="mariadb-database-create" Dec 10 12:37:03 crc kubenswrapper[4689]: I1210 12:37:03.684883 4689 memory_manager.go:354] "RemoveStaleState removing state" podUID="1e43da96-bc95-4f10-ad40-c4c5c32fac22" containerName="proxy-httpd" Dec 10 12:37:03 crc kubenswrapper[4689]: I1210 12:37:03.684907 4689 memory_manager.go:354] "RemoveStaleState removing state" podUID="f0eca891-5410-41d4-a578-61f77c7f5978" containerName="mariadb-account-create-update" Dec 10 12:37:03 crc kubenswrapper[4689]: I1210 12:37:03.684937 4689 memory_manager.go:354] "RemoveStaleState removing state" podUID="7c5ed288-dbbc-4c61-bd51-9c4b43375ad5" containerName="mariadb-account-create-update" Dec 10 12:37:03 crc kubenswrapper[4689]: I1210 12:37:03.684956 4689 memory_manager.go:354] "RemoveStaleState removing state" podUID="cfee700e-fe0c-4e0b-90f3-ee2741a787ba" containerName="mariadb-account-create-update" Dec 10 12:37:03 crc kubenswrapper[4689]: I1210 12:37:03.684987 4689 memory_manager.go:354] "RemoveStaleState removing state" podUID="c4d9408c-062f-45ad-a393-da20c66d7d40" containerName="mariadb-database-create" Dec 10 12:37:03 crc kubenswrapper[4689]: I1210 12:37:03.685009 4689 memory_manager.go:354] "RemoveStaleState removing state" podUID="271d3ee1-d2ae-41da-95bf-85a9c45dfad5" containerName="mariadb-database-create" Dec 10 12:37:03 crc kubenswrapper[4689]: I1210 12:37:03.685028 4689 memory_manager.go:354] "RemoveStaleState removing state" podUID="1e43da96-bc95-4f10-ad40-c4c5c32fac22" containerName="ceilometer-central-agent" Dec 10 12:37:03 crc kubenswrapper[4689]: I1210 12:37:03.685048 4689 memory_manager.go:354] "RemoveStaleState removing state" podUID="eddd02fa-d59c-407a-80d9-6dfe1066ac88" containerName="mariadb-database-create" Dec 10 12:37:03 crc kubenswrapper[4689]: I1210 12:37:03.685062 4689 memory_manager.go:354] "RemoveStaleState removing state" podUID="1e43da96-bc95-4f10-ad40-c4c5c32fac22" containerName="ceilometer-notification-agent" Dec 10 12:37:03 crc kubenswrapper[4689]: 
I1210 12:37:03.685076 4689 memory_manager.go:354] "RemoveStaleState removing state" podUID="1e43da96-bc95-4f10-ad40-c4c5c32fac22" containerName="sg-core" Dec 10 12:37:03 crc kubenswrapper[4689]: I1210 12:37:03.687842 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 10 12:37:03 crc kubenswrapper[4689]: I1210 12:37:03.687943 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 10 12:37:03 crc kubenswrapper[4689]: I1210 12:37:03.693262 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Dec 10 12:37:03 crc kubenswrapper[4689]: I1210 12:37:03.694872 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Dec 10 12:37:03 crc kubenswrapper[4689]: I1210 12:37:03.700587 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e4cdde9e-99ce-4f9d-8699-0c1a1cd84d9f-run-httpd\") pod \"ceilometer-0\" (UID: \"e4cdde9e-99ce-4f9d-8699-0c1a1cd84d9f\") " pod="openstack/ceilometer-0" Dec 10 12:37:03 crc kubenswrapper[4689]: I1210 12:37:03.700640 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e4cdde9e-99ce-4f9d-8699-0c1a1cd84d9f-scripts\") pod \"ceilometer-0\" (UID: \"e4cdde9e-99ce-4f9d-8699-0c1a1cd84d9f\") " pod="openstack/ceilometer-0" Dec 10 12:37:03 crc kubenswrapper[4689]: I1210 12:37:03.700668 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e4cdde9e-99ce-4f9d-8699-0c1a1cd84d9f-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"e4cdde9e-99ce-4f9d-8699-0c1a1cd84d9f\") " pod="openstack/ceilometer-0" Dec 10 12:37:03 crc kubenswrapper[4689]: I1210 12:37:03.700696 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ztvnw\" (UniqueName: \"kubernetes.io/projected/e4cdde9e-99ce-4f9d-8699-0c1a1cd84d9f-kube-api-access-ztvnw\") pod \"ceilometer-0\" (UID: \"e4cdde9e-99ce-4f9d-8699-0c1a1cd84d9f\") " pod="openstack/ceilometer-0" Dec 10 12:37:03 crc kubenswrapper[4689]: I1210 12:37:03.700730 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e4cdde9e-99ce-4f9d-8699-0c1a1cd84d9f-config-data\") pod \"ceilometer-0\" (UID: \"e4cdde9e-99ce-4f9d-8699-0c1a1cd84d9f\") " pod="openstack/ceilometer-0" Dec 10 12:37:03 crc kubenswrapper[4689]: I1210 12:37:03.700777 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e4cdde9e-99ce-4f9d-8699-0c1a1cd84d9f-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"e4cdde9e-99ce-4f9d-8699-0c1a1cd84d9f\") " pod="openstack/ceilometer-0" Dec 10 12:37:03 crc kubenswrapper[4689]: I1210 12:37:03.700821 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e4cdde9e-99ce-4f9d-8699-0c1a1cd84d9f-log-httpd\") pod \"ceilometer-0\" (UID: \"e4cdde9e-99ce-4f9d-8699-0c1a1cd84d9f\") " pod="openstack/ceilometer-0" Dec 10 12:37:03 crc kubenswrapper[4689]: I1210 12:37:03.798802 4689 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/nova-cell0-conductor-db-sync-b6qlt"] Dec 10 12:37:03 crc kubenswrapper[4689]: I1210 12:37:03.800089 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-b6qlt" Dec 10 12:37:03 crc kubenswrapper[4689]: I1210 12:37:03.802140 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e38a3466-6fce-418f-9db5-5da12b7fdf2b-scripts\") pod \"nova-cell0-conductor-db-sync-b6qlt\" (UID: \"e38a3466-6fce-418f-9db5-5da12b7fdf2b\") " pod="openstack/nova-cell0-conductor-db-sync-b6qlt" Dec 10 12:37:03 crc kubenswrapper[4689]: I1210 12:37:03.802178 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e4cdde9e-99ce-4f9d-8699-0c1a1cd84d9f-run-httpd\") pod \"ceilometer-0\" (UID: \"e4cdde9e-99ce-4f9d-8699-0c1a1cd84d9f\") " pod="openstack/ceilometer-0" Dec 10 12:37:03 crc kubenswrapper[4689]: I1210 12:37:03.802242 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e4cdde9e-99ce-4f9d-8699-0c1a1cd84d9f-scripts\") pod \"ceilometer-0\" (UID: \"e4cdde9e-99ce-4f9d-8699-0c1a1cd84d9f\") " pod="openstack/ceilometer-0" Dec 10 12:37:03 crc kubenswrapper[4689]: I1210 12:37:03.802264 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e4cdde9e-99ce-4f9d-8699-0c1a1cd84d9f-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"e4cdde9e-99ce-4f9d-8699-0c1a1cd84d9f\") " pod="openstack/ceilometer-0" Dec 10 12:37:03 crc kubenswrapper[4689]: I1210 12:37:03.802288 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ztvnw\" (UniqueName: \"kubernetes.io/projected/e4cdde9e-99ce-4f9d-8699-0c1a1cd84d9f-kube-api-access-ztvnw\") pod \"ceilometer-0\" (UID: \"e4cdde9e-99ce-4f9d-8699-0c1a1cd84d9f\") " pod="openstack/ceilometer-0" Dec 10 12:37:03 crc kubenswrapper[4689]: I1210 12:37:03.802316 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e38a3466-6fce-418f-9db5-5da12b7fdf2b-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-b6qlt\" (UID: \"e38a3466-6fce-418f-9db5-5da12b7fdf2b\") " pod="openstack/nova-cell0-conductor-db-sync-b6qlt" Dec 10 12:37:03 crc kubenswrapper[4689]: I1210 12:37:03.802346 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e4cdde9e-99ce-4f9d-8699-0c1a1cd84d9f-config-data\") pod \"ceilometer-0\" (UID: \"e4cdde9e-99ce-4f9d-8699-0c1a1cd84d9f\") " pod="openstack/ceilometer-0" Dec 10 12:37:03 crc kubenswrapper[4689]: I1210 12:37:03.802396 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e4cdde9e-99ce-4f9d-8699-0c1a1cd84d9f-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"e4cdde9e-99ce-4f9d-8699-0c1a1cd84d9f\") " pod="openstack/ceilometer-0" Dec 10 12:37:03 crc kubenswrapper[4689]: I1210 12:37:03.802421 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e38a3466-6fce-418f-9db5-5da12b7fdf2b-config-data\") pod \"nova-cell0-conductor-db-sync-b6qlt\" (UID: 
\"e38a3466-6fce-418f-9db5-5da12b7fdf2b\") " pod="openstack/nova-cell0-conductor-db-sync-b6qlt" Dec 10 12:37:03 crc kubenswrapper[4689]: I1210 12:37:03.802460 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e4cdde9e-99ce-4f9d-8699-0c1a1cd84d9f-log-httpd\") pod \"ceilometer-0\" (UID: \"e4cdde9e-99ce-4f9d-8699-0c1a1cd84d9f\") " pod="openstack/ceilometer-0" Dec 10 12:37:03 crc kubenswrapper[4689]: I1210 12:37:03.802503 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h8vrx\" (UniqueName: \"kubernetes.io/projected/e38a3466-6fce-418f-9db5-5da12b7fdf2b-kube-api-access-h8vrx\") pod \"nova-cell0-conductor-db-sync-b6qlt\" (UID: \"e38a3466-6fce-418f-9db5-5da12b7fdf2b\") " pod="openstack/nova-cell0-conductor-db-sync-b6qlt" Dec 10 12:37:03 crc kubenswrapper[4689]: I1210 12:37:03.802730 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e4cdde9e-99ce-4f9d-8699-0c1a1cd84d9f-run-httpd\") pod \"ceilometer-0\" (UID: \"e4cdde9e-99ce-4f9d-8699-0c1a1cd84d9f\") " pod="openstack/ceilometer-0" Dec 10 12:37:03 crc kubenswrapper[4689]: I1210 12:37:03.803106 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Dec 10 12:37:03 crc kubenswrapper[4689]: I1210 12:37:03.803287 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-scripts" Dec 10 12:37:03 crc kubenswrapper[4689]: I1210 12:37:03.803708 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e4cdde9e-99ce-4f9d-8699-0c1a1cd84d9f-log-httpd\") pod \"ceilometer-0\" (UID: \"e4cdde9e-99ce-4f9d-8699-0c1a1cd84d9f\") " pod="openstack/ceilometer-0" Dec 10 12:37:03 crc kubenswrapper[4689]: I1210 12:37:03.806734 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e4cdde9e-99ce-4f9d-8699-0c1a1cd84d9f-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"e4cdde9e-99ce-4f9d-8699-0c1a1cd84d9f\") " pod="openstack/ceilometer-0" Dec 10 12:37:03 crc kubenswrapper[4689]: I1210 12:37:03.808303 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-b6qlt"] Dec 10 12:37:03 crc kubenswrapper[4689]: I1210 12:37:03.810271 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-z569c" Dec 10 12:37:03 crc kubenswrapper[4689]: I1210 12:37:03.810301 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e4cdde9e-99ce-4f9d-8699-0c1a1cd84d9f-scripts\") pod \"ceilometer-0\" (UID: \"e4cdde9e-99ce-4f9d-8699-0c1a1cd84d9f\") " pod="openstack/ceilometer-0" Dec 10 12:37:03 crc kubenswrapper[4689]: I1210 12:37:03.810797 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e4cdde9e-99ce-4f9d-8699-0c1a1cd84d9f-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"e4cdde9e-99ce-4f9d-8699-0c1a1cd84d9f\") " pod="openstack/ceilometer-0" Dec 10 12:37:03 crc kubenswrapper[4689]: I1210 12:37:03.811891 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e4cdde9e-99ce-4f9d-8699-0c1a1cd84d9f-config-data\") pod \"ceilometer-0\" (UID: 
\"e4cdde9e-99ce-4f9d-8699-0c1a1cd84d9f\") " pod="openstack/ceilometer-0" Dec 10 12:37:03 crc kubenswrapper[4689]: I1210 12:37:03.820929 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ztvnw\" (UniqueName: \"kubernetes.io/projected/e4cdde9e-99ce-4f9d-8699-0c1a1cd84d9f-kube-api-access-ztvnw\") pod \"ceilometer-0\" (UID: \"e4cdde9e-99ce-4f9d-8699-0c1a1cd84d9f\") " pod="openstack/ceilometer-0" Dec 10 12:37:03 crc kubenswrapper[4689]: I1210 12:37:03.904104 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e38a3466-6fce-418f-9db5-5da12b7fdf2b-scripts\") pod \"nova-cell0-conductor-db-sync-b6qlt\" (UID: \"e38a3466-6fce-418f-9db5-5da12b7fdf2b\") " pod="openstack/nova-cell0-conductor-db-sync-b6qlt" Dec 10 12:37:03 crc kubenswrapper[4689]: I1210 12:37:03.904430 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e38a3466-6fce-418f-9db5-5da12b7fdf2b-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-b6qlt\" (UID: \"e38a3466-6fce-418f-9db5-5da12b7fdf2b\") " pod="openstack/nova-cell0-conductor-db-sync-b6qlt" Dec 10 12:37:03 crc kubenswrapper[4689]: I1210 12:37:03.904481 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e38a3466-6fce-418f-9db5-5da12b7fdf2b-config-data\") pod \"nova-cell0-conductor-db-sync-b6qlt\" (UID: \"e38a3466-6fce-418f-9db5-5da12b7fdf2b\") " pod="openstack/nova-cell0-conductor-db-sync-b6qlt" Dec 10 12:37:03 crc kubenswrapper[4689]: I1210 12:37:03.904530 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h8vrx\" (UniqueName: \"kubernetes.io/projected/e38a3466-6fce-418f-9db5-5da12b7fdf2b-kube-api-access-h8vrx\") pod \"nova-cell0-conductor-db-sync-b6qlt\" (UID: \"e38a3466-6fce-418f-9db5-5da12b7fdf2b\") " pod="openstack/nova-cell0-conductor-db-sync-b6qlt" Dec 10 12:37:03 crc kubenswrapper[4689]: I1210 12:37:03.908262 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e38a3466-6fce-418f-9db5-5da12b7fdf2b-config-data\") pod \"nova-cell0-conductor-db-sync-b6qlt\" (UID: \"e38a3466-6fce-418f-9db5-5da12b7fdf2b\") " pod="openstack/nova-cell0-conductor-db-sync-b6qlt" Dec 10 12:37:03 crc kubenswrapper[4689]: I1210 12:37:03.909568 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e38a3466-6fce-418f-9db5-5da12b7fdf2b-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-b6qlt\" (UID: \"e38a3466-6fce-418f-9db5-5da12b7fdf2b\") " pod="openstack/nova-cell0-conductor-db-sync-b6qlt" Dec 10 12:37:03 crc kubenswrapper[4689]: I1210 12:37:03.910086 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e38a3466-6fce-418f-9db5-5da12b7fdf2b-scripts\") pod \"nova-cell0-conductor-db-sync-b6qlt\" (UID: \"e38a3466-6fce-418f-9db5-5da12b7fdf2b\") " pod="openstack/nova-cell0-conductor-db-sync-b6qlt" Dec 10 12:37:03 crc kubenswrapper[4689]: I1210 12:37:03.920342 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h8vrx\" (UniqueName: \"kubernetes.io/projected/e38a3466-6fce-418f-9db5-5da12b7fdf2b-kube-api-access-h8vrx\") pod \"nova-cell0-conductor-db-sync-b6qlt\" (UID: \"e38a3466-6fce-418f-9db5-5da12b7fdf2b\") " 
pod="openstack/nova-cell0-conductor-db-sync-b6qlt" Dec 10 12:37:04 crc kubenswrapper[4689]: I1210 12:37:04.011156 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 10 12:37:04 crc kubenswrapper[4689]: I1210 12:37:04.125549 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-b6qlt" Dec 10 12:37:04 crc kubenswrapper[4689]: I1210 12:37:04.525276 4689 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1e43da96-bc95-4f10-ad40-c4c5c32fac22" path="/var/lib/kubelet/pods/1e43da96-bc95-4f10-ad40-c4c5c32fac22/volumes" Dec 10 12:37:04 crc kubenswrapper[4689]: I1210 12:37:04.529129 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 10 12:37:04 crc kubenswrapper[4689]: I1210 12:37:04.642966 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-b6qlt"] Dec 10 12:37:04 crc kubenswrapper[4689]: W1210 12:37:04.644657 4689 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode38a3466_6fce_418f_9db5_5da12b7fdf2b.slice/crio-79d76bcaeb64762b23efd8067f95e2cf7baa5c13c49128cda1716250e63805f8 WatchSource:0}: Error finding container 79d76bcaeb64762b23efd8067f95e2cf7baa5c13c49128cda1716250e63805f8: Status 404 returned error can't find the container with id 79d76bcaeb64762b23efd8067f95e2cf7baa5c13c49128cda1716250e63805f8 Dec 10 12:37:05 crc kubenswrapper[4689]: I1210 12:37:05.304695 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e4cdde9e-99ce-4f9d-8699-0c1a1cd84d9f","Type":"ContainerStarted","Data":"23aeaf687bc665c4e225b15fc2bd25a5004d5164bfb24efdedf16ce23344b970"} Dec 10 12:37:05 crc kubenswrapper[4689]: I1210 12:37:05.307098 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-b6qlt" event={"ID":"e38a3466-6fce-418f-9db5-5da12b7fdf2b","Type":"ContainerStarted","Data":"79d76bcaeb64762b23efd8067f95e2cf7baa5c13c49128cda1716250e63805f8"} Dec 10 12:37:05 crc kubenswrapper[4689]: I1210 12:37:05.576825 4689 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 10 12:37:06 crc kubenswrapper[4689]: I1210 12:37:06.319612 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e4cdde9e-99ce-4f9d-8699-0c1a1cd84d9f","Type":"ContainerStarted","Data":"3d93864792f60c4460d29bf6d581a81648bfa45f8e5bcf0f0c5dcf612475d11c"} Dec 10 12:37:07 crc kubenswrapper[4689]: I1210 12:37:07.166798 4689 patch_prober.go:28] interesting pod/machine-config-daemon-db6zk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 10 12:37:07 crc kubenswrapper[4689]: I1210 12:37:07.167147 4689 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-db6zk" podUID="a41ebdcd-910f-4669-992d-296e1a92162b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 10 12:37:07 crc kubenswrapper[4689]: I1210 12:37:07.205421 4689 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/ironic-inspector-0" podUID="5a770426-b384-4fc7-acc0-fa42ff536a9b" 
containerName="ironic-inspector-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Dec 10 12:37:07 crc kubenswrapper[4689]: I1210 12:37:07.331489 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e4cdde9e-99ce-4f9d-8699-0c1a1cd84d9f","Type":"ContainerStarted","Data":"ed7eec532d857d7b5ab6b165edf17db6598b1bb0893f029614e7b90217c227ac"} Dec 10 12:37:08 crc kubenswrapper[4689]: I1210 12:37:08.498670 4689 scope.go:117] "RemoveContainer" containerID="0d2bfb42888ef654c19089071408f308b195c617df024cf1bbe1e92e783d80fb" Dec 10 12:37:09 crc kubenswrapper[4689]: I1210 12:37:09.354834 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-0" event={"ID":"5a770426-b384-4fc7-acc0-fa42ff536a9b","Type":"ContainerStarted","Data":"3f27331a7f40ed558f7fd1a2c4cf1d2e0d1a8d9a93f38f374549ecd91de89065"} Dec 10 12:37:12 crc kubenswrapper[4689]: I1210 12:37:12.199536 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ironic-inspector-0" Dec 10 12:37:14 crc kubenswrapper[4689]: I1210 12:37:14.403167 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-b6qlt" event={"ID":"e38a3466-6fce-418f-9db5-5da12b7fdf2b","Type":"ContainerStarted","Data":"31f49edacda83a9151fe34977daae05f77299a944bc9cafa24da31cd6aee7f83"} Dec 10 12:37:14 crc kubenswrapper[4689]: I1210 12:37:14.408651 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e4cdde9e-99ce-4f9d-8699-0c1a1cd84d9f","Type":"ContainerStarted","Data":"415fa72de64abba3b2e25fddfeed48f2e20d39f48fe962c9f5a5d9ba77ab24ae"} Dec 10 12:37:14 crc kubenswrapper[4689]: I1210 12:37:14.429786 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-db-sync-b6qlt" podStartSLOduration=2.777194826 podStartE2EDuration="11.429767246s" podCreationTimestamp="2025-12-10 12:37:03 +0000 UTC" firstStartedPulling="2025-12-10 12:37:04.647050104 +0000 UTC m=+1292.435131242" lastFinishedPulling="2025-12-10 12:37:13.299622524 +0000 UTC m=+1301.087703662" observedRunningTime="2025-12-10 12:37:14.429538901 +0000 UTC m=+1302.217620039" watchObservedRunningTime="2025-12-10 12:37:14.429767246 +0000 UTC m=+1302.217848384" Dec 10 12:37:15 crc kubenswrapper[4689]: I1210 12:37:15.441148 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e4cdde9e-99ce-4f9d-8699-0c1a1cd84d9f","Type":"ContainerStarted","Data":"256f6d4b99b62b9d777833314af7c5e3614c45ffb55cf1bc7e37a33caccc041c"} Dec 10 12:37:15 crc kubenswrapper[4689]: I1210 12:37:15.441504 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Dec 10 12:37:15 crc kubenswrapper[4689]: I1210 12:37:15.441549 4689 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="e4cdde9e-99ce-4f9d-8699-0c1a1cd84d9f" containerName="sg-core" containerID="cri-o://415fa72de64abba3b2e25fddfeed48f2e20d39f48fe962c9f5a5d9ba77ab24ae" gracePeriod=30 Dec 10 12:37:15 crc kubenswrapper[4689]: I1210 12:37:15.441623 4689 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="e4cdde9e-99ce-4f9d-8699-0c1a1cd84d9f" containerName="ceilometer-notification-agent" containerID="cri-o://ed7eec532d857d7b5ab6b165edf17db6598b1bb0893f029614e7b90217c227ac" gracePeriod=30 Dec 10 12:37:15 crc kubenswrapper[4689]: I1210 12:37:15.441713 4689 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="e4cdde9e-99ce-4f9d-8699-0c1a1cd84d9f" containerName="ceilometer-central-agent" containerID="cri-o://3d93864792f60c4460d29bf6d581a81648bfa45f8e5bcf0f0c5dcf612475d11c" gracePeriod=30 Dec 10 12:37:15 crc kubenswrapper[4689]: I1210 12:37:15.441547 4689 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="e4cdde9e-99ce-4f9d-8699-0c1a1cd84d9f" containerName="proxy-httpd" containerID="cri-o://256f6d4b99b62b9d777833314af7c5e3614c45ffb55cf1bc7e37a33caccc041c" gracePeriod=30 Dec 10 12:37:15 crc kubenswrapper[4689]: I1210 12:37:15.476120 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.888783467 podStartE2EDuration="12.476097608s" podCreationTimestamp="2025-12-10 12:37:03 +0000 UTC" firstStartedPulling="2025-12-10 12:37:04.537984789 +0000 UTC m=+1292.326065927" lastFinishedPulling="2025-12-10 12:37:15.12529893 +0000 UTC m=+1302.913380068" observedRunningTime="2025-12-10 12:37:15.47092294 +0000 UTC m=+1303.259004098" watchObservedRunningTime="2025-12-10 12:37:15.476097608 +0000 UTC m=+1303.264178766" Dec 10 12:37:15 crc kubenswrapper[4689]: E1210 12:37:15.956504 4689 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode4cdde9e_99ce_4f9d_8699_0c1a1cd84d9f.slice/crio-3d93864792f60c4460d29bf6d581a81648bfa45f8e5bcf0f0c5dcf612475d11c.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode4cdde9e_99ce_4f9d_8699_0c1a1cd84d9f.slice/crio-conmon-3d93864792f60c4460d29bf6d581a81648bfa45f8e5bcf0f0c5dcf612475d11c.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode4cdde9e_99ce_4f9d_8699_0c1a1cd84d9f.slice/crio-ed7eec532d857d7b5ab6b165edf17db6598b1bb0893f029614e7b90217c227ac.scope\": RecentStats: unable to find data in memory cache]" Dec 10 12:37:16 crc kubenswrapper[4689]: I1210 12:37:16.466684 4689 generic.go:334] "Generic (PLEG): container finished" podID="e4cdde9e-99ce-4f9d-8699-0c1a1cd84d9f" containerID="415fa72de64abba3b2e25fddfeed48f2e20d39f48fe962c9f5a5d9ba77ab24ae" exitCode=2 Dec 10 12:37:16 crc kubenswrapper[4689]: I1210 12:37:16.466720 4689 generic.go:334] "Generic (PLEG): container finished" podID="e4cdde9e-99ce-4f9d-8699-0c1a1cd84d9f" containerID="ed7eec532d857d7b5ab6b165edf17db6598b1bb0893f029614e7b90217c227ac" exitCode=0 Dec 10 12:37:16 crc kubenswrapper[4689]: I1210 12:37:16.466728 4689 generic.go:334] "Generic (PLEG): container finished" podID="e4cdde9e-99ce-4f9d-8699-0c1a1cd84d9f" containerID="3d93864792f60c4460d29bf6d581a81648bfa45f8e5bcf0f0c5dcf612475d11c" exitCode=0 Dec 10 12:37:16 crc kubenswrapper[4689]: I1210 12:37:16.466772 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e4cdde9e-99ce-4f9d-8699-0c1a1cd84d9f","Type":"ContainerDied","Data":"415fa72de64abba3b2e25fddfeed48f2e20d39f48fe962c9f5a5d9ba77ab24ae"} Dec 10 12:37:16 crc kubenswrapper[4689]: I1210 12:37:16.466826 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e4cdde9e-99ce-4f9d-8699-0c1a1cd84d9f","Type":"ContainerDied","Data":"ed7eec532d857d7b5ab6b165edf17db6598b1bb0893f029614e7b90217c227ac"} Dec 10 12:37:16 crc kubenswrapper[4689]: I1210 
12:37:16.466845 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e4cdde9e-99ce-4f9d-8699-0c1a1cd84d9f","Type":"ContainerDied","Data":"3d93864792f60c4460d29bf6d581a81648bfa45f8e5bcf0f0c5dcf612475d11c"} Dec 10 12:37:17 crc kubenswrapper[4689]: I1210 12:37:17.198429 4689 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ironic-inspector-0" Dec 10 12:37:17 crc kubenswrapper[4689]: I1210 12:37:17.239865 4689 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ironic-inspector-0" Dec 10 12:37:17 crc kubenswrapper[4689]: I1210 12:37:17.244109 4689 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ironic-inspector-0" Dec 10 12:37:17 crc kubenswrapper[4689]: I1210 12:37:17.486240 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ironic-inspector-0" Dec 10 12:37:17 crc kubenswrapper[4689]: I1210 12:37:17.490893 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ironic-inspector-0" Dec 10 12:37:23 crc kubenswrapper[4689]: I1210 12:37:23.544780 4689 generic.go:334] "Generic (PLEG): container finished" podID="e38a3466-6fce-418f-9db5-5da12b7fdf2b" containerID="31f49edacda83a9151fe34977daae05f77299a944bc9cafa24da31cd6aee7f83" exitCode=0 Dec 10 12:37:23 crc kubenswrapper[4689]: I1210 12:37:23.544869 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-b6qlt" event={"ID":"e38a3466-6fce-418f-9db5-5da12b7fdf2b","Type":"ContainerDied","Data":"31f49edacda83a9151fe34977daae05f77299a944bc9cafa24da31cd6aee7f83"} Dec 10 12:37:24 crc kubenswrapper[4689]: I1210 12:37:24.951990 4689 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-b6qlt" Dec 10 12:37:25 crc kubenswrapper[4689]: I1210 12:37:25.031445 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e38a3466-6fce-418f-9db5-5da12b7fdf2b-scripts\") pod \"e38a3466-6fce-418f-9db5-5da12b7fdf2b\" (UID: \"e38a3466-6fce-418f-9db5-5da12b7fdf2b\") " Dec 10 12:37:25 crc kubenswrapper[4689]: I1210 12:37:25.031492 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h8vrx\" (UniqueName: \"kubernetes.io/projected/e38a3466-6fce-418f-9db5-5da12b7fdf2b-kube-api-access-h8vrx\") pod \"e38a3466-6fce-418f-9db5-5da12b7fdf2b\" (UID: \"e38a3466-6fce-418f-9db5-5da12b7fdf2b\") " Dec 10 12:37:25 crc kubenswrapper[4689]: I1210 12:37:25.031511 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e38a3466-6fce-418f-9db5-5da12b7fdf2b-config-data\") pod \"e38a3466-6fce-418f-9db5-5da12b7fdf2b\" (UID: \"e38a3466-6fce-418f-9db5-5da12b7fdf2b\") " Dec 10 12:37:25 crc kubenswrapper[4689]: I1210 12:37:25.032627 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e38a3466-6fce-418f-9db5-5da12b7fdf2b-combined-ca-bundle\") pod \"e38a3466-6fce-418f-9db5-5da12b7fdf2b\" (UID: \"e38a3466-6fce-418f-9db5-5da12b7fdf2b\") " Dec 10 12:37:25 crc kubenswrapper[4689]: I1210 12:37:25.037937 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e38a3466-6fce-418f-9db5-5da12b7fdf2b-scripts" (OuterVolumeSpecName: "scripts") pod "e38a3466-6fce-418f-9db5-5da12b7fdf2b" (UID: "e38a3466-6fce-418f-9db5-5da12b7fdf2b"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 12:37:25 crc kubenswrapper[4689]: I1210 12:37:25.038822 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e38a3466-6fce-418f-9db5-5da12b7fdf2b-kube-api-access-h8vrx" (OuterVolumeSpecName: "kube-api-access-h8vrx") pod "e38a3466-6fce-418f-9db5-5da12b7fdf2b" (UID: "e38a3466-6fce-418f-9db5-5da12b7fdf2b"). InnerVolumeSpecName "kube-api-access-h8vrx". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 12:37:25 crc kubenswrapper[4689]: I1210 12:37:25.060899 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e38a3466-6fce-418f-9db5-5da12b7fdf2b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e38a3466-6fce-418f-9db5-5da12b7fdf2b" (UID: "e38a3466-6fce-418f-9db5-5da12b7fdf2b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 12:37:25 crc kubenswrapper[4689]: I1210 12:37:25.068744 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e38a3466-6fce-418f-9db5-5da12b7fdf2b-config-data" (OuterVolumeSpecName: "config-data") pod "e38a3466-6fce-418f-9db5-5da12b7fdf2b" (UID: "e38a3466-6fce-418f-9db5-5da12b7fdf2b"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 12:37:25 crc kubenswrapper[4689]: I1210 12:37:25.134471 4689 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e38a3466-6fce-418f-9db5-5da12b7fdf2b-scripts\") on node \"crc\" DevicePath \"\"" Dec 10 12:37:25 crc kubenswrapper[4689]: I1210 12:37:25.134506 4689 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h8vrx\" (UniqueName: \"kubernetes.io/projected/e38a3466-6fce-418f-9db5-5da12b7fdf2b-kube-api-access-h8vrx\") on node \"crc\" DevicePath \"\"" Dec 10 12:37:25 crc kubenswrapper[4689]: I1210 12:37:25.134517 4689 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e38a3466-6fce-418f-9db5-5da12b7fdf2b-config-data\") on node \"crc\" DevicePath \"\"" Dec 10 12:37:25 crc kubenswrapper[4689]: I1210 12:37:25.134526 4689 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e38a3466-6fce-418f-9db5-5da12b7fdf2b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 10 12:37:25 crc kubenswrapper[4689]: I1210 12:37:25.565001 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-b6qlt" event={"ID":"e38a3466-6fce-418f-9db5-5da12b7fdf2b","Type":"ContainerDied","Data":"79d76bcaeb64762b23efd8067f95e2cf7baa5c13c49128cda1716250e63805f8"} Dec 10 12:37:25 crc kubenswrapper[4689]: I1210 12:37:25.565262 4689 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="79d76bcaeb64762b23efd8067f95e2cf7baa5c13c49128cda1716250e63805f8" Dec 10 12:37:25 crc kubenswrapper[4689]: I1210 12:37:25.565096 4689 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-b6qlt" Dec 10 12:37:25 crc kubenswrapper[4689]: I1210 12:37:25.742037 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"] Dec 10 12:37:25 crc kubenswrapper[4689]: E1210 12:37:25.742549 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e38a3466-6fce-418f-9db5-5da12b7fdf2b" containerName="nova-cell0-conductor-db-sync" Dec 10 12:37:25 crc kubenswrapper[4689]: I1210 12:37:25.742572 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="e38a3466-6fce-418f-9db5-5da12b7fdf2b" containerName="nova-cell0-conductor-db-sync" Dec 10 12:37:25 crc kubenswrapper[4689]: I1210 12:37:25.742809 4689 memory_manager.go:354] "RemoveStaleState removing state" podUID="e38a3466-6fce-418f-9db5-5da12b7fdf2b" containerName="nova-cell0-conductor-db-sync" Dec 10 12:37:25 crc kubenswrapper[4689]: I1210 12:37:25.744298 4689 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-0" Dec 10 12:37:25 crc kubenswrapper[4689]: I1210 12:37:25.746512 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-z569c" Dec 10 12:37:25 crc kubenswrapper[4689]: I1210 12:37:25.746707 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Dec 10 12:37:25 crc kubenswrapper[4689]: I1210 12:37:25.752117 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Dec 10 12:37:25 crc kubenswrapper[4689]: I1210 12:37:25.849460 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3e65d1ea-f92e-4a74-944d-f9fcc1f1ae34-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"3e65d1ea-f92e-4a74-944d-f9fcc1f1ae34\") " pod="openstack/nova-cell0-conductor-0" Dec 10 12:37:25 crc kubenswrapper[4689]: I1210 12:37:25.849582 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mfp8c\" (UniqueName: \"kubernetes.io/projected/3e65d1ea-f92e-4a74-944d-f9fcc1f1ae34-kube-api-access-mfp8c\") pod \"nova-cell0-conductor-0\" (UID: \"3e65d1ea-f92e-4a74-944d-f9fcc1f1ae34\") " pod="openstack/nova-cell0-conductor-0" Dec 10 12:37:25 crc kubenswrapper[4689]: I1210 12:37:25.849617 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3e65d1ea-f92e-4a74-944d-f9fcc1f1ae34-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"3e65d1ea-f92e-4a74-944d-f9fcc1f1ae34\") " pod="openstack/nova-cell0-conductor-0" Dec 10 12:37:25 crc kubenswrapper[4689]: I1210 12:37:25.951671 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mfp8c\" (UniqueName: \"kubernetes.io/projected/3e65d1ea-f92e-4a74-944d-f9fcc1f1ae34-kube-api-access-mfp8c\") pod \"nova-cell0-conductor-0\" (UID: \"3e65d1ea-f92e-4a74-944d-f9fcc1f1ae34\") " pod="openstack/nova-cell0-conductor-0" Dec 10 12:37:25 crc kubenswrapper[4689]: I1210 12:37:25.951725 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3e65d1ea-f92e-4a74-944d-f9fcc1f1ae34-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"3e65d1ea-f92e-4a74-944d-f9fcc1f1ae34\") " pod="openstack/nova-cell0-conductor-0" Dec 10 12:37:25 crc kubenswrapper[4689]: I1210 12:37:25.951848 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3e65d1ea-f92e-4a74-944d-f9fcc1f1ae34-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"3e65d1ea-f92e-4a74-944d-f9fcc1f1ae34\") " pod="openstack/nova-cell0-conductor-0" Dec 10 12:37:25 crc kubenswrapper[4689]: I1210 12:37:25.956491 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3e65d1ea-f92e-4a74-944d-f9fcc1f1ae34-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"3e65d1ea-f92e-4a74-944d-f9fcc1f1ae34\") " pod="openstack/nova-cell0-conductor-0" Dec 10 12:37:25 crc kubenswrapper[4689]: I1210 12:37:25.958788 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3e65d1ea-f92e-4a74-944d-f9fcc1f1ae34-config-data\") pod \"nova-cell0-conductor-0\" 
(UID: \"3e65d1ea-f92e-4a74-944d-f9fcc1f1ae34\") " pod="openstack/nova-cell0-conductor-0" Dec 10 12:37:25 crc kubenswrapper[4689]: I1210 12:37:25.970324 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mfp8c\" (UniqueName: \"kubernetes.io/projected/3e65d1ea-f92e-4a74-944d-f9fcc1f1ae34-kube-api-access-mfp8c\") pod \"nova-cell0-conductor-0\" (UID: \"3e65d1ea-f92e-4a74-944d-f9fcc1f1ae34\") " pod="openstack/nova-cell0-conductor-0" Dec 10 12:37:26 crc kubenswrapper[4689]: I1210 12:37:26.062156 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Dec 10 12:37:26 crc kubenswrapper[4689]: I1210 12:37:26.558895 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Dec 10 12:37:26 crc kubenswrapper[4689]: I1210 12:37:26.577375 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"3e65d1ea-f92e-4a74-944d-f9fcc1f1ae34","Type":"ContainerStarted","Data":"82e3a8dfd137bf79134bfd4d06dcdd13ee20c6cf562515fd045ba701452b6f2b"} Dec 10 12:37:27 crc kubenswrapper[4689]: I1210 12:37:27.586847 4689 generic.go:334] "Generic (PLEG): container finished" podID="23e46f1d-5919-4baa-aeef-1364104b63fb" containerID="aa27da8c4e7a78e649c0e949521a043b34fd5672b0595aacffd2fa4d17540789" exitCode=0 Dec 10 12:37:27 crc kubenswrapper[4689]: I1210 12:37:27.587180 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-conductor-0" event={"ID":"23e46f1d-5919-4baa-aeef-1364104b63fb","Type":"ContainerDied","Data":"aa27da8c4e7a78e649c0e949521a043b34fd5672b0595aacffd2fa4d17540789"} Dec 10 12:37:27 crc kubenswrapper[4689]: I1210 12:37:27.591348 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"3e65d1ea-f92e-4a74-944d-f9fcc1f1ae34","Type":"ContainerStarted","Data":"75dac4a3cd209be11fb755b803b1834f1b364d9a55b82706db42fae5bf372c0e"} Dec 10 12:37:27 crc kubenswrapper[4689]: I1210 12:37:27.591648 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0" Dec 10 12:37:27 crc kubenswrapper[4689]: I1210 12:37:27.643855 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=2.643836437 podStartE2EDuration="2.643836437s" podCreationTimestamp="2025-12-10 12:37:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 12:37:27.63668943 +0000 UTC m=+1315.424770558" watchObservedRunningTime="2025-12-10 12:37:27.643836437 +0000 UTC m=+1315.431917575" Dec 10 12:37:28 crc kubenswrapper[4689]: I1210 12:37:28.603799 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-conductor-0" event={"ID":"23e46f1d-5919-4baa-aeef-1364104b63fb","Type":"ContainerStarted","Data":"3611dacbe7ab2635823db6fb429b407902d445643e7181971d4111b55b7cdf0a"} Dec 10 12:37:28 crc kubenswrapper[4689]: I1210 12:37:28.604357 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-conductor-0" event={"ID":"23e46f1d-5919-4baa-aeef-1364104b63fb","Type":"ContainerStarted","Data":"fe87b5616a11b2ce0964ef9573ce69d0a4d4d3fb372390e666a0ce3a4faac4e9"} Dec 10 12:37:29 crc kubenswrapper[4689]: I1210 12:37:29.617957 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-conductor-0" 
event={"ID":"23e46f1d-5919-4baa-aeef-1364104b63fb","Type":"ContainerStarted","Data":"40272c370acfdb5febcfb93bb3f797bfa86e78060c58a396671a1a02b5932d37"} Dec 10 12:37:29 crc kubenswrapper[4689]: I1210 12:37:29.618271 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ironic-conductor-0" Dec 10 12:37:29 crc kubenswrapper[4689]: I1210 12:37:29.659286 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ironic-conductor-0" podStartSLOduration=60.538991661 podStartE2EDuration="1m49.659265693s" podCreationTimestamp="2025-12-10 12:35:40 +0000 UTC" firstStartedPulling="2025-12-10 12:35:45.452617506 +0000 UTC m=+1213.240698644" lastFinishedPulling="2025-12-10 12:36:34.572891538 +0000 UTC m=+1262.360972676" observedRunningTime="2025-12-10 12:37:29.652433044 +0000 UTC m=+1317.440514182" watchObservedRunningTime="2025-12-10 12:37:29.659265693 +0000 UTC m=+1317.447346831" Dec 10 12:37:29 crc kubenswrapper[4689]: I1210 12:37:29.915075 4689 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ironic-conductor-0" Dec 10 12:37:31 crc kubenswrapper[4689]: I1210 12:37:31.112927 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell0-conductor-0" Dec 10 12:37:31 crc kubenswrapper[4689]: I1210 12:37:31.210325 4689 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ironic-conductor-0" Dec 10 12:37:31 crc kubenswrapper[4689]: I1210 12:37:31.635443 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-cell-mapping-p6g5l"] Dec 10 12:37:31 crc kubenswrapper[4689]: I1210 12:37:31.638210 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-p6g5l" Dec 10 12:37:31 crc kubenswrapper[4689]: I1210 12:37:31.648671 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-p6g5l"] Dec 10 12:37:31 crc kubenswrapper[4689]: I1210 12:37:31.690675 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-scripts" Dec 10 12:37:31 crc kubenswrapper[4689]: I1210 12:37:31.692580 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-config-data" Dec 10 12:37:31 crc kubenswrapper[4689]: I1210 12:37:31.793795 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c20f1556-95fd-4488-bb5b-5dda218a55bf-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-p6g5l\" (UID: \"c20f1556-95fd-4488-bb5b-5dda218a55bf\") " pod="openstack/nova-cell0-cell-mapping-p6g5l" Dec 10 12:37:31 crc kubenswrapper[4689]: I1210 12:37:31.794150 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c20f1556-95fd-4488-bb5b-5dda218a55bf-scripts\") pod \"nova-cell0-cell-mapping-p6g5l\" (UID: \"c20f1556-95fd-4488-bb5b-5dda218a55bf\") " pod="openstack/nova-cell0-cell-mapping-p6g5l" Dec 10 12:37:31 crc kubenswrapper[4689]: I1210 12:37:31.794206 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hz7rm\" (UniqueName: \"kubernetes.io/projected/c20f1556-95fd-4488-bb5b-5dda218a55bf-kube-api-access-hz7rm\") pod \"nova-cell0-cell-mapping-p6g5l\" (UID: \"c20f1556-95fd-4488-bb5b-5dda218a55bf\") " pod="openstack/nova-cell0-cell-mapping-p6g5l" Dec 
10 12:37:31 crc kubenswrapper[4689]: I1210 12:37:31.794222 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c20f1556-95fd-4488-bb5b-5dda218a55bf-config-data\") pod \"nova-cell0-cell-mapping-p6g5l\" (UID: \"c20f1556-95fd-4488-bb5b-5dda218a55bf\") " pod="openstack/nova-cell0-cell-mapping-p6g5l" Dec 10 12:37:31 crc kubenswrapper[4689]: I1210 12:37:31.838373 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Dec 10 12:37:31 crc kubenswrapper[4689]: I1210 12:37:31.840304 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Dec 10 12:37:31 crc kubenswrapper[4689]: I1210 12:37:31.862029 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Dec 10 12:37:31 crc kubenswrapper[4689]: I1210 12:37:31.865765 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Dec 10 12:37:31 crc kubenswrapper[4689]: I1210 12:37:31.897504 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c20f1556-95fd-4488-bb5b-5dda218a55bf-scripts\") pod \"nova-cell0-cell-mapping-p6g5l\" (UID: \"c20f1556-95fd-4488-bb5b-5dda218a55bf\") " pod="openstack/nova-cell0-cell-mapping-p6g5l" Dec 10 12:37:31 crc kubenswrapper[4689]: I1210 12:37:31.897563 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hz7rm\" (UniqueName: \"kubernetes.io/projected/c20f1556-95fd-4488-bb5b-5dda218a55bf-kube-api-access-hz7rm\") pod \"nova-cell0-cell-mapping-p6g5l\" (UID: \"c20f1556-95fd-4488-bb5b-5dda218a55bf\") " pod="openstack/nova-cell0-cell-mapping-p6g5l" Dec 10 12:37:31 crc kubenswrapper[4689]: I1210 12:37:31.897589 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c20f1556-95fd-4488-bb5b-5dda218a55bf-config-data\") pod \"nova-cell0-cell-mapping-p6g5l\" (UID: \"c20f1556-95fd-4488-bb5b-5dda218a55bf\") " pod="openstack/nova-cell0-cell-mapping-p6g5l" Dec 10 12:37:31 crc kubenswrapper[4689]: I1210 12:37:31.897634 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5f2f7795-fa0a-475a-987d-75bd7b505e1a-logs\") pod \"nova-api-0\" (UID: \"5f2f7795-fa0a-475a-987d-75bd7b505e1a\") " pod="openstack/nova-api-0" Dec 10 12:37:31 crc kubenswrapper[4689]: I1210 12:37:31.897651 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5f2f7795-fa0a-475a-987d-75bd7b505e1a-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"5f2f7795-fa0a-475a-987d-75bd7b505e1a\") " pod="openstack/nova-api-0" Dec 10 12:37:31 crc kubenswrapper[4689]: I1210 12:37:31.897677 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2qvzj\" (UniqueName: \"kubernetes.io/projected/5f2f7795-fa0a-475a-987d-75bd7b505e1a-kube-api-access-2qvzj\") pod \"nova-api-0\" (UID: \"5f2f7795-fa0a-475a-987d-75bd7b505e1a\") " pod="openstack/nova-api-0" Dec 10 12:37:31 crc kubenswrapper[4689]: I1210 12:37:31.897740 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/5f2f7795-fa0a-475a-987d-75bd7b505e1a-config-data\") pod \"nova-api-0\" (UID: \"5f2f7795-fa0a-475a-987d-75bd7b505e1a\") " pod="openstack/nova-api-0" Dec 10 12:37:31 crc kubenswrapper[4689]: I1210 12:37:31.897779 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c20f1556-95fd-4488-bb5b-5dda218a55bf-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-p6g5l\" (UID: \"c20f1556-95fd-4488-bb5b-5dda218a55bf\") " pod="openstack/nova-cell0-cell-mapping-p6g5l" Dec 10 12:37:31 crc kubenswrapper[4689]: I1210 12:37:31.932980 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hz7rm\" (UniqueName: \"kubernetes.io/projected/c20f1556-95fd-4488-bb5b-5dda218a55bf-kube-api-access-hz7rm\") pod \"nova-cell0-cell-mapping-p6g5l\" (UID: \"c20f1556-95fd-4488-bb5b-5dda218a55bf\") " pod="openstack/nova-cell0-cell-mapping-p6g5l" Dec 10 12:37:31 crc kubenswrapper[4689]: I1210 12:37:31.935678 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c20f1556-95fd-4488-bb5b-5dda218a55bf-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-p6g5l\" (UID: \"c20f1556-95fd-4488-bb5b-5dda218a55bf\") " pod="openstack/nova-cell0-cell-mapping-p6g5l" Dec 10 12:37:31 crc kubenswrapper[4689]: I1210 12:37:31.941862 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c20f1556-95fd-4488-bb5b-5dda218a55bf-config-data\") pod \"nova-cell0-cell-mapping-p6g5l\" (UID: \"c20f1556-95fd-4488-bb5b-5dda218a55bf\") " pod="openstack/nova-cell0-cell-mapping-p6g5l" Dec 10 12:37:31 crc kubenswrapper[4689]: I1210 12:37:31.963849 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c20f1556-95fd-4488-bb5b-5dda218a55bf-scripts\") pod \"nova-cell0-cell-mapping-p6g5l\" (UID: \"c20f1556-95fd-4488-bb5b-5dda218a55bf\") " pod="openstack/nova-cell0-cell-mapping-p6g5l" Dec 10 12:37:31 crc kubenswrapper[4689]: I1210 12:37:31.968156 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Dec 10 12:37:31 crc kubenswrapper[4689]: I1210 12:37:31.969685 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Dec 10 12:37:31 crc kubenswrapper[4689]: I1210 12:37:31.971799 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Dec 10 12:37:31 crc kubenswrapper[4689]: I1210 12:37:31.979311 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Dec 10 12:37:32 crc kubenswrapper[4689]: I1210 12:37:32.007359 4689 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-p6g5l" Dec 10 12:37:32 crc kubenswrapper[4689]: I1210 12:37:32.013494 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5f2f7795-fa0a-475a-987d-75bd7b505e1a-config-data\") pod \"nova-api-0\" (UID: \"5f2f7795-fa0a-475a-987d-75bd7b505e1a\") " pod="openstack/nova-api-0" Dec 10 12:37:32 crc kubenswrapper[4689]: I1210 12:37:32.013776 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5f2f7795-fa0a-475a-987d-75bd7b505e1a-logs\") pod \"nova-api-0\" (UID: \"5f2f7795-fa0a-475a-987d-75bd7b505e1a\") " pod="openstack/nova-api-0" Dec 10 12:37:32 crc kubenswrapper[4689]: I1210 12:37:32.013801 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5f2f7795-fa0a-475a-987d-75bd7b505e1a-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"5f2f7795-fa0a-475a-987d-75bd7b505e1a\") " pod="openstack/nova-api-0" Dec 10 12:37:32 crc kubenswrapper[4689]: I1210 12:37:32.013866 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2qvzj\" (UniqueName: \"kubernetes.io/projected/5f2f7795-fa0a-475a-987d-75bd7b505e1a-kube-api-access-2qvzj\") pod \"nova-api-0\" (UID: \"5f2f7795-fa0a-475a-987d-75bd7b505e1a\") " pod="openstack/nova-api-0" Dec 10 12:37:32 crc kubenswrapper[4689]: I1210 12:37:32.016856 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5f2f7795-fa0a-475a-987d-75bd7b505e1a-logs\") pod \"nova-api-0\" (UID: \"5f2f7795-fa0a-475a-987d-75bd7b505e1a\") " pod="openstack/nova-api-0" Dec 10 12:37:32 crc kubenswrapper[4689]: I1210 12:37:32.021616 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5f2f7795-fa0a-475a-987d-75bd7b505e1a-config-data\") pod \"nova-api-0\" (UID: \"5f2f7795-fa0a-475a-987d-75bd7b505e1a\") " pod="openstack/nova-api-0" Dec 10 12:37:32 crc kubenswrapper[4689]: I1210 12:37:32.042161 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5f2f7795-fa0a-475a-987d-75bd7b505e1a-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"5f2f7795-fa0a-475a-987d-75bd7b505e1a\") " pod="openstack/nova-api-0" Dec 10 12:37:32 crc kubenswrapper[4689]: I1210 12:37:32.046332 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2qvzj\" (UniqueName: \"kubernetes.io/projected/5f2f7795-fa0a-475a-987d-75bd7b505e1a-kube-api-access-2qvzj\") pod \"nova-api-0\" (UID: \"5f2f7795-fa0a-475a-987d-75bd7b505e1a\") " pod="openstack/nova-api-0" Dec 10 12:37:32 crc kubenswrapper[4689]: I1210 12:37:32.100394 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-845d6d6f59-sfjml"] Dec 10 12:37:32 crc kubenswrapper[4689]: I1210 12:37:32.102955 4689 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-845d6d6f59-sfjml" Dec 10 12:37:32 crc kubenswrapper[4689]: I1210 12:37:32.121411 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b154fa85-946d-4862-a6fe-a76029e72d58-logs\") pod \"nova-metadata-0\" (UID: \"b154fa85-946d-4862-a6fe-a76029e72d58\") " pod="openstack/nova-metadata-0" Dec 10 12:37:32 crc kubenswrapper[4689]: I1210 12:37:32.121778 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b154fa85-946d-4862-a6fe-a76029e72d58-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"b154fa85-946d-4862-a6fe-a76029e72d58\") " pod="openstack/nova-metadata-0" Dec 10 12:37:32 crc kubenswrapper[4689]: I1210 12:37:32.121798 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b154fa85-946d-4862-a6fe-a76029e72d58-config-data\") pod \"nova-metadata-0\" (UID: \"b154fa85-946d-4862-a6fe-a76029e72d58\") " pod="openstack/nova-metadata-0" Dec 10 12:37:32 crc kubenswrapper[4689]: I1210 12:37:32.121812 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dw5jc\" (UniqueName: \"kubernetes.io/projected/b154fa85-946d-4862-a6fe-a76029e72d58-kube-api-access-dw5jc\") pod \"nova-metadata-0\" (UID: \"b154fa85-946d-4862-a6fe-a76029e72d58\") " pod="openstack/nova-metadata-0" Dec 10 12:37:32 crc kubenswrapper[4689]: I1210 12:37:32.133942 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-845d6d6f59-sfjml"] Dec 10 12:37:32 crc kubenswrapper[4689]: I1210 12:37:32.155161 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Dec 10 12:37:32 crc kubenswrapper[4689]: I1210 12:37:32.156287 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Dec 10 12:37:32 crc kubenswrapper[4689]: I1210 12:37:32.160578 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Dec 10 12:37:32 crc kubenswrapper[4689]: I1210 12:37:32.161129 4689 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Dec 10 12:37:32 crc kubenswrapper[4689]: I1210 12:37:32.196073 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Dec 10 12:37:32 crc kubenswrapper[4689]: I1210 12:37:32.223579 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b154fa85-946d-4862-a6fe-a76029e72d58-logs\") pod \"nova-metadata-0\" (UID: \"b154fa85-946d-4862-a6fe-a76029e72d58\") " pod="openstack/nova-metadata-0" Dec 10 12:37:32 crc kubenswrapper[4689]: I1210 12:37:32.223639 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/012adfdf-2b09-4e23-8124-064ad6c6f712-dns-svc\") pod \"dnsmasq-dns-845d6d6f59-sfjml\" (UID: \"012adfdf-2b09-4e23-8124-064ad6c6f712\") " pod="openstack/dnsmasq-dns-845d6d6f59-sfjml" Dec 10 12:37:32 crc kubenswrapper[4689]: I1210 12:37:32.223833 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/012adfdf-2b09-4e23-8124-064ad6c6f712-config\") pod \"dnsmasq-dns-845d6d6f59-sfjml\" (UID: \"012adfdf-2b09-4e23-8124-064ad6c6f712\") " pod="openstack/dnsmasq-dns-845d6d6f59-sfjml" Dec 10 12:37:32 crc kubenswrapper[4689]: I1210 12:37:32.223929 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/012adfdf-2b09-4e23-8124-064ad6c6f712-dns-swift-storage-0\") pod \"dnsmasq-dns-845d6d6f59-sfjml\" (UID: \"012adfdf-2b09-4e23-8124-064ad6c6f712\") " pod="openstack/dnsmasq-dns-845d6d6f59-sfjml" Dec 10 12:37:32 crc kubenswrapper[4689]: I1210 12:37:32.223964 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gqpsg\" (UniqueName: \"kubernetes.io/projected/012adfdf-2b09-4e23-8124-064ad6c6f712-kube-api-access-gqpsg\") pod \"dnsmasq-dns-845d6d6f59-sfjml\" (UID: \"012adfdf-2b09-4e23-8124-064ad6c6f712\") " pod="openstack/dnsmasq-dns-845d6d6f59-sfjml" Dec 10 12:37:32 crc kubenswrapper[4689]: I1210 12:37:32.224071 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/012adfdf-2b09-4e23-8124-064ad6c6f712-ovsdbserver-nb\") pod \"dnsmasq-dns-845d6d6f59-sfjml\" (UID: \"012adfdf-2b09-4e23-8124-064ad6c6f712\") " pod="openstack/dnsmasq-dns-845d6d6f59-sfjml" Dec 10 12:37:32 crc kubenswrapper[4689]: I1210 12:37:32.224088 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b154fa85-946d-4862-a6fe-a76029e72d58-logs\") pod \"nova-metadata-0\" (UID: \"b154fa85-946d-4862-a6fe-a76029e72d58\") " pod="openstack/nova-metadata-0" Dec 10 12:37:32 crc kubenswrapper[4689]: I1210 12:37:32.224097 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/012adfdf-2b09-4e23-8124-064ad6c6f712-ovsdbserver-sb\") pod \"dnsmasq-dns-845d6d6f59-sfjml\" (UID: \"012adfdf-2b09-4e23-8124-064ad6c6f712\") " pod="openstack/dnsmasq-dns-845d6d6f59-sfjml" Dec 10 12:37:32 crc kubenswrapper[4689]: I1210 12:37:32.224183 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/b154fa85-946d-4862-a6fe-a76029e72d58-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"b154fa85-946d-4862-a6fe-a76029e72d58\") " pod="openstack/nova-metadata-0" Dec 10 12:37:32 crc kubenswrapper[4689]: I1210 12:37:32.224225 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dw5jc\" (UniqueName: \"kubernetes.io/projected/b154fa85-946d-4862-a6fe-a76029e72d58-kube-api-access-dw5jc\") pod \"nova-metadata-0\" (UID: \"b154fa85-946d-4862-a6fe-a76029e72d58\") " pod="openstack/nova-metadata-0" Dec 10 12:37:32 crc kubenswrapper[4689]: I1210 12:37:32.224249 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b154fa85-946d-4862-a6fe-a76029e72d58-config-data\") pod \"nova-metadata-0\" (UID: \"b154fa85-946d-4862-a6fe-a76029e72d58\") " pod="openstack/nova-metadata-0" Dec 10 12:37:32 crc kubenswrapper[4689]: I1210 12:37:32.232360 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b154fa85-946d-4862-a6fe-a76029e72d58-config-data\") pod \"nova-metadata-0\" (UID: \"b154fa85-946d-4862-a6fe-a76029e72d58\") " pod="openstack/nova-metadata-0" Dec 10 12:37:32 crc kubenswrapper[4689]: I1210 12:37:32.233332 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b154fa85-946d-4862-a6fe-a76029e72d58-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"b154fa85-946d-4862-a6fe-a76029e72d58\") " pod="openstack/nova-metadata-0" Dec 10 12:37:32 crc kubenswrapper[4689]: I1210 12:37:32.243423 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dw5jc\" (UniqueName: \"kubernetes.io/projected/b154fa85-946d-4862-a6fe-a76029e72d58-kube-api-access-dw5jc\") pod \"nova-metadata-0\" (UID: \"b154fa85-946d-4862-a6fe-a76029e72d58\") " pod="openstack/nova-metadata-0" Dec 10 12:37:32 crc kubenswrapper[4689]: I1210 12:37:32.327076 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/012adfdf-2b09-4e23-8124-064ad6c6f712-dns-swift-storage-0\") pod \"dnsmasq-dns-845d6d6f59-sfjml\" (UID: \"012adfdf-2b09-4e23-8124-064ad6c6f712\") " pod="openstack/dnsmasq-dns-845d6d6f59-sfjml" Dec 10 12:37:32 crc kubenswrapper[4689]: I1210 12:37:32.327393 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gqpsg\" (UniqueName: \"kubernetes.io/projected/012adfdf-2b09-4e23-8124-064ad6c6f712-kube-api-access-gqpsg\") pod \"dnsmasq-dns-845d6d6f59-sfjml\" (UID: \"012adfdf-2b09-4e23-8124-064ad6c6f712\") " pod="openstack/dnsmasq-dns-845d6d6f59-sfjml" Dec 10 12:37:32 crc kubenswrapper[4689]: I1210 12:37:32.327722 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/012adfdf-2b09-4e23-8124-064ad6c6f712-ovsdbserver-nb\") pod \"dnsmasq-dns-845d6d6f59-sfjml\" (UID: \"012adfdf-2b09-4e23-8124-064ad6c6f712\") " pod="openstack/dnsmasq-dns-845d6d6f59-sfjml" Dec 10 12:37:32 crc kubenswrapper[4689]: I1210 12:37:32.327745 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c762e827-42f3-4b9e-ac55-e1d95d357278-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"c762e827-42f3-4b9e-ac55-e1d95d357278\") " 
pod="openstack/nova-scheduler-0" Dec 10 12:37:32 crc kubenswrapper[4689]: I1210 12:37:32.327769 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/012adfdf-2b09-4e23-8124-064ad6c6f712-ovsdbserver-sb\") pod \"dnsmasq-dns-845d6d6f59-sfjml\" (UID: \"012adfdf-2b09-4e23-8124-064ad6c6f712\") " pod="openstack/dnsmasq-dns-845d6d6f59-sfjml" Dec 10 12:37:32 crc kubenswrapper[4689]: I1210 12:37:32.328627 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/012adfdf-2b09-4e23-8124-064ad6c6f712-ovsdbserver-sb\") pod \"dnsmasq-dns-845d6d6f59-sfjml\" (UID: \"012adfdf-2b09-4e23-8124-064ad6c6f712\") " pod="openstack/dnsmasq-dns-845d6d6f59-sfjml" Dec 10 12:37:32 crc kubenswrapper[4689]: I1210 12:37:32.328792 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/012adfdf-2b09-4e23-8124-064ad6c6f712-dns-svc\") pod \"dnsmasq-dns-845d6d6f59-sfjml\" (UID: \"012adfdf-2b09-4e23-8124-064ad6c6f712\") " pod="openstack/dnsmasq-dns-845d6d6f59-sfjml" Dec 10 12:37:32 crc kubenswrapper[4689]: I1210 12:37:32.328836 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-57x7m\" (UniqueName: \"kubernetes.io/projected/c762e827-42f3-4b9e-ac55-e1d95d357278-kube-api-access-57x7m\") pod \"nova-scheduler-0\" (UID: \"c762e827-42f3-4b9e-ac55-e1d95d357278\") " pod="openstack/nova-scheduler-0" Dec 10 12:37:32 crc kubenswrapper[4689]: I1210 12:37:32.328913 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/012adfdf-2b09-4e23-8124-064ad6c6f712-ovsdbserver-nb\") pod \"dnsmasq-dns-845d6d6f59-sfjml\" (UID: \"012adfdf-2b09-4e23-8124-064ad6c6f712\") " pod="openstack/dnsmasq-dns-845d6d6f59-sfjml" Dec 10 12:37:32 crc kubenswrapper[4689]: I1210 12:37:32.328928 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c762e827-42f3-4b9e-ac55-e1d95d357278-config-data\") pod \"nova-scheduler-0\" (UID: \"c762e827-42f3-4b9e-ac55-e1d95d357278\") " pod="openstack/nova-scheduler-0" Dec 10 12:37:32 crc kubenswrapper[4689]: I1210 12:37:32.329016 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/012adfdf-2b09-4e23-8124-064ad6c6f712-config\") pod \"dnsmasq-dns-845d6d6f59-sfjml\" (UID: \"012adfdf-2b09-4e23-8124-064ad6c6f712\") " pod="openstack/dnsmasq-dns-845d6d6f59-sfjml" Dec 10 12:37:32 crc kubenswrapper[4689]: I1210 12:37:32.329597 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/012adfdf-2b09-4e23-8124-064ad6c6f712-dns-svc\") pod \"dnsmasq-dns-845d6d6f59-sfjml\" (UID: \"012adfdf-2b09-4e23-8124-064ad6c6f712\") " pod="openstack/dnsmasq-dns-845d6d6f59-sfjml" Dec 10 12:37:32 crc kubenswrapper[4689]: I1210 12:37:32.329734 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/012adfdf-2b09-4e23-8124-064ad6c6f712-config\") pod \"dnsmasq-dns-845d6d6f59-sfjml\" (UID: \"012adfdf-2b09-4e23-8124-064ad6c6f712\") " pod="openstack/dnsmasq-dns-845d6d6f59-sfjml" Dec 10 12:37:32 crc kubenswrapper[4689]: I1210 12:37:32.330464 4689 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/012adfdf-2b09-4e23-8124-064ad6c6f712-dns-swift-storage-0\") pod \"dnsmasq-dns-845d6d6f59-sfjml\" (UID: \"012adfdf-2b09-4e23-8124-064ad6c6f712\") " pod="openstack/dnsmasq-dns-845d6d6f59-sfjml" Dec 10 12:37:32 crc kubenswrapper[4689]: I1210 12:37:32.360792 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gqpsg\" (UniqueName: \"kubernetes.io/projected/012adfdf-2b09-4e23-8124-064ad6c6f712-kube-api-access-gqpsg\") pod \"dnsmasq-dns-845d6d6f59-sfjml\" (UID: \"012adfdf-2b09-4e23-8124-064ad6c6f712\") " pod="openstack/dnsmasq-dns-845d6d6f59-sfjml" Dec 10 12:37:32 crc kubenswrapper[4689]: I1210 12:37:32.428155 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Dec 10 12:37:32 crc kubenswrapper[4689]: I1210 12:37:32.430262 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Dec 10 12:37:32 crc kubenswrapper[4689]: I1210 12:37:32.432044 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c762e827-42f3-4b9e-ac55-e1d95d357278-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"c762e827-42f3-4b9e-ac55-e1d95d357278\") " pod="openstack/nova-scheduler-0" Dec 10 12:37:32 crc kubenswrapper[4689]: I1210 12:37:32.432176 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-57x7m\" (UniqueName: \"kubernetes.io/projected/c762e827-42f3-4b9e-ac55-e1d95d357278-kube-api-access-57x7m\") pod \"nova-scheduler-0\" (UID: \"c762e827-42f3-4b9e-ac55-e1d95d357278\") " pod="openstack/nova-scheduler-0" Dec 10 12:37:32 crc kubenswrapper[4689]: I1210 12:37:32.432222 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c762e827-42f3-4b9e-ac55-e1d95d357278-config-data\") pod \"nova-scheduler-0\" (UID: \"c762e827-42f3-4b9e-ac55-e1d95d357278\") " pod="openstack/nova-scheduler-0" Dec 10 12:37:32 crc kubenswrapper[4689]: I1210 12:37:32.432763 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Dec 10 12:37:32 crc kubenswrapper[4689]: I1210 12:37:32.440176 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c762e827-42f3-4b9e-ac55-e1d95d357278-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"c762e827-42f3-4b9e-ac55-e1d95d357278\") " pod="openstack/nova-scheduler-0" Dec 10 12:37:32 crc kubenswrapper[4689]: I1210 12:37:32.441141 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Dec 10 12:37:32 crc kubenswrapper[4689]: I1210 12:37:32.442193 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c762e827-42f3-4b9e-ac55-e1d95d357278-config-data\") pod \"nova-scheduler-0\" (UID: \"c762e827-42f3-4b9e-ac55-e1d95d357278\") " pod="openstack/nova-scheduler-0" Dec 10 12:37:32 crc kubenswrapper[4689]: I1210 12:37:32.456061 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-57x7m\" (UniqueName: \"kubernetes.io/projected/c762e827-42f3-4b9e-ac55-e1d95d357278-kube-api-access-57x7m\") pod \"nova-scheduler-0\" (UID: \"c762e827-42f3-4b9e-ac55-e1d95d357278\") " pod="openstack/nova-scheduler-0" Dec 10 12:37:32 crc 
kubenswrapper[4689]: I1210 12:37:32.492510 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Dec 10 12:37:32 crc kubenswrapper[4689]: I1210 12:37:32.511725 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-845d6d6f59-sfjml" Dec 10 12:37:32 crc kubenswrapper[4689]: I1210 12:37:32.524780 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Dec 10 12:37:32 crc kubenswrapper[4689]: I1210 12:37:32.535292 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9fd9caf2-87f6-4732-aa65-32d2515071cc-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"9fd9caf2-87f6-4732-aa65-32d2515071cc\") " pod="openstack/nova-cell1-novncproxy-0" Dec 10 12:37:32 crc kubenswrapper[4689]: I1210 12:37:32.535361 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9fd9caf2-87f6-4732-aa65-32d2515071cc-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"9fd9caf2-87f6-4732-aa65-32d2515071cc\") " pod="openstack/nova-cell1-novncproxy-0" Dec 10 12:37:32 crc kubenswrapper[4689]: I1210 12:37:32.535403 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nxd4k\" (UniqueName: \"kubernetes.io/projected/9fd9caf2-87f6-4732-aa65-32d2515071cc-kube-api-access-nxd4k\") pod \"nova-cell1-novncproxy-0\" (UID: \"9fd9caf2-87f6-4732-aa65-32d2515071cc\") " pod="openstack/nova-cell1-novncproxy-0" Dec 10 12:37:32 crc kubenswrapper[4689]: I1210 12:37:32.640379 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9fd9caf2-87f6-4732-aa65-32d2515071cc-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"9fd9caf2-87f6-4732-aa65-32d2515071cc\") " pod="openstack/nova-cell1-novncproxy-0" Dec 10 12:37:32 crc kubenswrapper[4689]: I1210 12:37:32.640807 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9fd9caf2-87f6-4732-aa65-32d2515071cc-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"9fd9caf2-87f6-4732-aa65-32d2515071cc\") " pod="openstack/nova-cell1-novncproxy-0" Dec 10 12:37:32 crc kubenswrapper[4689]: I1210 12:37:32.640981 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nxd4k\" (UniqueName: \"kubernetes.io/projected/9fd9caf2-87f6-4732-aa65-32d2515071cc-kube-api-access-nxd4k\") pod \"nova-cell1-novncproxy-0\" (UID: \"9fd9caf2-87f6-4732-aa65-32d2515071cc\") " pod="openstack/nova-cell1-novncproxy-0" Dec 10 12:37:32 crc kubenswrapper[4689]: I1210 12:37:32.643615 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Dec 10 12:37:32 crc kubenswrapper[4689]: I1210 12:37:32.644939 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9fd9caf2-87f6-4732-aa65-32d2515071cc-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"9fd9caf2-87f6-4732-aa65-32d2515071cc\") " pod="openstack/nova-cell1-novncproxy-0" Dec 10 12:37:32 crc kubenswrapper[4689]: I1210 12:37:32.654156 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"config-data\" (UniqueName: \"kubernetes.io/secret/9fd9caf2-87f6-4732-aa65-32d2515071cc-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"9fd9caf2-87f6-4732-aa65-32d2515071cc\") " pod="openstack/nova-cell1-novncproxy-0" Dec 10 12:37:32 crc kubenswrapper[4689]: I1210 12:37:32.667088 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nxd4k\" (UniqueName: \"kubernetes.io/projected/9fd9caf2-87f6-4732-aa65-32d2515071cc-kube-api-access-nxd4k\") pod \"nova-cell1-novncproxy-0\" (UID: \"9fd9caf2-87f6-4732-aa65-32d2515071cc\") " pod="openstack/nova-cell1-novncproxy-0" Dec 10 12:37:32 crc kubenswrapper[4689]: I1210 12:37:32.687101 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-db-sync-njfxw"] Dec 10 12:37:32 crc kubenswrapper[4689]: I1210 12:37:32.688752 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-njfxw" Dec 10 12:37:32 crc kubenswrapper[4689]: I1210 12:37:32.691582 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-scripts" Dec 10 12:37:32 crc kubenswrapper[4689]: I1210 12:37:32.691714 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Dec 10 12:37:32 crc kubenswrapper[4689]: I1210 12:37:32.707670 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-njfxw"] Dec 10 12:37:32 crc kubenswrapper[4689]: I1210 12:37:32.718080 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-p6g5l"] Dec 10 12:37:32 crc kubenswrapper[4689]: I1210 12:37:32.770754 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Dec 10 12:37:32 crc kubenswrapper[4689]: I1210 12:37:32.846541 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Dec 10 12:37:32 crc kubenswrapper[4689]: I1210 12:37:32.847583 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/099d539b-5f0e-41e2-b344-d7c10c52cf16-scripts\") pod \"nova-cell1-conductor-db-sync-njfxw\" (UID: \"099d539b-5f0e-41e2-b344-d7c10c52cf16\") " pod="openstack/nova-cell1-conductor-db-sync-njfxw" Dec 10 12:37:32 crc kubenswrapper[4689]: I1210 12:37:32.847790 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/099d539b-5f0e-41e2-b344-d7c10c52cf16-config-data\") pod \"nova-cell1-conductor-db-sync-njfxw\" (UID: \"099d539b-5f0e-41e2-b344-d7c10c52cf16\") " pod="openstack/nova-cell1-conductor-db-sync-njfxw" Dec 10 12:37:32 crc kubenswrapper[4689]: I1210 12:37:32.848158 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/099d539b-5f0e-41e2-b344-d7c10c52cf16-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-njfxw\" (UID: \"099d539b-5f0e-41e2-b344-d7c10c52cf16\") " pod="openstack/nova-cell1-conductor-db-sync-njfxw" Dec 10 12:37:32 crc kubenswrapper[4689]: I1210 12:37:32.848206 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mlzvt\" (UniqueName: \"kubernetes.io/projected/099d539b-5f0e-41e2-b344-d7c10c52cf16-kube-api-access-mlzvt\") pod \"nova-cell1-conductor-db-sync-njfxw\" 
(UID: \"099d539b-5f0e-41e2-b344-d7c10c52cf16\") " pod="openstack/nova-cell1-conductor-db-sync-njfxw" Dec 10 12:37:32 crc kubenswrapper[4689]: I1210 12:37:32.950576 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/099d539b-5f0e-41e2-b344-d7c10c52cf16-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-njfxw\" (UID: \"099d539b-5f0e-41e2-b344-d7c10c52cf16\") " pod="openstack/nova-cell1-conductor-db-sync-njfxw" Dec 10 12:37:32 crc kubenswrapper[4689]: I1210 12:37:32.951223 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mlzvt\" (UniqueName: \"kubernetes.io/projected/099d539b-5f0e-41e2-b344-d7c10c52cf16-kube-api-access-mlzvt\") pod \"nova-cell1-conductor-db-sync-njfxw\" (UID: \"099d539b-5f0e-41e2-b344-d7c10c52cf16\") " pod="openstack/nova-cell1-conductor-db-sync-njfxw" Dec 10 12:37:32 crc kubenswrapper[4689]: I1210 12:37:32.951318 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/099d539b-5f0e-41e2-b344-d7c10c52cf16-scripts\") pod \"nova-cell1-conductor-db-sync-njfxw\" (UID: \"099d539b-5f0e-41e2-b344-d7c10c52cf16\") " pod="openstack/nova-cell1-conductor-db-sync-njfxw" Dec 10 12:37:32 crc kubenswrapper[4689]: I1210 12:37:32.951380 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/099d539b-5f0e-41e2-b344-d7c10c52cf16-config-data\") pod \"nova-cell1-conductor-db-sync-njfxw\" (UID: \"099d539b-5f0e-41e2-b344-d7c10c52cf16\") " pod="openstack/nova-cell1-conductor-db-sync-njfxw" Dec 10 12:37:32 crc kubenswrapper[4689]: I1210 12:37:32.959828 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/099d539b-5f0e-41e2-b344-d7c10c52cf16-config-data\") pod \"nova-cell1-conductor-db-sync-njfxw\" (UID: \"099d539b-5f0e-41e2-b344-d7c10c52cf16\") " pod="openstack/nova-cell1-conductor-db-sync-njfxw" Dec 10 12:37:32 crc kubenswrapper[4689]: I1210 12:37:32.963391 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/099d539b-5f0e-41e2-b344-d7c10c52cf16-scripts\") pod \"nova-cell1-conductor-db-sync-njfxw\" (UID: \"099d539b-5f0e-41e2-b344-d7c10c52cf16\") " pod="openstack/nova-cell1-conductor-db-sync-njfxw" Dec 10 12:37:32 crc kubenswrapper[4689]: I1210 12:37:32.979321 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mlzvt\" (UniqueName: \"kubernetes.io/projected/099d539b-5f0e-41e2-b344-d7c10c52cf16-kube-api-access-mlzvt\") pod \"nova-cell1-conductor-db-sync-njfxw\" (UID: \"099d539b-5f0e-41e2-b344-d7c10c52cf16\") " pod="openstack/nova-cell1-conductor-db-sync-njfxw" Dec 10 12:37:32 crc kubenswrapper[4689]: I1210 12:37:32.985214 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/099d539b-5f0e-41e2-b344-d7c10c52cf16-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-njfxw\" (UID: \"099d539b-5f0e-41e2-b344-d7c10c52cf16\") " pod="openstack/nova-cell1-conductor-db-sync-njfxw" Dec 10 12:37:33 crc kubenswrapper[4689]: I1210 12:37:33.008894 4689 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-njfxw" Dec 10 12:37:33 crc kubenswrapper[4689]: I1210 12:37:33.070588 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Dec 10 12:37:33 crc kubenswrapper[4689]: W1210 12:37:33.220936 4689 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc762e827_42f3_4b9e_ac55_e1d95d357278.slice/crio-ae4c4ebc5888814f2aec5b17b60c4038d0cc72bad7291652cfb5ca58850ba3a1 WatchSource:0}: Error finding container ae4c4ebc5888814f2aec5b17b60c4038d0cc72bad7291652cfb5ca58850ba3a1: Status 404 returned error can't find the container with id ae4c4ebc5888814f2aec5b17b60c4038d0cc72bad7291652cfb5ca58850ba3a1 Dec 10 12:37:33 crc kubenswrapper[4689]: I1210 12:37:33.225340 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Dec 10 12:37:33 crc kubenswrapper[4689]: I1210 12:37:33.259564 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-845d6d6f59-sfjml"] Dec 10 12:37:33 crc kubenswrapper[4689]: I1210 12:37:33.415316 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Dec 10 12:37:33 crc kubenswrapper[4689]: W1210 12:37:33.419550 4689 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9fd9caf2_87f6_4732_aa65_32d2515071cc.slice/crio-bb265df8fa95c90ee36291e3f6e42584ed024d1b271726ddf3afc2879d78468d WatchSource:0}: Error finding container bb265df8fa95c90ee36291e3f6e42584ed024d1b271726ddf3afc2879d78468d: Status 404 returned error can't find the container with id bb265df8fa95c90ee36291e3f6e42584ed024d1b271726ddf3afc2879d78468d Dec 10 12:37:33 crc kubenswrapper[4689]: I1210 12:37:33.582106 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-njfxw"] Dec 10 12:37:33 crc kubenswrapper[4689]: W1210 12:37:33.584638 4689 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod099d539b_5f0e_41e2_b344_d7c10c52cf16.slice/crio-640cc5f8a9047f9fcd7c599c9fa2f7810406e5cc9ec5403498c77246d2a769e2 WatchSource:0}: Error finding container 640cc5f8a9047f9fcd7c599c9fa2f7810406e5cc9ec5403498c77246d2a769e2: Status 404 returned error can't find the container with id 640cc5f8a9047f9fcd7c599c9fa2f7810406e5cc9ec5403498c77246d2a769e2 Dec 10 12:37:33 crc kubenswrapper[4689]: I1210 12:37:33.672947 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"5f2f7795-fa0a-475a-987d-75bd7b505e1a","Type":"ContainerStarted","Data":"6fd8adac71de778762d0502e9ebf0e0407a759fc028ee13baa8696f4f0d45ce8"} Dec 10 12:37:33 crc kubenswrapper[4689]: I1210 12:37:33.675613 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-njfxw" event={"ID":"099d539b-5f0e-41e2-b344-d7c10c52cf16","Type":"ContainerStarted","Data":"640cc5f8a9047f9fcd7c599c9fa2f7810406e5cc9ec5403498c77246d2a769e2"} Dec 10 12:37:33 crc kubenswrapper[4689]: I1210 12:37:33.680830 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-p6g5l" event={"ID":"c20f1556-95fd-4488-bb5b-5dda218a55bf","Type":"ContainerStarted","Data":"53c4522104a48e2b0bfb326bb109e251b4f74d748f88b1a36811480dc9728aba"} Dec 10 12:37:33 crc kubenswrapper[4689]: I1210 12:37:33.680872 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/nova-cell0-cell-mapping-p6g5l" event={"ID":"c20f1556-95fd-4488-bb5b-5dda218a55bf","Type":"ContainerStarted","Data":"67dd08490a49b8c411c45e364e6737e4c58d5a57064257052095aa3f95ab1cac"} Dec 10 12:37:33 crc kubenswrapper[4689]: I1210 12:37:33.684442 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"b154fa85-946d-4862-a6fe-a76029e72d58","Type":"ContainerStarted","Data":"0d81b361eba8647884eba248778b94f219a248c5cf68728031f5e5183e19ac83"} Dec 10 12:37:33 crc kubenswrapper[4689]: I1210 12:37:33.690601 4689 generic.go:334] "Generic (PLEG): container finished" podID="012adfdf-2b09-4e23-8124-064ad6c6f712" containerID="af1a8366266b178983ec8c98213bf6f85098adbe1a2a38e3fd52dc517b526724" exitCode=0 Dec 10 12:37:33 crc kubenswrapper[4689]: I1210 12:37:33.690645 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-845d6d6f59-sfjml" event={"ID":"012adfdf-2b09-4e23-8124-064ad6c6f712","Type":"ContainerDied","Data":"af1a8366266b178983ec8c98213bf6f85098adbe1a2a38e3fd52dc517b526724"} Dec 10 12:37:33 crc kubenswrapper[4689]: I1210 12:37:33.691018 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-845d6d6f59-sfjml" event={"ID":"012adfdf-2b09-4e23-8124-064ad6c6f712","Type":"ContainerStarted","Data":"57cbc83a5735c4e25f99f8c3844a652fd43b4179c83e9c8176f3d67cf644498c"} Dec 10 12:37:33 crc kubenswrapper[4689]: I1210 12:37:33.693851 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"9fd9caf2-87f6-4732-aa65-32d2515071cc","Type":"ContainerStarted","Data":"bb265df8fa95c90ee36291e3f6e42584ed024d1b271726ddf3afc2879d78468d"} Dec 10 12:37:33 crc kubenswrapper[4689]: I1210 12:37:33.699173 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"c762e827-42f3-4b9e-ac55-e1d95d357278","Type":"ContainerStarted","Data":"ae4c4ebc5888814f2aec5b17b60c4038d0cc72bad7291652cfb5ca58850ba3a1"} Dec 10 12:37:33 crc kubenswrapper[4689]: I1210 12:37:33.700765 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-cell-mapping-p6g5l" podStartSLOduration=2.700743615 podStartE2EDuration="2.700743615s" podCreationTimestamp="2025-12-10 12:37:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 12:37:33.693512647 +0000 UTC m=+1321.481593785" watchObservedRunningTime="2025-12-10 12:37:33.700743615 +0000 UTC m=+1321.488824753" Dec 10 12:37:34 crc kubenswrapper[4689]: I1210 12:37:34.015553 4689 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="e4cdde9e-99ce-4f9d-8699-0c1a1cd84d9f" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Dec 10 12:37:34 crc kubenswrapper[4689]: I1210 12:37:34.711258 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-njfxw" event={"ID":"099d539b-5f0e-41e2-b344-d7c10c52cf16","Type":"ContainerStarted","Data":"fcfb339e401c5ebefdaa0ff873d1eb3d984c1f97a58c80b5051aa52851379d98"} Dec 10 12:37:34 crc kubenswrapper[4689]: I1210 12:37:34.715936 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-845d6d6f59-sfjml" event={"ID":"012adfdf-2b09-4e23-8124-064ad6c6f712","Type":"ContainerStarted","Data":"c7efc46bb352d890990352c44e0e73948ca465b242dea0b07686f1ad6edee81a"} Dec 10 12:37:34 crc kubenswrapper[4689]: I1210 
12:37:34.716103 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-845d6d6f59-sfjml"
Dec 10 12:37:34 crc kubenswrapper[4689]: I1210 12:37:34.727063 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-db-sync-njfxw" podStartSLOduration=2.727049622 podStartE2EDuration="2.727049622s" podCreationTimestamp="2025-12-10 12:37:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 12:37:34.726372346 +0000 UTC m=+1322.514453484" watchObservedRunningTime="2025-12-10 12:37:34.727049622 +0000 UTC m=+1322.515130760"
Dec 10 12:37:34 crc kubenswrapper[4689]: I1210 12:37:34.743621 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-845d6d6f59-sfjml" podStartSLOduration=3.7436058819999998 podStartE2EDuration="3.743605882s" podCreationTimestamp="2025-12-10 12:37:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 12:37:34.742471183 +0000 UTC m=+1322.530552321" watchObservedRunningTime="2025-12-10 12:37:34.743605882 +0000 UTC m=+1322.531687020"
Dec 10 12:37:35 crc kubenswrapper[4689]: I1210 12:37:35.979120 4689 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"]
Dec 10 12:37:36 crc kubenswrapper[4689]: I1210 12:37:36.018188 4689 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Dec 10 12:37:37 crc kubenswrapper[4689]: I1210 12:37:37.167223 4689 patch_prober.go:28] interesting pod/machine-config-daemon-db6zk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 10 12:37:37 crc kubenswrapper[4689]: I1210 12:37:37.167598 4689 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-db6zk" podUID="a41ebdcd-910f-4669-992d-296e1a92162b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 10 12:37:39 crc kubenswrapper[4689]: I1210 12:37:39.768324 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"c762e827-42f3-4b9e-ac55-e1d95d357278","Type":"ContainerStarted","Data":"455d7b101740e4c69059cf46cbbea57d58d6bd0bf59b3b4f6f147a3fa22e20d4"}
Dec 10 12:37:39 crc kubenswrapper[4689]: I1210 12:37:39.777482 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"5f2f7795-fa0a-475a-987d-75bd7b505e1a","Type":"ContainerStarted","Data":"cf2db1fd406acedad348a136d8a1eb4ce593fab381e6c4fe5fa04a044256c4d0"}
Dec 10 12:37:39 crc kubenswrapper[4689]: I1210 12:37:39.777522 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"5f2f7795-fa0a-475a-987d-75bd7b505e1a","Type":"ContainerStarted","Data":"57373cd126f446f94568e7e4bfa44564163fa9799f7197eb1a500aee6dbe91b9"}
Dec 10 12:37:39 crc kubenswrapper[4689]: I1210 12:37:39.780698 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"b154fa85-946d-4862-a6fe-a76029e72d58","Type":"ContainerStarted","Data":"ea13418d5d5f72713456f0db9db97d66a4372180c2d5e8f590a548914a4f792a"}
Dec 10 12:37:39 crc kubenswrapper[4689]: I1210 12:37:39.780750 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"b154fa85-946d-4862-a6fe-a76029e72d58","Type":"ContainerStarted","Data":"dd5e83fae2177fb191d54a794535b99c732f3eb9124987b0ee5812237b7e6f04"}
Dec 10 12:37:39 crc kubenswrapper[4689]: I1210 12:37:39.780897 4689 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="b154fa85-946d-4862-a6fe-a76029e72d58" containerName="nova-metadata-log" containerID="cri-o://dd5e83fae2177fb191d54a794535b99c732f3eb9124987b0ee5812237b7e6f04" gracePeriod=30
Dec 10 12:37:39 crc kubenswrapper[4689]: I1210 12:37:39.791633 4689 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="b154fa85-946d-4862-a6fe-a76029e72d58" containerName="nova-metadata-metadata" containerID="cri-o://ea13418d5d5f72713456f0db9db97d66a4372180c2d5e8f590a548914a4f792a" gracePeriod=30
Dec 10 12:37:39 crc kubenswrapper[4689]: I1210 12:37:39.798402 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"9fd9caf2-87f6-4732-aa65-32d2515071cc","Type":"ContainerStarted","Data":"aa7c015778ba76a63c34de3212fc4df0c0bb586b4ba0f8643d420f70cfed0594"}
Dec 10 12:37:39 crc kubenswrapper[4689]: I1210 12:37:39.798499 4689 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="9fd9caf2-87f6-4732-aa65-32d2515071cc" containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://aa7c015778ba76a63c34de3212fc4df0c0bb586b4ba0f8643d420f70cfed0594" gracePeriod=30
Dec 10 12:37:39 crc kubenswrapper[4689]: I1210 12:37:39.803356 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.48569868 podStartE2EDuration="7.803340184s" podCreationTimestamp="2025-12-10 12:37:32 +0000 UTC" firstStartedPulling="2025-12-10 12:37:33.23022369 +0000 UTC m=+1321.018304828" lastFinishedPulling="2025-12-10 12:37:38.547865194 +0000 UTC m=+1326.335946332" observedRunningTime="2025-12-10 12:37:39.793606863 +0000 UTC m=+1327.581688001" watchObservedRunningTime="2025-12-10 12:37:39.803340184 +0000 UTC m=+1327.591421342"
Dec 10 12:37:39 crc kubenswrapper[4689]: I1210 12:37:39.834990 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=3.406963243 podStartE2EDuration="8.834942444s" podCreationTimestamp="2025-12-10 12:37:31 +0000 UTC" firstStartedPulling="2025-12-10 12:37:33.090472707 +0000 UTC m=+1320.878553845" lastFinishedPulling="2025-12-10 12:37:38.518451908 +0000 UTC m=+1326.306533046" observedRunningTime="2025-12-10 12:37:39.821137024 +0000 UTC m=+1327.609218222" watchObservedRunningTime="2025-12-10 12:37:39.834942444 +0000 UTC m=+1327.623023592"
Dec 10 12:37:39 crc kubenswrapper[4689]: I1210 12:37:39.852537 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=3.192581778 podStartE2EDuration="8.852514249s" podCreationTimestamp="2025-12-10 12:37:31 +0000 UTC" firstStartedPulling="2025-12-10 12:37:32.85864737 +0000 UTC m=+1320.646728508" lastFinishedPulling="2025-12-10 12:37:38.518579841 +0000 UTC m=+1326.306660979" observedRunningTime="2025-12-10 12:37:39.841374173 +0000 UTC m=+1327.629455341" watchObservedRunningTime="2025-12-10 12:37:39.852514249 +0000 UTC m=+1327.640595407"
Dec 10 12:37:39 crc kubenswrapper[4689]: I1210 12:37:39.859385 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.7585238309999998 podStartE2EDuration="7.859369318s" podCreationTimestamp="2025-12-10 12:37:32 +0000 UTC" firstStartedPulling="2025-12-10 12:37:33.421503547 +0000 UTC m=+1321.209584685" lastFinishedPulling="2025-12-10 12:37:38.522349014 +0000 UTC m=+1326.310430172" observedRunningTime="2025-12-10 12:37:39.856850396 +0000 UTC m=+1327.644931534" watchObservedRunningTime="2025-12-10 12:37:39.859369318 +0000 UTC m=+1327.647450466"
Dec 10 12:37:40 crc kubenswrapper[4689]: I1210 12:37:40.451375 4689 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Dec 10 12:37:40 crc kubenswrapper[4689]: I1210 12:37:40.536832 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b154fa85-946d-4862-a6fe-a76029e72d58-config-data\") pod \"b154fa85-946d-4862-a6fe-a76029e72d58\" (UID: \"b154fa85-946d-4862-a6fe-a76029e72d58\") "
Dec 10 12:37:40 crc kubenswrapper[4689]: I1210 12:37:40.537900 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b154fa85-946d-4862-a6fe-a76029e72d58-logs\") pod \"b154fa85-946d-4862-a6fe-a76029e72d58\" (UID: \"b154fa85-946d-4862-a6fe-a76029e72d58\") "
Dec 10 12:37:40 crc kubenswrapper[4689]: I1210 12:37:40.538417 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b154fa85-946d-4862-a6fe-a76029e72d58-combined-ca-bundle\") pod \"b154fa85-946d-4862-a6fe-a76029e72d58\" (UID: \"b154fa85-946d-4862-a6fe-a76029e72d58\") "
Dec 10 12:37:40 crc kubenswrapper[4689]: I1210 12:37:40.538717 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dw5jc\" (UniqueName: \"kubernetes.io/projected/b154fa85-946d-4862-a6fe-a76029e72d58-kube-api-access-dw5jc\") pod \"b154fa85-946d-4862-a6fe-a76029e72d58\" (UID: \"b154fa85-946d-4862-a6fe-a76029e72d58\") "
Dec 10 12:37:40 crc kubenswrapper[4689]: I1210 12:37:40.538539 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b154fa85-946d-4862-a6fe-a76029e72d58-logs" (OuterVolumeSpecName: "logs") pod "b154fa85-946d-4862-a6fe-a76029e72d58" (UID: "b154fa85-946d-4862-a6fe-a76029e72d58"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 10 12:37:40 crc kubenswrapper[4689]: I1210 12:37:40.539760 4689 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b154fa85-946d-4862-a6fe-a76029e72d58-logs\") on node \"crc\" DevicePath \"\""
Dec 10 12:37:40 crc kubenswrapper[4689]: I1210 12:37:40.545992 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b154fa85-946d-4862-a6fe-a76029e72d58-kube-api-access-dw5jc" (OuterVolumeSpecName: "kube-api-access-dw5jc") pod "b154fa85-946d-4862-a6fe-a76029e72d58" (UID: "b154fa85-946d-4862-a6fe-a76029e72d58"). InnerVolumeSpecName "kube-api-access-dw5jc". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 10 12:37:40 crc kubenswrapper[4689]: I1210 12:37:40.565714 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b154fa85-946d-4862-a6fe-a76029e72d58-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b154fa85-946d-4862-a6fe-a76029e72d58" (UID: "b154fa85-946d-4862-a6fe-a76029e72d58"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 10 12:37:40 crc kubenswrapper[4689]: I1210 12:37:40.572119 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b154fa85-946d-4862-a6fe-a76029e72d58-config-data" (OuterVolumeSpecName: "config-data") pod "b154fa85-946d-4862-a6fe-a76029e72d58" (UID: "b154fa85-946d-4862-a6fe-a76029e72d58"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 10 12:37:40 crc kubenswrapper[4689]: I1210 12:37:40.641624 4689 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b154fa85-946d-4862-a6fe-a76029e72d58-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Dec 10 12:37:40 crc kubenswrapper[4689]: I1210 12:37:40.641659 4689 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dw5jc\" (UniqueName: \"kubernetes.io/projected/b154fa85-946d-4862-a6fe-a76029e72d58-kube-api-access-dw5jc\") on node \"crc\" DevicePath \"\""
Dec 10 12:37:40 crc kubenswrapper[4689]: I1210 12:37:40.641672 4689 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b154fa85-946d-4862-a6fe-a76029e72d58-config-data\") on node \"crc\" DevicePath \"\""
Dec 10 12:37:40 crc kubenswrapper[4689]: I1210 12:37:40.807947 4689 generic.go:334] "Generic (PLEG): container finished" podID="c20f1556-95fd-4488-bb5b-5dda218a55bf" containerID="53c4522104a48e2b0bfb326bb109e251b4f74d748f88b1a36811480dc9728aba" exitCode=0
Dec 10 12:37:40 crc kubenswrapper[4689]: I1210 12:37:40.808033 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-p6g5l" event={"ID":"c20f1556-95fd-4488-bb5b-5dda218a55bf","Type":"ContainerDied","Data":"53c4522104a48e2b0bfb326bb109e251b4f74d748f88b1a36811480dc9728aba"}
Dec 10 12:37:40 crc kubenswrapper[4689]: I1210 12:37:40.811021 4689 generic.go:334] "Generic (PLEG): container finished" podID="b154fa85-946d-4862-a6fe-a76029e72d58" containerID="ea13418d5d5f72713456f0db9db97d66a4372180c2d5e8f590a548914a4f792a" exitCode=0
Dec 10 12:37:40 crc kubenswrapper[4689]: I1210 12:37:40.811043 4689 generic.go:334] "Generic (PLEG): container finished" podID="b154fa85-946d-4862-a6fe-a76029e72d58" containerID="dd5e83fae2177fb191d54a794535b99c732f3eb9124987b0ee5812237b7e6f04" exitCode=143
Dec 10 12:37:40 crc kubenswrapper[4689]: I1210 12:37:40.812533 4689 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Dec 10 12:37:40 crc kubenswrapper[4689]: I1210 12:37:40.814425 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"b154fa85-946d-4862-a6fe-a76029e72d58","Type":"ContainerDied","Data":"ea13418d5d5f72713456f0db9db97d66a4372180c2d5e8f590a548914a4f792a"}
Dec 10 12:37:40 crc kubenswrapper[4689]: I1210 12:37:40.814462 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"b154fa85-946d-4862-a6fe-a76029e72d58","Type":"ContainerDied","Data":"dd5e83fae2177fb191d54a794535b99c732f3eb9124987b0ee5812237b7e6f04"}
Dec 10 12:37:40 crc kubenswrapper[4689]: I1210 12:37:40.814474 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"b154fa85-946d-4862-a6fe-a76029e72d58","Type":"ContainerDied","Data":"0d81b361eba8647884eba248778b94f219a248c5cf68728031f5e5183e19ac83"}
Dec 10 12:37:40 crc kubenswrapper[4689]: I1210 12:37:40.814488 4689 scope.go:117] "RemoveContainer" containerID="ea13418d5d5f72713456f0db9db97d66a4372180c2d5e8f590a548914a4f792a"
Dec 10 12:37:40 crc kubenswrapper[4689]: I1210 12:37:40.844075 4689 scope.go:117] "RemoveContainer" containerID="dd5e83fae2177fb191d54a794535b99c732f3eb9124987b0ee5812237b7e6f04"
Dec 10 12:37:40 crc kubenswrapper[4689]: I1210 12:37:40.886447 4689 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"]
Dec 10 12:37:40 crc kubenswrapper[4689]: I1210 12:37:40.886523 4689 scope.go:117] "RemoveContainer" containerID="ea13418d5d5f72713456f0db9db97d66a4372180c2d5e8f590a548914a4f792a"
Dec 10 12:37:40 crc kubenswrapper[4689]: E1210 12:37:40.887057 4689 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ea13418d5d5f72713456f0db9db97d66a4372180c2d5e8f590a548914a4f792a\": container with ID starting with ea13418d5d5f72713456f0db9db97d66a4372180c2d5e8f590a548914a4f792a not found: ID does not exist" containerID="ea13418d5d5f72713456f0db9db97d66a4372180c2d5e8f590a548914a4f792a"
Dec 10 12:37:40 crc kubenswrapper[4689]: I1210 12:37:40.887098 4689 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ea13418d5d5f72713456f0db9db97d66a4372180c2d5e8f590a548914a4f792a"} err="failed to get container status \"ea13418d5d5f72713456f0db9db97d66a4372180c2d5e8f590a548914a4f792a\": rpc error: code = NotFound desc = could not find container \"ea13418d5d5f72713456f0db9db97d66a4372180c2d5e8f590a548914a4f792a\": container with ID starting with ea13418d5d5f72713456f0db9db97d66a4372180c2d5e8f590a548914a4f792a not found: ID does not exist"
Dec 10 12:37:40 crc kubenswrapper[4689]: I1210 12:37:40.887129 4689 scope.go:117] "RemoveContainer" containerID="dd5e83fae2177fb191d54a794535b99c732f3eb9124987b0ee5812237b7e6f04"
Dec 10 12:37:40 crc kubenswrapper[4689]: E1210 12:37:40.887692 4689 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dd5e83fae2177fb191d54a794535b99c732f3eb9124987b0ee5812237b7e6f04\": container with ID starting with dd5e83fae2177fb191d54a794535b99c732f3eb9124987b0ee5812237b7e6f04 not found: ID does not exist" containerID="dd5e83fae2177fb191d54a794535b99c732f3eb9124987b0ee5812237b7e6f04"
Dec 10 12:37:40 crc kubenswrapper[4689]: I1210 12:37:40.887723 4689 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dd5e83fae2177fb191d54a794535b99c732f3eb9124987b0ee5812237b7e6f04"} err="failed to get container status \"dd5e83fae2177fb191d54a794535b99c732f3eb9124987b0ee5812237b7e6f04\": rpc error: code = NotFound desc = could not find container \"dd5e83fae2177fb191d54a794535b99c732f3eb9124987b0ee5812237b7e6f04\": container with ID starting with dd5e83fae2177fb191d54a794535b99c732f3eb9124987b0ee5812237b7e6f04 not found: ID does not exist"
Dec 10 12:37:40 crc kubenswrapper[4689]: I1210 12:37:40.887738 4689 scope.go:117] "RemoveContainer" containerID="ea13418d5d5f72713456f0db9db97d66a4372180c2d5e8f590a548914a4f792a"
Dec 10 12:37:40 crc kubenswrapper[4689]: I1210 12:37:40.888272 4689 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ea13418d5d5f72713456f0db9db97d66a4372180c2d5e8f590a548914a4f792a"} err="failed to get container status \"ea13418d5d5f72713456f0db9db97d66a4372180c2d5e8f590a548914a4f792a\": rpc error: code = NotFound desc = could not find container \"ea13418d5d5f72713456f0db9db97d66a4372180c2d5e8f590a548914a4f792a\": container with ID starting with ea13418d5d5f72713456f0db9db97d66a4372180c2d5e8f590a548914a4f792a not found: ID does not exist"
Dec 10 12:37:40 crc kubenswrapper[4689]: I1210 12:37:40.888348 4689 scope.go:117] "RemoveContainer" containerID="dd5e83fae2177fb191d54a794535b99c732f3eb9124987b0ee5812237b7e6f04"
Dec 10 12:37:40 crc kubenswrapper[4689]: I1210 12:37:40.888706 4689 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dd5e83fae2177fb191d54a794535b99c732f3eb9124987b0ee5812237b7e6f04"} err="failed to get container status \"dd5e83fae2177fb191d54a794535b99c732f3eb9124987b0ee5812237b7e6f04\": rpc error: code = NotFound desc = could not find container \"dd5e83fae2177fb191d54a794535b99c732f3eb9124987b0ee5812237b7e6f04\": container with ID starting with dd5e83fae2177fb191d54a794535b99c732f3eb9124987b0ee5812237b7e6f04 not found: ID does not exist"
Dec 10 12:37:40 crc kubenswrapper[4689]: I1210 12:37:40.896761 4689 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"]
Dec 10 12:37:40 crc kubenswrapper[4689]: I1210 12:37:40.908552 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"]
Dec 10 12:37:40 crc kubenswrapper[4689]: E1210 12:37:40.910116 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b154fa85-946d-4862-a6fe-a76029e72d58" containerName="nova-metadata-metadata"
Dec 10 12:37:40 crc kubenswrapper[4689]: I1210 12:37:40.910143 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="b154fa85-946d-4862-a6fe-a76029e72d58" containerName="nova-metadata-metadata"
Dec 10 12:37:40 crc kubenswrapper[4689]: E1210 12:37:40.910168 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b154fa85-946d-4862-a6fe-a76029e72d58" containerName="nova-metadata-log"
Dec 10 12:37:40 crc kubenswrapper[4689]: I1210 12:37:40.910175 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="b154fa85-946d-4862-a6fe-a76029e72d58" containerName="nova-metadata-log"
Dec 10 12:37:40 crc kubenswrapper[4689]: I1210 12:37:40.910478 4689 memory_manager.go:354] "RemoveStaleState removing state" podUID="b154fa85-946d-4862-a6fe-a76029e72d58" containerName="nova-metadata-metadata"
Dec 10 12:37:40 crc kubenswrapper[4689]: I1210 12:37:40.910509 4689 memory_manager.go:354] "RemoveStaleState removing state" podUID="b154fa85-946d-4862-a6fe-a76029e72d58" containerName="nova-metadata-log"
Dec 10 12:37:40 crc kubenswrapper[4689]: I1210 12:37:40.911955 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Dec 10 12:37:40 crc kubenswrapper[4689]: I1210 12:37:40.916334 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data"
Dec 10 12:37:40 crc kubenswrapper[4689]: I1210 12:37:40.916491 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc"
Dec 10 12:37:40 crc kubenswrapper[4689]: I1210 12:37:40.920085 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"]
Dec 10 12:37:41 crc kubenswrapper[4689]: I1210 12:37:41.049099 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fhs9v\" (UniqueName: \"kubernetes.io/projected/0746befd-89ee-4e07-84b6-cd21c72c4ce0-kube-api-access-fhs9v\") pod \"nova-metadata-0\" (UID: \"0746befd-89ee-4e07-84b6-cd21c72c4ce0\") " pod="openstack/nova-metadata-0"
Dec 10 12:37:41 crc kubenswrapper[4689]: I1210 12:37:41.049148 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0746befd-89ee-4e07-84b6-cd21c72c4ce0-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"0746befd-89ee-4e07-84b6-cd21c72c4ce0\") " pod="openstack/nova-metadata-0"
Dec 10 12:37:41 crc kubenswrapper[4689]: I1210 12:37:41.049190 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0746befd-89ee-4e07-84b6-cd21c72c4ce0-config-data\") pod \"nova-metadata-0\" (UID: \"0746befd-89ee-4e07-84b6-cd21c72c4ce0\") " pod="openstack/nova-metadata-0"
Dec 10 12:37:41 crc kubenswrapper[4689]: I1210 12:37:41.049642 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0746befd-89ee-4e07-84b6-cd21c72c4ce0-logs\") pod \"nova-metadata-0\" (UID: \"0746befd-89ee-4e07-84b6-cd21c72c4ce0\") " pod="openstack/nova-metadata-0"
Dec 10 12:37:41 crc kubenswrapper[4689]: I1210 12:37:41.049715 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/0746befd-89ee-4e07-84b6-cd21c72c4ce0-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"0746befd-89ee-4e07-84b6-cd21c72c4ce0\") " pod="openstack/nova-metadata-0"
Dec 10 12:37:41 crc kubenswrapper[4689]: I1210 12:37:41.152086 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0746befd-89ee-4e07-84b6-cd21c72c4ce0-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"0746befd-89ee-4e07-84b6-cd21c72c4ce0\") " pod="openstack/nova-metadata-0"
Dec 10 12:37:41 crc kubenswrapper[4689]: I1210 12:37:41.152217 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0746befd-89ee-4e07-84b6-cd21c72c4ce0-config-data\") pod \"nova-metadata-0\" (UID: \"0746befd-89ee-4e07-84b6-cd21c72c4ce0\") " pod="openstack/nova-metadata-0"
Dec 10 12:37:41 crc kubenswrapper[4689]: I1210 12:37:41.152337 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0746befd-89ee-4e07-84b6-cd21c72c4ce0-logs\") pod \"nova-metadata-0\" (UID: \"0746befd-89ee-4e07-84b6-cd21c72c4ce0\") " pod="openstack/nova-metadata-0"
Dec 10 12:37:41 crc kubenswrapper[4689]: I1210 12:37:41.152381 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/0746befd-89ee-4e07-84b6-cd21c72c4ce0-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"0746befd-89ee-4e07-84b6-cd21c72c4ce0\") " pod="openstack/nova-metadata-0"
Dec 10 12:37:41 crc kubenswrapper[4689]: I1210 12:37:41.152658 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fhs9v\" (UniqueName: \"kubernetes.io/projected/0746befd-89ee-4e07-84b6-cd21c72c4ce0-kube-api-access-fhs9v\") pod \"nova-metadata-0\" (UID: \"0746befd-89ee-4e07-84b6-cd21c72c4ce0\") " pod="openstack/nova-metadata-0"
Dec 10 12:37:41 crc kubenswrapper[4689]: I1210 12:37:41.152996 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0746befd-89ee-4e07-84b6-cd21c72c4ce0-logs\") pod \"nova-metadata-0\" (UID: \"0746befd-89ee-4e07-84b6-cd21c72c4ce0\") " pod="openstack/nova-metadata-0"
Dec 10 12:37:41 crc kubenswrapper[4689]: I1210 12:37:41.163833 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0746befd-89ee-4e07-84b6-cd21c72c4ce0-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"0746befd-89ee-4e07-84b6-cd21c72c4ce0\") " pod="openstack/nova-metadata-0"
Dec 10 12:37:41 crc kubenswrapper[4689]: I1210 12:37:41.163893 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0746befd-89ee-4e07-84b6-cd21c72c4ce0-config-data\") pod \"nova-metadata-0\" (UID: \"0746befd-89ee-4e07-84b6-cd21c72c4ce0\") " pod="openstack/nova-metadata-0"
Dec 10 12:37:41 crc kubenswrapper[4689]: I1210 12:37:41.170121 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/0746befd-89ee-4e07-84b6-cd21c72c4ce0-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"0746befd-89ee-4e07-84b6-cd21c72c4ce0\") " pod="openstack/nova-metadata-0"
Dec 10 12:37:41 crc kubenswrapper[4689]: I1210 12:37:41.172996 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fhs9v\" (UniqueName: \"kubernetes.io/projected/0746befd-89ee-4e07-84b6-cd21c72c4ce0-kube-api-access-fhs9v\") pod \"nova-metadata-0\" (UID: \"0746befd-89ee-4e07-84b6-cd21c72c4ce0\") " pod="openstack/nova-metadata-0"
Dec 10 12:37:41 crc kubenswrapper[4689]: I1210 12:37:41.231162 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Dec 10 12:37:41 crc kubenswrapper[4689]: I1210 12:37:41.729394 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"]
Dec 10 12:37:41 crc kubenswrapper[4689]: I1210 12:37:41.821595 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"0746befd-89ee-4e07-84b6-cd21c72c4ce0","Type":"ContainerStarted","Data":"da0429e220890d6efdaaa4f681741c9c782538ff31bdf07df7b32f0f60a2f493"}
Dec 10 12:37:41 crc kubenswrapper[4689]: I1210 12:37:41.960150 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ironic-conductor-0"
Dec 10 12:37:42 crc kubenswrapper[4689]: I1210 12:37:42.164699 4689 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0"
Dec 10 12:37:42 crc kubenswrapper[4689]: I1210 12:37:42.165021 4689 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0"
Dec 10 12:37:42 crc kubenswrapper[4689]: I1210 12:37:42.261885 4689 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-p6g5l"
Dec 10 12:37:42 crc kubenswrapper[4689]: I1210 12:37:42.375223 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hz7rm\" (UniqueName: \"kubernetes.io/projected/c20f1556-95fd-4488-bb5b-5dda218a55bf-kube-api-access-hz7rm\") pod \"c20f1556-95fd-4488-bb5b-5dda218a55bf\" (UID: \"c20f1556-95fd-4488-bb5b-5dda218a55bf\") "
Dec 10 12:37:42 crc kubenswrapper[4689]: I1210 12:37:42.375509 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c20f1556-95fd-4488-bb5b-5dda218a55bf-scripts\") pod \"c20f1556-95fd-4488-bb5b-5dda218a55bf\" (UID: \"c20f1556-95fd-4488-bb5b-5dda218a55bf\") "
Dec 10 12:37:42 crc kubenswrapper[4689]: I1210 12:37:42.375600 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c20f1556-95fd-4488-bb5b-5dda218a55bf-config-data\") pod \"c20f1556-95fd-4488-bb5b-5dda218a55bf\" (UID: \"c20f1556-95fd-4488-bb5b-5dda218a55bf\") "
Dec 10 12:37:42 crc kubenswrapper[4689]: I1210 12:37:42.375756 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c20f1556-95fd-4488-bb5b-5dda218a55bf-combined-ca-bundle\") pod \"c20f1556-95fd-4488-bb5b-5dda218a55bf\" (UID: \"c20f1556-95fd-4488-bb5b-5dda218a55bf\") "
Dec 10 12:37:42 crc kubenswrapper[4689]: I1210 12:37:42.390181 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c20f1556-95fd-4488-bb5b-5dda218a55bf-kube-api-access-hz7rm" (OuterVolumeSpecName: "kube-api-access-hz7rm") pod "c20f1556-95fd-4488-bb5b-5dda218a55bf" (UID: "c20f1556-95fd-4488-bb5b-5dda218a55bf"). InnerVolumeSpecName "kube-api-access-hz7rm". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 10 12:37:42 crc kubenswrapper[4689]: I1210 12:37:42.390467 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c20f1556-95fd-4488-bb5b-5dda218a55bf-scripts" (OuterVolumeSpecName: "scripts") pod "c20f1556-95fd-4488-bb5b-5dda218a55bf" (UID: "c20f1556-95fd-4488-bb5b-5dda218a55bf"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 10 12:37:42 crc kubenswrapper[4689]: I1210 12:37:42.411670 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c20f1556-95fd-4488-bb5b-5dda218a55bf-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c20f1556-95fd-4488-bb5b-5dda218a55bf" (UID: "c20f1556-95fd-4488-bb5b-5dda218a55bf"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 10 12:37:42 crc kubenswrapper[4689]: I1210 12:37:42.421753 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c20f1556-95fd-4488-bb5b-5dda218a55bf-config-data" (OuterVolumeSpecName: "config-data") pod "c20f1556-95fd-4488-bb5b-5dda218a55bf" (UID: "c20f1556-95fd-4488-bb5b-5dda218a55bf"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 10 12:37:42 crc kubenswrapper[4689]: I1210 12:37:42.478737 4689 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c20f1556-95fd-4488-bb5b-5dda218a55bf-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Dec 10 12:37:42 crc kubenswrapper[4689]: I1210 12:37:42.478773 4689 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hz7rm\" (UniqueName: \"kubernetes.io/projected/c20f1556-95fd-4488-bb5b-5dda218a55bf-kube-api-access-hz7rm\") on node \"crc\" DevicePath \"\""
Dec 10 12:37:42 crc kubenswrapper[4689]: I1210 12:37:42.478790 4689 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c20f1556-95fd-4488-bb5b-5dda218a55bf-scripts\") on node \"crc\" DevicePath \"\""
Dec 10 12:37:42 crc kubenswrapper[4689]: I1210 12:37:42.478803 4689 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c20f1556-95fd-4488-bb5b-5dda218a55bf-config-data\") on node \"crc\" DevicePath \"\""
Dec 10 12:37:42 crc kubenswrapper[4689]: I1210 12:37:42.515723 4689 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b154fa85-946d-4862-a6fe-a76029e72d58" path="/var/lib/kubelet/pods/b154fa85-946d-4862-a6fe-a76029e72d58/volumes"
Dec 10 12:37:42 crc kubenswrapper[4689]: I1210 12:37:42.517572 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-845d6d6f59-sfjml"
Dec 10 12:37:42 crc kubenswrapper[4689]: I1210 12:37:42.525308 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0"
Dec 10 12:37:42 crc kubenswrapper[4689]: I1210 12:37:42.525342 4689 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0"
Dec 10 12:37:42 crc kubenswrapper[4689]: I1210 12:37:42.582800 4689 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0"
Dec 10 12:37:42 crc kubenswrapper[4689]: I1210 12:37:42.634618 4689 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5784cf869f-qlcgw"]
Dec 10 12:37:42 crc kubenswrapper[4689]: I1210 12:37:42.634942 4689 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5784cf869f-qlcgw" podUID="06cf7890-6267-472d-a29e-c766c8009eda" containerName="dnsmasq-dns" containerID="cri-o://bf7f61dc7e1f601c004bf9c012e400ab23d27e96d229fb76bb6a067a5b5ea13c" gracePeriod=10
Dec 10 12:37:42 crc kubenswrapper[4689]: I1210 12:37:42.770916 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0"
Dec 10 12:37:42 crc kubenswrapper[4689]: I1210 12:37:42.839768 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-p6g5l" event={"ID":"c20f1556-95fd-4488-bb5b-5dda218a55bf","Type":"ContainerDied","Data":"67dd08490a49b8c411c45e364e6737e4c58d5a57064257052095aa3f95ab1cac"}
Dec 10 12:37:42 crc kubenswrapper[4689]: I1210 12:37:42.839819 4689 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="67dd08490a49b8c411c45e364e6737e4c58d5a57064257052095aa3f95ab1cac"
Dec 10 12:37:42 crc kubenswrapper[4689]: I1210 12:37:42.839789 4689 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-p6g5l"
Dec 10 12:37:42 crc kubenswrapper[4689]: I1210 12:37:42.878071 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0"
Dec 10 12:37:42 crc kubenswrapper[4689]: I1210 12:37:42.983644 4689 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"]
Dec 10 12:37:42 crc kubenswrapper[4689]: I1210 12:37:42.984098 4689 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="5f2f7795-fa0a-475a-987d-75bd7b505e1a" containerName="nova-api-log" containerID="cri-o://57373cd126f446f94568e7e4bfa44564163fa9799f7197eb1a500aee6dbe91b9" gracePeriod=30
Dec 10 12:37:42 crc kubenswrapper[4689]: I1210 12:37:42.984155 4689 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="5f2f7795-fa0a-475a-987d-75bd7b505e1a" containerName="nova-api-api" containerID="cri-o://cf2db1fd406acedad348a136d8a1eb4ce593fab381e6c4fe5fa04a044256c4d0" gracePeriod=30
Dec 10 12:37:42 crc kubenswrapper[4689]: I1210 12:37:42.988162 4689 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="5f2f7795-fa0a-475a-987d-75bd7b505e1a" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.190:8774/\": EOF"
Dec 10 12:37:42 crc kubenswrapper[4689]: I1210 12:37:42.988162 4689 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="5f2f7795-fa0a-475a-987d-75bd7b505e1a" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.190:8774/\": EOF"
Dec 10 12:37:43 crc kubenswrapper[4689]: I1210 12:37:43.004830 4689 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"]
Dec 10 12:37:43 crc kubenswrapper[4689]: I1210 12:37:43.525012 4689 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5784cf869f-qlcgw"
Dec 10 12:37:43 crc kubenswrapper[4689]: I1210 12:37:43.526633 4689 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"]
Dec 10 12:37:43 crc kubenswrapper[4689]: I1210 12:37:43.601095 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/06cf7890-6267-472d-a29e-c766c8009eda-ovsdbserver-nb\") pod \"06cf7890-6267-472d-a29e-c766c8009eda\" (UID: \"06cf7890-6267-472d-a29e-c766c8009eda\") "
Dec 10 12:37:43 crc kubenswrapper[4689]: I1210 12:37:43.601157 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/06cf7890-6267-472d-a29e-c766c8009eda-config\") pod \"06cf7890-6267-472d-a29e-c766c8009eda\" (UID: \"06cf7890-6267-472d-a29e-c766c8009eda\") "
Dec 10 12:37:43 crc kubenswrapper[4689]: I1210 12:37:43.601187 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-526qm\" (UniqueName: \"kubernetes.io/projected/06cf7890-6267-472d-a29e-c766c8009eda-kube-api-access-526qm\") pod \"06cf7890-6267-472d-a29e-c766c8009eda\" (UID: \"06cf7890-6267-472d-a29e-c766c8009eda\") "
Dec 10 12:37:43 crc kubenswrapper[4689]: I1210 12:37:43.601206 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/06cf7890-6267-472d-a29e-c766c8009eda-ovsdbserver-sb\") pod \"06cf7890-6267-472d-a29e-c766c8009eda\" (UID: \"06cf7890-6267-472d-a29e-c766c8009eda\") "
Dec 10 12:37:43 crc kubenswrapper[4689]: I1210 12:37:43.601328 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/06cf7890-6267-472d-a29e-c766c8009eda-dns-svc\") pod \"06cf7890-6267-472d-a29e-c766c8009eda\" (UID: \"06cf7890-6267-472d-a29e-c766c8009eda\") "
Dec 10 12:37:43 crc kubenswrapper[4689]: I1210 12:37:43.601350 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/06cf7890-6267-472d-a29e-c766c8009eda-dns-swift-storage-0\") pod \"06cf7890-6267-472d-a29e-c766c8009eda\" (UID: \"06cf7890-6267-472d-a29e-c766c8009eda\") "
Dec 10 12:37:43 crc kubenswrapper[4689]: I1210 12:37:43.606160 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/06cf7890-6267-472d-a29e-c766c8009eda-kube-api-access-526qm" (OuterVolumeSpecName: "kube-api-access-526qm") pod "06cf7890-6267-472d-a29e-c766c8009eda" (UID: "06cf7890-6267-472d-a29e-c766c8009eda"). InnerVolumeSpecName "kube-api-access-526qm". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 10 12:37:43 crc kubenswrapper[4689]: I1210 12:37:43.662418 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/06cf7890-6267-472d-a29e-c766c8009eda-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "06cf7890-6267-472d-a29e-c766c8009eda" (UID: "06cf7890-6267-472d-a29e-c766c8009eda"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 10 12:37:43 crc kubenswrapper[4689]: I1210 12:37:43.663238 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/06cf7890-6267-472d-a29e-c766c8009eda-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "06cf7890-6267-472d-a29e-c766c8009eda" (UID: "06cf7890-6267-472d-a29e-c766c8009eda"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 10 12:37:43 crc kubenswrapper[4689]: I1210 12:37:43.665802 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/06cf7890-6267-472d-a29e-c766c8009eda-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "06cf7890-6267-472d-a29e-c766c8009eda" (UID: "06cf7890-6267-472d-a29e-c766c8009eda"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 10 12:37:43 crc kubenswrapper[4689]: I1210 12:37:43.669232 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/06cf7890-6267-472d-a29e-c766c8009eda-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "06cf7890-6267-472d-a29e-c766c8009eda" (UID: "06cf7890-6267-472d-a29e-c766c8009eda"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 10 12:37:43 crc kubenswrapper[4689]: I1210 12:37:43.673332 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/06cf7890-6267-472d-a29e-c766c8009eda-config" (OuterVolumeSpecName: "config") pod "06cf7890-6267-472d-a29e-c766c8009eda" (UID: "06cf7890-6267-472d-a29e-c766c8009eda"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 10 12:37:43 crc kubenswrapper[4689]: I1210 12:37:43.703322 4689 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/06cf7890-6267-472d-a29e-c766c8009eda-dns-svc\") on node \"crc\" DevicePath \"\""
Dec 10 12:37:43 crc kubenswrapper[4689]: I1210 12:37:43.703364 4689 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/06cf7890-6267-472d-a29e-c766c8009eda-dns-swift-storage-0\") on node \"crc\" DevicePath \"\""
Dec 10 12:37:43 crc kubenswrapper[4689]: I1210 12:37:43.703380 4689 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/06cf7890-6267-472d-a29e-c766c8009eda-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Dec 10 12:37:43 crc kubenswrapper[4689]: I1210 12:37:43.703391 4689 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/06cf7890-6267-472d-a29e-c766c8009eda-config\") on node \"crc\" DevicePath \"\""
Dec 10 12:37:43 crc kubenswrapper[4689]: I1210 12:37:43.703405 4689 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-526qm\" (UniqueName: \"kubernetes.io/projected/06cf7890-6267-472d-a29e-c766c8009eda-kube-api-access-526qm\") on node \"crc\" DevicePath \"\""
Dec 10 12:37:43 crc kubenswrapper[4689]: I1210 12:37:43.703417 4689 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/06cf7890-6267-472d-a29e-c766c8009eda-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Dec 10 12:37:43 crc kubenswrapper[4689]: I1210 12:37:43.849753 4689 generic.go:334] "Generic (PLEG): container finished" podID="06cf7890-6267-472d-a29e-c766c8009eda" containerID="bf7f61dc7e1f601c004bf9c012e400ab23d27e96d229fb76bb6a067a5b5ea13c" exitCode=0
Dec 10 12:37:43 crc kubenswrapper[4689]: I1210 12:37:43.849801 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5784cf869f-qlcgw" event={"ID":"06cf7890-6267-472d-a29e-c766c8009eda","Type":"ContainerDied","Data":"bf7f61dc7e1f601c004bf9c012e400ab23d27e96d229fb76bb6a067a5b5ea13c"}
Dec 10 12:37:43 crc kubenswrapper[4689]: I1210 12:37:43.849839 4689 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5784cf869f-qlcgw"
Dec 10 12:37:43 crc kubenswrapper[4689]: I1210 12:37:43.849861 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5784cf869f-qlcgw" event={"ID":"06cf7890-6267-472d-a29e-c766c8009eda","Type":"ContainerDied","Data":"0506722aa532b03763435ce21c8e92c1b7b188fca24690fc5e6df2cec0d51c7b"}
Dec 10 12:37:43 crc kubenswrapper[4689]: I1210 12:37:43.849892 4689 scope.go:117] "RemoveContainer" containerID="bf7f61dc7e1f601c004bf9c012e400ab23d27e96d229fb76bb6a067a5b5ea13c"
Dec 10 12:37:43 crc kubenswrapper[4689]: I1210 12:37:43.852152 4689 generic.go:334] "Generic (PLEG): container finished" podID="5f2f7795-fa0a-475a-987d-75bd7b505e1a" containerID="57373cd126f446f94568e7e4bfa44564163fa9799f7197eb1a500aee6dbe91b9" exitCode=143
Dec 10 12:37:43 crc kubenswrapper[4689]: I1210 12:37:43.852218 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"5f2f7795-fa0a-475a-987d-75bd7b505e1a","Type":"ContainerDied","Data":"57373cd126f446f94568e7e4bfa44564163fa9799f7197eb1a500aee6dbe91b9"}
Dec 10 12:37:43 crc kubenswrapper[4689]: I1210 12:37:43.858201 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"0746befd-89ee-4e07-84b6-cd21c72c4ce0","Type":"ContainerStarted","Data":"a15dc911dbdd80f6cb6fd9a426552b0925c281f681f6ddddb554936219e07c3f"}
Dec 10 12:37:43 crc kubenswrapper[4689]: I1210 12:37:43.858281 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"0746befd-89ee-4e07-84b6-cd21c72c4ce0","Type":"ContainerStarted","Data":"fc04d0c4c483edbd61ce63d1807980c2b168b325ccbe99df154992890a1e018e"}
Dec 10 12:37:43 crc kubenswrapper[4689]: I1210 12:37:43.858414 4689 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="0746befd-89ee-4e07-84b6-cd21c72c4ce0" containerName="nova-metadata-log" containerID="cri-o://fc04d0c4c483edbd61ce63d1807980c2b168b325ccbe99df154992890a1e018e" gracePeriod=30
Dec 10 12:37:43 crc kubenswrapper[4689]: I1210 12:37:43.858639 4689 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="0746befd-89ee-4e07-84b6-cd21c72c4ce0" containerName="nova-metadata-metadata" containerID="cri-o://a15dc911dbdd80f6cb6fd9a426552b0925c281f681f6ddddb554936219e07c3f" gracePeriod=30
Dec 10 12:37:43 crc kubenswrapper[4689]: I1210 12:37:43.880352 4689 scope.go:117] "RemoveContainer" containerID="267eb284ad06bf70942d1e41a1b7390db2de83ab535dd573be01e0fed5a24c22"
Dec 10 12:37:43 crc kubenswrapper[4689]: I1210 12:37:43.886167 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=3.886148828 podStartE2EDuration="3.886148828s" podCreationTimestamp="2025-12-10 12:37:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 12:37:43.880494149 +0000 UTC m=+1331.668575297" watchObservedRunningTime="2025-12-10 12:37:43.886148828 +0000 UTC m=+1331.674229966"
Dec 10 12:37:43 crc kubenswrapper[4689]: I1210 12:37:43.905249 4689 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5784cf869f-qlcgw"]
Dec 10 12:37:43 crc kubenswrapper[4689]: I1210 12:37:43.911420 4689 scope.go:117] "RemoveContainer" containerID="bf7f61dc7e1f601c004bf9c012e400ab23d27e96d229fb76bb6a067a5b5ea13c"
Dec 10 12:37:43 crc kubenswrapper[4689]: E1210 12:37:43.911867 4689 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bf7f61dc7e1f601c004bf9c012e400ab23d27e96d229fb76bb6a067a5b5ea13c\": container with ID starting with bf7f61dc7e1f601c004bf9c012e400ab23d27e96d229fb76bb6a067a5b5ea13c not found: ID does not exist" containerID="bf7f61dc7e1f601c004bf9c012e400ab23d27e96d229fb76bb6a067a5b5ea13c"
Dec 10 12:37:43 crc kubenswrapper[4689]: I1210 12:37:43.911911 4689 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bf7f61dc7e1f601c004bf9c012e400ab23d27e96d229fb76bb6a067a5b5ea13c"} err="failed to get container status \"bf7f61dc7e1f601c004bf9c012e400ab23d27e96d229fb76bb6a067a5b5ea13c\": rpc error: code = NotFound desc = could not find container \"bf7f61dc7e1f601c004bf9c012e400ab23d27e96d229fb76bb6a067a5b5ea13c\": container with ID starting with bf7f61dc7e1f601c004bf9c012e400ab23d27e96d229fb76bb6a067a5b5ea13c not found: ID does not exist"
Dec 10 12:37:43 crc kubenswrapper[4689]: I1210 12:37:43.911939 4689 scope.go:117] "RemoveContainer" containerID="267eb284ad06bf70942d1e41a1b7390db2de83ab535dd573be01e0fed5a24c22"
Dec 10 12:37:43 crc kubenswrapper[4689]: E1210 12:37:43.912405 4689 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"267eb284ad06bf70942d1e41a1b7390db2de83ab535dd573be01e0fed5a24c22\": container with ID starting with 267eb284ad06bf70942d1e41a1b7390db2de83ab535dd573be01e0fed5a24c22 not found: ID does not exist" containerID="267eb284ad06bf70942d1e41a1b7390db2de83ab535dd573be01e0fed5a24c22"
Dec 10 12:37:43 crc kubenswrapper[4689]: I1210 12:37:43.912446 4689 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"267eb284ad06bf70942d1e41a1b7390db2de83ab535dd573be01e0fed5a24c22"} err="failed to get container status \"267eb284ad06bf70942d1e41a1b7390db2de83ab535dd573be01e0fed5a24c22\": rpc error: code = NotFound desc = could not find container \"267eb284ad06bf70942d1e41a1b7390db2de83ab535dd573be01e0fed5a24c22\": container with ID starting with 267eb284ad06bf70942d1e41a1b7390db2de83ab535dd573be01e0fed5a24c22 not found: ID does not exist"
Dec 10 12:37:43 crc kubenswrapper[4689]: I1210 12:37:43.913014 4689 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5784cf869f-qlcgw"]
Dec 10 12:37:44 crc kubenswrapper[4689]: I1210 12:37:44.441079 4689 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Dec 10 12:37:44 crc kubenswrapper[4689]: I1210 12:37:44.518299 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0746befd-89ee-4e07-84b6-cd21c72c4ce0-config-data\") pod \"0746befd-89ee-4e07-84b6-cd21c72c4ce0\" (UID: \"0746befd-89ee-4e07-84b6-cd21c72c4ce0\") "
Dec 10 12:37:44 crc kubenswrapper[4689]: I1210 12:37:44.518374 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0746befd-89ee-4e07-84b6-cd21c72c4ce0-combined-ca-bundle\") pod \"0746befd-89ee-4e07-84b6-cd21c72c4ce0\" (UID: \"0746befd-89ee-4e07-84b6-cd21c72c4ce0\") "
Dec 10 12:37:44 crc kubenswrapper[4689]: I1210 12:37:44.518413 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/0746befd-89ee-4e07-84b6-cd21c72c4ce0-nova-metadata-tls-certs\") pod \"0746befd-89ee-4e07-84b6-cd21c72c4ce0\" (UID: \"0746befd-89ee-4e07-84b6-cd21c72c4ce0\") "
Dec 10 12:37:44 crc kubenswrapper[4689]: I1210 12:37:44.518446 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fhs9v\" (UniqueName: \"kubernetes.io/projected/0746befd-89ee-4e07-84b6-cd21c72c4ce0-kube-api-access-fhs9v\") pod \"0746befd-89ee-4e07-84b6-cd21c72c4ce0\" (UID: \"0746befd-89ee-4e07-84b6-cd21c72c4ce0\") "
Dec 10 12:37:44 crc kubenswrapper[4689]: I1210 12:37:44.518468 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0746befd-89ee-4e07-84b6-cd21c72c4ce0-logs\") pod \"0746befd-89ee-4e07-84b6-cd21c72c4ce0\" (UID: \"0746befd-89ee-4e07-84b6-cd21c72c4ce0\") "
Dec 10 12:37:44 crc kubenswrapper[4689]: I1210 12:37:44.519419 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0746befd-89ee-4e07-84b6-cd21c72c4ce0-logs" (OuterVolumeSpecName: "logs") pod "0746befd-89ee-4e07-84b6-cd21c72c4ce0" (UID: "0746befd-89ee-4e07-84b6-cd21c72c4ce0"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 10 12:37:44 crc kubenswrapper[4689]: I1210 12:37:44.524731 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0746befd-89ee-4e07-84b6-cd21c72c4ce0-kube-api-access-fhs9v" (OuterVolumeSpecName: "kube-api-access-fhs9v") pod "0746befd-89ee-4e07-84b6-cd21c72c4ce0" (UID: "0746befd-89ee-4e07-84b6-cd21c72c4ce0"). InnerVolumeSpecName "kube-api-access-fhs9v". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 10 12:37:44 crc kubenswrapper[4689]: I1210 12:37:44.528770 4689 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="06cf7890-6267-472d-a29e-c766c8009eda" path="/var/lib/kubelet/pods/06cf7890-6267-472d-a29e-c766c8009eda/volumes"
Dec 10 12:37:44 crc kubenswrapper[4689]: I1210 12:37:44.546312 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0746befd-89ee-4e07-84b6-cd21c72c4ce0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0746befd-89ee-4e07-84b6-cd21c72c4ce0" (UID: "0746befd-89ee-4e07-84b6-cd21c72c4ce0"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 10 12:37:44 crc kubenswrapper[4689]: I1210 12:37:44.567269 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0746befd-89ee-4e07-84b6-cd21c72c4ce0-config-data" (OuterVolumeSpecName: "config-data") pod "0746befd-89ee-4e07-84b6-cd21c72c4ce0" (UID: "0746befd-89ee-4e07-84b6-cd21c72c4ce0"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 10 12:37:44 crc kubenswrapper[4689]: I1210 12:37:44.593850 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0746befd-89ee-4e07-84b6-cd21c72c4ce0-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "0746befd-89ee-4e07-84b6-cd21c72c4ce0" (UID: "0746befd-89ee-4e07-84b6-cd21c72c4ce0"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 10 12:37:44 crc kubenswrapper[4689]: I1210 12:37:44.621159 4689 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/0746befd-89ee-4e07-84b6-cd21c72c4ce0-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\""
Dec 10 12:37:44 crc kubenswrapper[4689]: I1210 12:37:44.621222 4689 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fhs9v\" (UniqueName: \"kubernetes.io/projected/0746befd-89ee-4e07-84b6-cd21c72c4ce0-kube-api-access-fhs9v\") on node \"crc\" DevicePath \"\""
Dec 10 12:37:44 crc kubenswrapper[4689]: I1210 12:37:44.621241 4689 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0746befd-89ee-4e07-84b6-cd21c72c4ce0-logs\") on node \"crc\" DevicePath \"\""
Dec 10 12:37:44 crc kubenswrapper[4689]: I1210 12:37:44.621261 4689 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0746befd-89ee-4e07-84b6-cd21c72c4ce0-config-data\") on node \"crc\" DevicePath \"\""
Dec 10 12:37:44 crc kubenswrapper[4689]: I1210 12:37:44.621279 4689 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0746befd-89ee-4e07-84b6-cd21c72c4ce0-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Dec 10 12:37:44 crc kubenswrapper[4689]: I1210 12:37:44.872257 4689 generic.go:334] "Generic (PLEG): container finished" podID="0746befd-89ee-4e07-84b6-cd21c72c4ce0" containerID="a15dc911dbdd80f6cb6fd9a426552b0925c281f681f6ddddb554936219e07c3f" exitCode=0
Dec 10 12:37:44 crc kubenswrapper[4689]: I1210 12:37:44.872295 4689 generic.go:334] "Generic (PLEG): container finished" podID="0746befd-89ee-4e07-84b6-cd21c72c4ce0" containerID="fc04d0c4c483edbd61ce63d1807980c2b168b325ccbe99df154992890a1e018e" exitCode=143
Dec 10 12:37:44 crc kubenswrapper[4689]: I1210 12:37:44.872369 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"0746befd-89ee-4e07-84b6-cd21c72c4ce0","Type":"ContainerDied","Data":"a15dc911dbdd80f6cb6fd9a426552b0925c281f681f6ddddb554936219e07c3f"}
Dec 10 12:37:44 crc kubenswrapper[4689]: I1210 12:37:44.872428 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"0746befd-89ee-4e07-84b6-cd21c72c4ce0","Type":"ContainerDied","Data":"fc04d0c4c483edbd61ce63d1807980c2b168b325ccbe99df154992890a1e018e"}
Dec 10 12:37:44 crc kubenswrapper[4689]: I1210 12:37:44.872442 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"0746befd-89ee-4e07-84b6-cd21c72c4ce0","Type":"ContainerDied","Data":"da0429e220890d6efdaaa4f681741c9c782538ff31bdf07df7b32f0f60a2f493"}
Dec 10 12:37:44 crc kubenswrapper[4689]: I1210 12:37:44.872438 4689 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="c762e827-42f3-4b9e-ac55-e1d95d357278" containerName="nova-scheduler-scheduler" containerID="cri-o://455d7b101740e4c69059cf46cbbea57d58d6bd0bf59b3b4f6f147a3fa22e20d4" gracePeriod=30
Dec 10 12:37:44 crc kubenswrapper[4689]: I1210 12:37:44.872536 4689 scope.go:117] "RemoveContainer" containerID="a15dc911dbdd80f6cb6fd9a426552b0925c281f681f6ddddb554936219e07c3f"
Dec 10 12:37:44 crc kubenswrapper[4689]: I1210 12:37:44.872297 4689 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Dec 10 12:37:44 crc kubenswrapper[4689]: I1210 12:37:44.893814 4689 scope.go:117] "RemoveContainer" containerID="fc04d0c4c483edbd61ce63d1807980c2b168b325ccbe99df154992890a1e018e"
Dec 10 12:37:44 crc kubenswrapper[4689]: I1210 12:37:44.904635 4689 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"]
Dec 10 12:37:44 crc kubenswrapper[4689]: I1210 12:37:44.922955 4689 scope.go:117] "RemoveContainer" containerID="a15dc911dbdd80f6cb6fd9a426552b0925c281f681f6ddddb554936219e07c3f"
Dec 10 12:37:44 crc kubenswrapper[4689]: E1210 12:37:44.924088 4689 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a15dc911dbdd80f6cb6fd9a426552b0925c281f681f6ddddb554936219e07c3f\": container with ID starting with a15dc911dbdd80f6cb6fd9a426552b0925c281f681f6ddddb554936219e07c3f not found: ID does not exist" containerID="a15dc911dbdd80f6cb6fd9a426552b0925c281f681f6ddddb554936219e07c3f"
Dec 10 12:37:44 crc kubenswrapper[4689]: I1210 12:37:44.924495 4689 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a15dc911dbdd80f6cb6fd9a426552b0925c281f681f6ddddb554936219e07c3f"} err="failed to get container status \"a15dc911dbdd80f6cb6fd9a426552b0925c281f681f6ddddb554936219e07c3f\": rpc error: code = NotFound desc = could not find container \"a15dc911dbdd80f6cb6fd9a426552b0925c281f681f6ddddb554936219e07c3f\": container with ID starting with a15dc911dbdd80f6cb6fd9a426552b0925c281f681f6ddddb554936219e07c3f not found: ID does not exist"
Dec 10 12:37:44 crc kubenswrapper[4689]: I1210 12:37:44.924638 4689 scope.go:117] "RemoveContainer" containerID="fc04d0c4c483edbd61ce63d1807980c2b168b325ccbe99df154992890a1e018e"
Dec 10 12:37:44 crc kubenswrapper[4689]: E1210 12:37:44.925257 4689 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fc04d0c4c483edbd61ce63d1807980c2b168b325ccbe99df154992890a1e018e\": container with ID starting with fc04d0c4c483edbd61ce63d1807980c2b168b325ccbe99df154992890a1e018e not found: ID does not exist" containerID="fc04d0c4c483edbd61ce63d1807980c2b168b325ccbe99df154992890a1e018e"
Dec 10 12:37:44 crc kubenswrapper[4689]: I1210 12:37:44.925303 4689 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fc04d0c4c483edbd61ce63d1807980c2b168b325ccbe99df154992890a1e018e"} err="failed to get container status \"fc04d0c4c483edbd61ce63d1807980c2b168b325ccbe99df154992890a1e018e\": rpc error: code = NotFound desc = could not find container \"fc04d0c4c483edbd61ce63d1807980c2b168b325ccbe99df154992890a1e018e\": container with ID starting with fc04d0c4c483edbd61ce63d1807980c2b168b325ccbe99df154992890a1e018e not found: ID does not exist"
Dec 10 12:37:44 crc kubenswrapper[4689]: I1210 12:37:44.925343 4689 scope.go:117] "RemoveContainer" containerID="a15dc911dbdd80f6cb6fd9a426552b0925c281f681f6ddddb554936219e07c3f"
Dec 10 12:37:44 crc kubenswrapper[4689]: I1210 12:37:44.927069 4689 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a15dc911dbdd80f6cb6fd9a426552b0925c281f681f6ddddb554936219e07c3f"} err="failed to get container status \"a15dc911dbdd80f6cb6fd9a426552b0925c281f681f6ddddb554936219e07c3f\": rpc error: code = NotFound desc = could not find container \"a15dc911dbdd80f6cb6fd9a426552b0925c281f681f6ddddb554936219e07c3f\": container with ID starting with a15dc911dbdd80f6cb6fd9a426552b0925c281f681f6ddddb554936219e07c3f not found: ID does not exist"
Dec 10 12:37:44 crc kubenswrapper[4689]: I1210 12:37:44.927232 4689 scope.go:117] "RemoveContainer" containerID="fc04d0c4c483edbd61ce63d1807980c2b168b325ccbe99df154992890a1e018e"
Dec 10 12:37:44 crc kubenswrapper[4689]: I1210 12:37:44.927741 4689 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fc04d0c4c483edbd61ce63d1807980c2b168b325ccbe99df154992890a1e018e"} err="failed to get container status \"fc04d0c4c483edbd61ce63d1807980c2b168b325ccbe99df154992890a1e018e\": rpc error: code = NotFound desc = could not find container \"fc04d0c4c483edbd61ce63d1807980c2b168b325ccbe99df154992890a1e018e\": container with ID starting with fc04d0c4c483edbd61ce63d1807980c2b168b325ccbe99df154992890a1e018e not found: ID does not exist"
Dec 10 12:37:44 crc kubenswrapper[4689]: I1210 12:37:44.927850 4689 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"]
Dec 10 12:37:44 crc kubenswrapper[4689]: I1210 12:37:44.940122 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"]
Dec 10 12:37:44 crc kubenswrapper[4689]: E1210 12:37:44.940826 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="06cf7890-6267-472d-a29e-c766c8009eda" containerName="init"
Dec 10 12:37:44 crc kubenswrapper[4689]: I1210 12:37:44.940929 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="06cf7890-6267-472d-a29e-c766c8009eda" containerName="init"
Dec 10 12:37:44 crc kubenswrapper[4689]: E1210 12:37:44.941037 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c20f1556-95fd-4488-bb5b-5dda218a55bf" containerName="nova-manage"
Dec 10 12:37:44 crc kubenswrapper[4689]: I1210 12:37:44.941142 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="c20f1556-95fd-4488-bb5b-5dda218a55bf" containerName="nova-manage"
Dec 10 12:37:44 crc kubenswrapper[4689]: E1210 12:37:44.941245 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="06cf7890-6267-472d-a29e-c766c8009eda" containerName="dnsmasq-dns"
Dec 10 12:37:44 crc kubenswrapper[4689]: I1210 12:37:44.941319 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="06cf7890-6267-472d-a29e-c766c8009eda" containerName="dnsmasq-dns"
Dec 10 12:37:44 crc kubenswrapper[4689]: E1210 12:37:44.941407 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0746befd-89ee-4e07-84b6-cd21c72c4ce0" containerName="nova-metadata-log"
Dec 10 12:37:44 crc kubenswrapper[4689]: I1210 12:37:44.941487 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="0746befd-89ee-4e07-84b6-cd21c72c4ce0" containerName="nova-metadata-log"
Dec 10 12:37:44 crc kubenswrapper[4689]: E1210 12:37:44.941582 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0746befd-89ee-4e07-84b6-cd21c72c4ce0" containerName="nova-metadata-metadata"
Dec 10 12:37:44 crc kubenswrapper[4689]: I1210 12:37:44.941659 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="0746befd-89ee-4e07-84b6-cd21c72c4ce0" containerName="nova-metadata-metadata"
Dec 10 12:37:44 crc kubenswrapper[4689]: I1210 12:37:44.941945 4689 memory_manager.go:354] "RemoveStaleState removing state" podUID="0746befd-89ee-4e07-84b6-cd21c72c4ce0" containerName="nova-metadata-log"
Dec 10 12:37:44 crc kubenswrapper[4689]: I1210 12:37:44.942084 4689 memory_manager.go:354] "RemoveStaleState removing state" podUID="06cf7890-6267-472d-a29e-c766c8009eda" containerName="dnsmasq-dns"
Dec 10 12:37:44 crc kubenswrapper[4689]: I1210 12:37:44.942184 4689 memory_manager.go:354] "RemoveStaleState removing state" podUID="c20f1556-95fd-4488-bb5b-5dda218a55bf" containerName="nova-manage"
Dec 10 12:37:44 crc kubenswrapper[4689]: I1210 12:37:44.942282 4689 memory_manager.go:354] "RemoveStaleState removing state" podUID="0746befd-89ee-4e07-84b6-cd21c72c4ce0" containerName="nova-metadata-metadata"
Dec 10 12:37:44 crc kubenswrapper[4689]: I1210 12:37:44.943642 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Dec 10 12:37:44 crc kubenswrapper[4689]: I1210 12:37:44.955207 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data"
Dec 10 12:37:44 crc kubenswrapper[4689]: I1210 12:37:44.955485 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc"
Dec 10 12:37:44 crc kubenswrapper[4689]: I1210 12:37:44.957795 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"]
Dec 10 12:37:45 crc kubenswrapper[4689]: I1210 12:37:45.027932 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pjjmg\" (UniqueName: \"kubernetes.io/projected/ff93cb1f-aee9-49d9-b6fd-ca96103ff881-kube-api-access-pjjmg\") pod \"nova-metadata-0\" (UID: \"ff93cb1f-aee9-49d9-b6fd-ca96103ff881\") " pod="openstack/nova-metadata-0"
Dec 10 12:37:45 crc kubenswrapper[4689]: I1210 12:37:45.028396 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ff93cb1f-aee9-49d9-b6fd-ca96103ff881-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"ff93cb1f-aee9-49d9-b6fd-ca96103ff881\") " pod="openstack/nova-metadata-0"
Dec 10 12:37:45 crc kubenswrapper[4689]: I1210 12:37:45.028622 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ff93cb1f-aee9-49d9-b6fd-ca96103ff881-logs\") pod \"nova-metadata-0\" (UID: \"ff93cb1f-aee9-49d9-b6fd-ca96103ff881\") " pod="openstack/nova-metadata-0"
Dec 10 12:37:45 crc kubenswrapper[4689]: I1210 12:37:45.028723 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/ff93cb1f-aee9-49d9-b6fd-ca96103ff881-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"ff93cb1f-aee9-49d9-b6fd-ca96103ff881\") " pod="openstack/nova-metadata-0"
Dec 10 12:37:45 crc kubenswrapper[4689]: I1210 12:37:45.029705 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ff93cb1f-aee9-49d9-b6fd-ca96103ff881-config-data\") pod \"nova-metadata-0\" (UID: \"ff93cb1f-aee9-49d9-b6fd-ca96103ff881\") " pod="openstack/nova-metadata-0"
Dec 10 12:37:45 crc kubenswrapper[4689]: I1210 12:37:45.131902 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pjjmg\" (UniqueName: \"kubernetes.io/projected/ff93cb1f-aee9-49d9-b6fd-ca96103ff881-kube-api-access-pjjmg\") pod \"nova-metadata-0\" (UID: \"ff93cb1f-aee9-49d9-b6fd-ca96103ff881\") " pod="openstack/nova-metadata-0"
Dec 10 12:37:45 crc kubenswrapper[4689]: I1210 12:37:45.131958 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ff93cb1f-aee9-49d9-b6fd-ca96103ff881-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"ff93cb1f-aee9-49d9-b6fd-ca96103ff881\") " pod="openstack/nova-metadata-0"
Dec 10 12:37:45 crc kubenswrapper[4689]: I1210 12:37:45.132046 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ff93cb1f-aee9-49d9-b6fd-ca96103ff881-logs\") pod \"nova-metadata-0\" (UID: \"ff93cb1f-aee9-49d9-b6fd-ca96103ff881\") " pod="openstack/nova-metadata-0"
Dec 10 12:37:45 crc kubenswrapper[4689]: I1210 12:37:45.132067 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/ff93cb1f-aee9-49d9-b6fd-ca96103ff881-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"ff93cb1f-aee9-49d9-b6fd-ca96103ff881\") " pod="openstack/nova-metadata-0"
Dec 10 12:37:45 crc kubenswrapper[4689]: I1210 12:37:45.132098 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ff93cb1f-aee9-49d9-b6fd-ca96103ff881-config-data\") pod \"nova-metadata-0\" (UID: \"ff93cb1f-aee9-49d9-b6fd-ca96103ff881\") " pod="openstack/nova-metadata-0"
Dec 10 12:37:45 crc kubenswrapper[4689]: I1210 12:37:45.133139 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ff93cb1f-aee9-49d9-b6fd-ca96103ff881-logs\") pod \"nova-metadata-0\" (UID: \"ff93cb1f-aee9-49d9-b6fd-ca96103ff881\") " pod="openstack/nova-metadata-0"
Dec 10 12:37:45 crc kubenswrapper[4689]: I1210 12:37:45.135948 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ff93cb1f-aee9-49d9-b6fd-ca96103ff881-config-data\") pod \"nova-metadata-0\" (UID: \"ff93cb1f-aee9-49d9-b6fd-ca96103ff881\") " pod="openstack/nova-metadata-0"
Dec 10 12:37:45 crc kubenswrapper[4689]: I1210 12:37:45.136445 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/ff93cb1f-aee9-49d9-b6fd-ca96103ff881-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"ff93cb1f-aee9-49d9-b6fd-ca96103ff881\") " pod="openstack/nova-metadata-0"
Dec 10 12:37:45 crc kubenswrapper[4689]: I1210 12:37:45.137642 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ff93cb1f-aee9-49d9-b6fd-ca96103ff881-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"ff93cb1f-aee9-49d9-b6fd-ca96103ff881\") " pod="openstack/nova-metadata-0"
Dec 10 12:37:45 crc kubenswrapper[4689]: I1210 12:37:45.148805 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pjjmg\" (UniqueName: \"kubernetes.io/projected/ff93cb1f-aee9-49d9-b6fd-ca96103ff881-kube-api-access-pjjmg\") pod \"nova-metadata-0\" (UID: \"ff93cb1f-aee9-49d9-b6fd-ca96103ff881\") " pod="openstack/nova-metadata-0"
Dec 10 12:37:45 crc kubenswrapper[4689]: I1210 12:37:45.280818 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Dec 10 12:37:45 crc kubenswrapper[4689]: I1210 12:37:45.770239 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"]
Dec 10 12:37:45 crc kubenswrapper[4689]: W1210 12:37:45.770886 4689 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podff93cb1f_aee9_49d9_b6fd_ca96103ff881.slice/crio-90ded19233c2c3489fbcd5bc1ff0ee36a2812cadd73b51215b2fe451b80f2ab0 WatchSource:0}: Error finding container 90ded19233c2c3489fbcd5bc1ff0ee36a2812cadd73b51215b2fe451b80f2ab0: Status 404 returned error can't find the container with id 90ded19233c2c3489fbcd5bc1ff0ee36a2812cadd73b51215b2fe451b80f2ab0
Dec 10 12:37:45 crc kubenswrapper[4689]: I1210 12:37:45.893254 4689 generic.go:334] "Generic (PLEG): container finished" podID="c762e827-42f3-4b9e-ac55-e1d95d357278" containerID="455d7b101740e4c69059cf46cbbea57d58d6bd0bf59b3b4f6f147a3fa22e20d4" exitCode=0
Dec 10 12:37:45 crc kubenswrapper[4689]: I1210 12:37:45.893362 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"c762e827-42f3-4b9e-ac55-e1d95d357278","Type":"ContainerDied","Data":"455d7b101740e4c69059cf46cbbea57d58d6bd0bf59b3b4f6f147a3fa22e20d4"}
Dec 10 12:37:45 crc kubenswrapper[4689]: I1210 12:37:45.899099 4689 generic.go:334] "Generic (PLEG): container finished" podID="e4cdde9e-99ce-4f9d-8699-0c1a1cd84d9f" containerID="256f6d4b99b62b9d777833314af7c5e3614c45ffb55cf1bc7e37a33caccc041c" exitCode=137
Dec 10 12:37:45 crc kubenswrapper[4689]: I1210 12:37:45.899161 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e4cdde9e-99ce-4f9d-8699-0c1a1cd84d9f","Type":"ContainerDied","Data":"256f6d4b99b62b9d777833314af7c5e3614c45ffb55cf1bc7e37a33caccc041c"}
Dec 10 12:37:45 crc kubenswrapper[4689]: I1210 12:37:45.899189 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e4cdde9e-99ce-4f9d-8699-0c1a1cd84d9f","Type":"ContainerDied","Data":"23aeaf687bc665c4e225b15fc2bd25a5004d5164bfb24efdedf16ce23344b970"}
Dec 10 12:37:45 crc kubenswrapper[4689]: I1210 12:37:45.899200 4689 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="23aeaf687bc665c4e225b15fc2bd25a5004d5164bfb24efdedf16ce23344b970"
Dec 10 12:37:45 crc kubenswrapper[4689]: I1210 12:37:45.901775 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"ff93cb1f-aee9-49d9-b6fd-ca96103ff881","Type":"ContainerStarted","Data":"90ded19233c2c3489fbcd5bc1ff0ee36a2812cadd73b51215b2fe451b80f2ab0"}
Dec 10 12:37:45 crc kubenswrapper[4689]: I1210 12:37:45.981814 4689 util.go:48] "No ready sandbox for pod can be found.
Need to start a new one" pod="openstack/ceilometer-0" Dec 10 12:37:46 crc kubenswrapper[4689]: I1210 12:37:46.051660 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e4cdde9e-99ce-4f9d-8699-0c1a1cd84d9f-log-httpd\") pod \"e4cdde9e-99ce-4f9d-8699-0c1a1cd84d9f\" (UID: \"e4cdde9e-99ce-4f9d-8699-0c1a1cd84d9f\") " Dec 10 12:37:46 crc kubenswrapper[4689]: I1210 12:37:46.051722 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e4cdde9e-99ce-4f9d-8699-0c1a1cd84d9f-run-httpd\") pod \"e4cdde9e-99ce-4f9d-8699-0c1a1cd84d9f\" (UID: \"e4cdde9e-99ce-4f9d-8699-0c1a1cd84d9f\") " Dec 10 12:37:46 crc kubenswrapper[4689]: I1210 12:37:46.051787 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e4cdde9e-99ce-4f9d-8699-0c1a1cd84d9f-config-data\") pod \"e4cdde9e-99ce-4f9d-8699-0c1a1cd84d9f\" (UID: \"e4cdde9e-99ce-4f9d-8699-0c1a1cd84d9f\") " Dec 10 12:37:46 crc kubenswrapper[4689]: I1210 12:37:46.051847 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ztvnw\" (UniqueName: \"kubernetes.io/projected/e4cdde9e-99ce-4f9d-8699-0c1a1cd84d9f-kube-api-access-ztvnw\") pod \"e4cdde9e-99ce-4f9d-8699-0c1a1cd84d9f\" (UID: \"e4cdde9e-99ce-4f9d-8699-0c1a1cd84d9f\") " Dec 10 12:37:46 crc kubenswrapper[4689]: I1210 12:37:46.051917 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e4cdde9e-99ce-4f9d-8699-0c1a1cd84d9f-scripts\") pod \"e4cdde9e-99ce-4f9d-8699-0c1a1cd84d9f\" (UID: \"e4cdde9e-99ce-4f9d-8699-0c1a1cd84d9f\") " Dec 10 12:37:46 crc kubenswrapper[4689]: I1210 12:37:46.051938 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e4cdde9e-99ce-4f9d-8699-0c1a1cd84d9f-sg-core-conf-yaml\") pod \"e4cdde9e-99ce-4f9d-8699-0c1a1cd84d9f\" (UID: \"e4cdde9e-99ce-4f9d-8699-0c1a1cd84d9f\") " Dec 10 12:37:46 crc kubenswrapper[4689]: I1210 12:37:46.052107 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e4cdde9e-99ce-4f9d-8699-0c1a1cd84d9f-combined-ca-bundle\") pod \"e4cdde9e-99ce-4f9d-8699-0c1a1cd84d9f\" (UID: \"e4cdde9e-99ce-4f9d-8699-0c1a1cd84d9f\") " Dec 10 12:37:46 crc kubenswrapper[4689]: I1210 12:37:46.053279 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e4cdde9e-99ce-4f9d-8699-0c1a1cd84d9f-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "e4cdde9e-99ce-4f9d-8699-0c1a1cd84d9f" (UID: "e4cdde9e-99ce-4f9d-8699-0c1a1cd84d9f"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 10 12:37:46 crc kubenswrapper[4689]: I1210 12:37:46.053802 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e4cdde9e-99ce-4f9d-8699-0c1a1cd84d9f-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "e4cdde9e-99ce-4f9d-8699-0c1a1cd84d9f" (UID: "e4cdde9e-99ce-4f9d-8699-0c1a1cd84d9f"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 10 12:37:46 crc kubenswrapper[4689]: I1210 12:37:46.058305 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e4cdde9e-99ce-4f9d-8699-0c1a1cd84d9f-scripts" (OuterVolumeSpecName: "scripts") pod "e4cdde9e-99ce-4f9d-8699-0c1a1cd84d9f" (UID: "e4cdde9e-99ce-4f9d-8699-0c1a1cd84d9f"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 12:37:46 crc kubenswrapper[4689]: I1210 12:37:46.066543 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e4cdde9e-99ce-4f9d-8699-0c1a1cd84d9f-kube-api-access-ztvnw" (OuterVolumeSpecName: "kube-api-access-ztvnw") pod "e4cdde9e-99ce-4f9d-8699-0c1a1cd84d9f" (UID: "e4cdde9e-99ce-4f9d-8699-0c1a1cd84d9f"). InnerVolumeSpecName "kube-api-access-ztvnw". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 12:37:46 crc kubenswrapper[4689]: I1210 12:37:46.086778 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e4cdde9e-99ce-4f9d-8699-0c1a1cd84d9f-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "e4cdde9e-99ce-4f9d-8699-0c1a1cd84d9f" (UID: "e4cdde9e-99ce-4f9d-8699-0c1a1cd84d9f"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 12:37:46 crc kubenswrapper[4689]: I1210 12:37:46.139312 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e4cdde9e-99ce-4f9d-8699-0c1a1cd84d9f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e4cdde9e-99ce-4f9d-8699-0c1a1cd84d9f" (UID: "e4cdde9e-99ce-4f9d-8699-0c1a1cd84d9f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 12:37:46 crc kubenswrapper[4689]: I1210 12:37:46.155013 4689 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e4cdde9e-99ce-4f9d-8699-0c1a1cd84d9f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 10 12:37:46 crc kubenswrapper[4689]: I1210 12:37:46.155051 4689 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e4cdde9e-99ce-4f9d-8699-0c1a1cd84d9f-log-httpd\") on node \"crc\" DevicePath \"\"" Dec 10 12:37:46 crc kubenswrapper[4689]: I1210 12:37:46.155061 4689 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e4cdde9e-99ce-4f9d-8699-0c1a1cd84d9f-run-httpd\") on node \"crc\" DevicePath \"\"" Dec 10 12:37:46 crc kubenswrapper[4689]: I1210 12:37:46.155072 4689 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ztvnw\" (UniqueName: \"kubernetes.io/projected/e4cdde9e-99ce-4f9d-8699-0c1a1cd84d9f-kube-api-access-ztvnw\") on node \"crc\" DevicePath \"\"" Dec 10 12:37:46 crc kubenswrapper[4689]: I1210 12:37:46.155086 4689 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e4cdde9e-99ce-4f9d-8699-0c1a1cd84d9f-scripts\") on node \"crc\" DevicePath \"\"" Dec 10 12:37:46 crc kubenswrapper[4689]: I1210 12:37:46.155097 4689 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e4cdde9e-99ce-4f9d-8699-0c1a1cd84d9f-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Dec 10 12:37:46 crc kubenswrapper[4689]: I1210 12:37:46.183144 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for 
volume "kubernetes.io/secret/e4cdde9e-99ce-4f9d-8699-0c1a1cd84d9f-config-data" (OuterVolumeSpecName: "config-data") pod "e4cdde9e-99ce-4f9d-8699-0c1a1cd84d9f" (UID: "e4cdde9e-99ce-4f9d-8699-0c1a1cd84d9f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 12:37:46 crc kubenswrapper[4689]: I1210 12:37:46.228645 4689 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Dec 10 12:37:46 crc kubenswrapper[4689]: I1210 12:37:46.263513 4689 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e4cdde9e-99ce-4f9d-8699-0c1a1cd84d9f-config-data\") on node \"crc\" DevicePath \"\"" Dec 10 12:37:46 crc kubenswrapper[4689]: I1210 12:37:46.364912 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c762e827-42f3-4b9e-ac55-e1d95d357278-combined-ca-bundle\") pod \"c762e827-42f3-4b9e-ac55-e1d95d357278\" (UID: \"c762e827-42f3-4b9e-ac55-e1d95d357278\") " Dec 10 12:37:46 crc kubenswrapper[4689]: I1210 12:37:46.365283 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c762e827-42f3-4b9e-ac55-e1d95d357278-config-data\") pod \"c762e827-42f3-4b9e-ac55-e1d95d357278\" (UID: \"c762e827-42f3-4b9e-ac55-e1d95d357278\") " Dec 10 12:37:46 crc kubenswrapper[4689]: I1210 12:37:46.365384 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-57x7m\" (UniqueName: \"kubernetes.io/projected/c762e827-42f3-4b9e-ac55-e1d95d357278-kube-api-access-57x7m\") pod \"c762e827-42f3-4b9e-ac55-e1d95d357278\" (UID: \"c762e827-42f3-4b9e-ac55-e1d95d357278\") " Dec 10 12:37:46 crc kubenswrapper[4689]: I1210 12:37:46.368788 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c762e827-42f3-4b9e-ac55-e1d95d357278-kube-api-access-57x7m" (OuterVolumeSpecName: "kube-api-access-57x7m") pod "c762e827-42f3-4b9e-ac55-e1d95d357278" (UID: "c762e827-42f3-4b9e-ac55-e1d95d357278"). InnerVolumeSpecName "kube-api-access-57x7m". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 12:37:46 crc kubenswrapper[4689]: I1210 12:37:46.388253 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c762e827-42f3-4b9e-ac55-e1d95d357278-config-data" (OuterVolumeSpecName: "config-data") pod "c762e827-42f3-4b9e-ac55-e1d95d357278" (UID: "c762e827-42f3-4b9e-ac55-e1d95d357278"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 12:37:46 crc kubenswrapper[4689]: I1210 12:37:46.389244 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c762e827-42f3-4b9e-ac55-e1d95d357278-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c762e827-42f3-4b9e-ac55-e1d95d357278" (UID: "c762e827-42f3-4b9e-ac55-e1d95d357278"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 12:37:46 crc kubenswrapper[4689]: I1210 12:37:46.467131 4689 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-57x7m\" (UniqueName: \"kubernetes.io/projected/c762e827-42f3-4b9e-ac55-e1d95d357278-kube-api-access-57x7m\") on node \"crc\" DevicePath \"\"" Dec 10 12:37:46 crc kubenswrapper[4689]: I1210 12:37:46.467171 4689 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c762e827-42f3-4b9e-ac55-e1d95d357278-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 10 12:37:46 crc kubenswrapper[4689]: I1210 12:37:46.467184 4689 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c762e827-42f3-4b9e-ac55-e1d95d357278-config-data\") on node \"crc\" DevicePath \"\"" Dec 10 12:37:46 crc kubenswrapper[4689]: I1210 12:37:46.512543 4689 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0746befd-89ee-4e07-84b6-cd21c72c4ce0" path="/var/lib/kubelet/pods/0746befd-89ee-4e07-84b6-cd21c72c4ce0/volumes" Dec 10 12:37:46 crc kubenswrapper[4689]: E1210 12:37:46.799961 4689 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod099d539b_5f0e_41e2_b344_d7c10c52cf16.slice/crio-fcfb339e401c5ebefdaa0ff873d1eb3d984c1f97a58c80b5051aa52851379d98.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod099d539b_5f0e_41e2_b344_d7c10c52cf16.slice/crio-conmon-fcfb339e401c5ebefdaa0ff873d1eb3d984c1f97a58c80b5051aa52851379d98.scope\": RecentStats: unable to find data in memory cache]" Dec 10 12:37:46 crc kubenswrapper[4689]: I1210 12:37:46.911428 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"c762e827-42f3-4b9e-ac55-e1d95d357278","Type":"ContainerDied","Data":"ae4c4ebc5888814f2aec5b17b60c4038d0cc72bad7291652cfb5ca58850ba3a1"} Dec 10 12:37:46 crc kubenswrapper[4689]: I1210 12:37:46.911462 4689 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Dec 10 12:37:46 crc kubenswrapper[4689]: I1210 12:37:46.911500 4689 scope.go:117] "RemoveContainer" containerID="455d7b101740e4c69059cf46cbbea57d58d6bd0bf59b3b4f6f147a3fa22e20d4" Dec 10 12:37:46 crc kubenswrapper[4689]: I1210 12:37:46.916205 4689 generic.go:334] "Generic (PLEG): container finished" podID="099d539b-5f0e-41e2-b344-d7c10c52cf16" containerID="fcfb339e401c5ebefdaa0ff873d1eb3d984c1f97a58c80b5051aa52851379d98" exitCode=0 Dec 10 12:37:46 crc kubenswrapper[4689]: I1210 12:37:46.916257 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-njfxw" event={"ID":"099d539b-5f0e-41e2-b344-d7c10c52cf16","Type":"ContainerDied","Data":"fcfb339e401c5ebefdaa0ff873d1eb3d984c1f97a58c80b5051aa52851379d98"} Dec 10 12:37:46 crc kubenswrapper[4689]: I1210 12:37:46.919081 4689 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 10 12:37:46 crc kubenswrapper[4689]: I1210 12:37:46.919126 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"ff93cb1f-aee9-49d9-b6fd-ca96103ff881","Type":"ContainerStarted","Data":"5967dad926ef9db71eff1c3a709a4feccf47bbc2bd7f90e880c11319d24d245c"} Dec 10 12:37:46 crc kubenswrapper[4689]: I1210 12:37:46.919150 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"ff93cb1f-aee9-49d9-b6fd-ca96103ff881","Type":"ContainerStarted","Data":"40d3c5864a75d6a3716585681c27175b5751c35e23d6c1814072363e3f88f5a9"} Dec 10 12:37:46 crc kubenswrapper[4689]: I1210 12:37:46.964018 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.963996953 podStartE2EDuration="2.963996953s" podCreationTimestamp="2025-12-10 12:37:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 12:37:46.956292263 +0000 UTC m=+1334.744373431" watchObservedRunningTime="2025-12-10 12:37:46.963996953 +0000 UTC m=+1334.752078091" Dec 10 12:37:47 crc kubenswrapper[4689]: I1210 12:37:47.016050 4689 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 10 12:37:47 crc kubenswrapper[4689]: I1210 12:37:47.038515 4689 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Dec 10 12:37:47 crc kubenswrapper[4689]: I1210 12:37:47.051054 4689 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Dec 10 12:37:47 crc kubenswrapper[4689]: I1210 12:37:47.062760 4689 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Dec 10 12:37:47 crc kubenswrapper[4689]: I1210 12:37:47.072266 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Dec 10 12:37:47 crc kubenswrapper[4689]: E1210 12:37:47.073123 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e4cdde9e-99ce-4f9d-8699-0c1a1cd84d9f" containerName="ceilometer-central-agent" Dec 10 12:37:47 crc kubenswrapper[4689]: I1210 12:37:47.073150 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="e4cdde9e-99ce-4f9d-8699-0c1a1cd84d9f" containerName="ceilometer-central-agent" Dec 10 12:37:47 crc kubenswrapper[4689]: E1210 12:37:47.073194 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e4cdde9e-99ce-4f9d-8699-0c1a1cd84d9f" containerName="sg-core" Dec 10 12:37:47 crc kubenswrapper[4689]: I1210 12:37:47.073216 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="e4cdde9e-99ce-4f9d-8699-0c1a1cd84d9f" containerName="sg-core" Dec 10 12:37:47 crc kubenswrapper[4689]: E1210 12:37:47.073236 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c762e827-42f3-4b9e-ac55-e1d95d357278" containerName="nova-scheduler-scheduler" Dec 10 12:37:47 crc kubenswrapper[4689]: I1210 12:37:47.073246 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="c762e827-42f3-4b9e-ac55-e1d95d357278" containerName="nova-scheduler-scheduler" Dec 10 12:37:47 crc kubenswrapper[4689]: E1210 12:37:47.073259 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e4cdde9e-99ce-4f9d-8699-0c1a1cd84d9f" containerName="proxy-httpd" Dec 10 12:37:47 crc kubenswrapper[4689]: I1210 12:37:47.073266 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="e4cdde9e-99ce-4f9d-8699-0c1a1cd84d9f" 
containerName="proxy-httpd" Dec 10 12:37:47 crc kubenswrapper[4689]: E1210 12:37:47.073282 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e4cdde9e-99ce-4f9d-8699-0c1a1cd84d9f" containerName="ceilometer-notification-agent" Dec 10 12:37:47 crc kubenswrapper[4689]: I1210 12:37:47.073292 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="e4cdde9e-99ce-4f9d-8699-0c1a1cd84d9f" containerName="ceilometer-notification-agent" Dec 10 12:37:47 crc kubenswrapper[4689]: I1210 12:37:47.073506 4689 memory_manager.go:354] "RemoveStaleState removing state" podUID="e4cdde9e-99ce-4f9d-8699-0c1a1cd84d9f" containerName="ceilometer-central-agent" Dec 10 12:37:47 crc kubenswrapper[4689]: I1210 12:37:47.073525 4689 memory_manager.go:354] "RemoveStaleState removing state" podUID="e4cdde9e-99ce-4f9d-8699-0c1a1cd84d9f" containerName="sg-core" Dec 10 12:37:47 crc kubenswrapper[4689]: I1210 12:37:47.073541 4689 memory_manager.go:354] "RemoveStaleState removing state" podUID="e4cdde9e-99ce-4f9d-8699-0c1a1cd84d9f" containerName="proxy-httpd" Dec 10 12:37:47 crc kubenswrapper[4689]: I1210 12:37:47.073553 4689 memory_manager.go:354] "RemoveStaleState removing state" podUID="e4cdde9e-99ce-4f9d-8699-0c1a1cd84d9f" containerName="ceilometer-notification-agent" Dec 10 12:37:47 crc kubenswrapper[4689]: I1210 12:37:47.073573 4689 memory_manager.go:354] "RemoveStaleState removing state" podUID="c762e827-42f3-4b9e-ac55-e1d95d357278" containerName="nova-scheduler-scheduler" Dec 10 12:37:47 crc kubenswrapper[4689]: I1210 12:37:47.075677 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 10 12:37:47 crc kubenswrapper[4689]: I1210 12:37:47.078047 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Dec 10 12:37:47 crc kubenswrapper[4689]: I1210 12:37:47.078122 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Dec 10 12:37:47 crc kubenswrapper[4689]: I1210 12:37:47.083109 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Dec 10 12:37:47 crc kubenswrapper[4689]: I1210 12:37:47.100515 4689 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Dec 10 12:37:47 crc kubenswrapper[4689]: I1210 12:37:47.104628 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Dec 10 12:37:47 crc kubenswrapper[4689]: I1210 12:37:47.109484 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Dec 10 12:37:47 crc kubenswrapper[4689]: I1210 12:37:47.125728 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 10 12:37:47 crc kubenswrapper[4689]: I1210 12:37:47.222936 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d8812cbb-3130-4658-abfb-69c933628fcb-run-httpd\") pod \"ceilometer-0\" (UID: \"d8812cbb-3130-4658-abfb-69c933628fcb\") " pod="openstack/ceilometer-0" Dec 10 12:37:47 crc kubenswrapper[4689]: I1210 12:37:47.223115 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-drbw9\" (UniqueName: \"kubernetes.io/projected/434ae256-23eb-408f-8b81-1956600d3c2f-kube-api-access-drbw9\") pod \"nova-scheduler-0\" (UID: \"434ae256-23eb-408f-8b81-1956600d3c2f\") " pod="openstack/nova-scheduler-0" Dec 10 12:37:47 crc kubenswrapper[4689]: I1210 12:37:47.223154 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d8812cbb-3130-4658-abfb-69c933628fcb-config-data\") pod \"ceilometer-0\" (UID: \"d8812cbb-3130-4658-abfb-69c933628fcb\") " pod="openstack/ceilometer-0" Dec 10 12:37:47 crc kubenswrapper[4689]: I1210 12:37:47.223184 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d8812cbb-3130-4658-abfb-69c933628fcb-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"d8812cbb-3130-4658-abfb-69c933628fcb\") " pod="openstack/ceilometer-0" Dec 10 12:37:47 crc kubenswrapper[4689]: I1210 12:37:47.223259 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d8812cbb-3130-4658-abfb-69c933628fcb-log-httpd\") pod \"ceilometer-0\" (UID: \"d8812cbb-3130-4658-abfb-69c933628fcb\") " pod="openstack/ceilometer-0" Dec 10 12:37:47 crc kubenswrapper[4689]: I1210 12:37:47.223288 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d8812cbb-3130-4658-abfb-69c933628fcb-scripts\") pod \"ceilometer-0\" (UID: \"d8812cbb-3130-4658-abfb-69c933628fcb\") " pod="openstack/ceilometer-0" Dec 10 12:37:47 crc kubenswrapper[4689]: I1210 12:37:47.223360 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/434ae256-23eb-408f-8b81-1956600d3c2f-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"434ae256-23eb-408f-8b81-1956600d3c2f\") " pod="openstack/nova-scheduler-0" Dec 10 12:37:47 crc kubenswrapper[4689]: I1210 12:37:47.223429 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d8812cbb-3130-4658-abfb-69c933628fcb-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"d8812cbb-3130-4658-abfb-69c933628fcb\") " pod="openstack/ceilometer-0" Dec 10 12:37:47 crc 
kubenswrapper[4689]: I1210 12:37:47.223461 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z5xkm\" (UniqueName: \"kubernetes.io/projected/d8812cbb-3130-4658-abfb-69c933628fcb-kube-api-access-z5xkm\") pod \"ceilometer-0\" (UID: \"d8812cbb-3130-4658-abfb-69c933628fcb\") " pod="openstack/ceilometer-0" Dec 10 12:37:47 crc kubenswrapper[4689]: I1210 12:37:47.223487 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/434ae256-23eb-408f-8b81-1956600d3c2f-config-data\") pod \"nova-scheduler-0\" (UID: \"434ae256-23eb-408f-8b81-1956600d3c2f\") " pod="openstack/nova-scheduler-0" Dec 10 12:37:47 crc kubenswrapper[4689]: I1210 12:37:47.325716 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z5xkm\" (UniqueName: \"kubernetes.io/projected/d8812cbb-3130-4658-abfb-69c933628fcb-kube-api-access-z5xkm\") pod \"ceilometer-0\" (UID: \"d8812cbb-3130-4658-abfb-69c933628fcb\") " pod="openstack/ceilometer-0" Dec 10 12:37:47 crc kubenswrapper[4689]: I1210 12:37:47.325840 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/434ae256-23eb-408f-8b81-1956600d3c2f-config-data\") pod \"nova-scheduler-0\" (UID: \"434ae256-23eb-408f-8b81-1956600d3c2f\") " pod="openstack/nova-scheduler-0" Dec 10 12:37:47 crc kubenswrapper[4689]: I1210 12:37:47.325932 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d8812cbb-3130-4658-abfb-69c933628fcb-run-httpd\") pod \"ceilometer-0\" (UID: \"d8812cbb-3130-4658-abfb-69c933628fcb\") " pod="openstack/ceilometer-0" Dec 10 12:37:47 crc kubenswrapper[4689]: I1210 12:37:47.325965 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-drbw9\" (UniqueName: \"kubernetes.io/projected/434ae256-23eb-408f-8b81-1956600d3c2f-kube-api-access-drbw9\") pod \"nova-scheduler-0\" (UID: \"434ae256-23eb-408f-8b81-1956600d3c2f\") " pod="openstack/nova-scheduler-0" Dec 10 12:37:47 crc kubenswrapper[4689]: I1210 12:37:47.326011 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d8812cbb-3130-4658-abfb-69c933628fcb-config-data\") pod \"ceilometer-0\" (UID: \"d8812cbb-3130-4658-abfb-69c933628fcb\") " pod="openstack/ceilometer-0" Dec 10 12:37:47 crc kubenswrapper[4689]: I1210 12:37:47.326040 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d8812cbb-3130-4658-abfb-69c933628fcb-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"d8812cbb-3130-4658-abfb-69c933628fcb\") " pod="openstack/ceilometer-0" Dec 10 12:37:47 crc kubenswrapper[4689]: I1210 12:37:47.326084 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d8812cbb-3130-4658-abfb-69c933628fcb-log-httpd\") pod \"ceilometer-0\" (UID: \"d8812cbb-3130-4658-abfb-69c933628fcb\") " pod="openstack/ceilometer-0" Dec 10 12:37:47 crc kubenswrapper[4689]: I1210 12:37:47.326106 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d8812cbb-3130-4658-abfb-69c933628fcb-scripts\") pod \"ceilometer-0\" (UID: 
\"d8812cbb-3130-4658-abfb-69c933628fcb\") " pod="openstack/ceilometer-0" Dec 10 12:37:47 crc kubenswrapper[4689]: I1210 12:37:47.326161 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/434ae256-23eb-408f-8b81-1956600d3c2f-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"434ae256-23eb-408f-8b81-1956600d3c2f\") " pod="openstack/nova-scheduler-0" Dec 10 12:37:47 crc kubenswrapper[4689]: I1210 12:37:47.326224 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d8812cbb-3130-4658-abfb-69c933628fcb-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"d8812cbb-3130-4658-abfb-69c933628fcb\") " pod="openstack/ceilometer-0" Dec 10 12:37:47 crc kubenswrapper[4689]: I1210 12:37:47.327431 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d8812cbb-3130-4658-abfb-69c933628fcb-log-httpd\") pod \"ceilometer-0\" (UID: \"d8812cbb-3130-4658-abfb-69c933628fcb\") " pod="openstack/ceilometer-0" Dec 10 12:37:47 crc kubenswrapper[4689]: I1210 12:37:47.327575 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d8812cbb-3130-4658-abfb-69c933628fcb-run-httpd\") pod \"ceilometer-0\" (UID: \"d8812cbb-3130-4658-abfb-69c933628fcb\") " pod="openstack/ceilometer-0" Dec 10 12:37:47 crc kubenswrapper[4689]: I1210 12:37:47.332404 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/434ae256-23eb-408f-8b81-1956600d3c2f-config-data\") pod \"nova-scheduler-0\" (UID: \"434ae256-23eb-408f-8b81-1956600d3c2f\") " pod="openstack/nova-scheduler-0" Dec 10 12:37:47 crc kubenswrapper[4689]: I1210 12:37:47.332426 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/434ae256-23eb-408f-8b81-1956600d3c2f-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"434ae256-23eb-408f-8b81-1956600d3c2f\") " pod="openstack/nova-scheduler-0" Dec 10 12:37:47 crc kubenswrapper[4689]: I1210 12:37:47.332661 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d8812cbb-3130-4658-abfb-69c933628fcb-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"d8812cbb-3130-4658-abfb-69c933628fcb\") " pod="openstack/ceilometer-0" Dec 10 12:37:47 crc kubenswrapper[4689]: I1210 12:37:47.333150 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d8812cbb-3130-4658-abfb-69c933628fcb-config-data\") pod \"ceilometer-0\" (UID: \"d8812cbb-3130-4658-abfb-69c933628fcb\") " pod="openstack/ceilometer-0" Dec 10 12:37:47 crc kubenswrapper[4689]: I1210 12:37:47.343182 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d8812cbb-3130-4658-abfb-69c933628fcb-scripts\") pod \"ceilometer-0\" (UID: \"d8812cbb-3130-4658-abfb-69c933628fcb\") " pod="openstack/ceilometer-0" Dec 10 12:37:47 crc kubenswrapper[4689]: I1210 12:37:47.346305 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z5xkm\" (UniqueName: \"kubernetes.io/projected/d8812cbb-3130-4658-abfb-69c933628fcb-kube-api-access-z5xkm\") pod \"ceilometer-0\" (UID: \"d8812cbb-3130-4658-abfb-69c933628fcb\") " 
pod="openstack/ceilometer-0" Dec 10 12:37:47 crc kubenswrapper[4689]: I1210 12:37:47.352290 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d8812cbb-3130-4658-abfb-69c933628fcb-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"d8812cbb-3130-4658-abfb-69c933628fcb\") " pod="openstack/ceilometer-0" Dec 10 12:37:47 crc kubenswrapper[4689]: I1210 12:37:47.355052 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-drbw9\" (UniqueName: \"kubernetes.io/projected/434ae256-23eb-408f-8b81-1956600d3c2f-kube-api-access-drbw9\") pod \"nova-scheduler-0\" (UID: \"434ae256-23eb-408f-8b81-1956600d3c2f\") " pod="openstack/nova-scheduler-0" Dec 10 12:37:47 crc kubenswrapper[4689]: I1210 12:37:47.402010 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 10 12:37:47 crc kubenswrapper[4689]: I1210 12:37:47.422287 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Dec 10 12:37:47 crc kubenswrapper[4689]: I1210 12:37:47.936844 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Dec 10 12:37:47 crc kubenswrapper[4689]: I1210 12:37:47.959677 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 10 12:37:47 crc kubenswrapper[4689]: W1210 12:37:47.964152 4689 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd8812cbb_3130_4658_abfb_69c933628fcb.slice/crio-d58bf762feae387aefdf17ae36d97f7b9a4374e348ce0f4382b7860c38b600b8 WatchSource:0}: Error finding container d58bf762feae387aefdf17ae36d97f7b9a4374e348ce0f4382b7860c38b600b8: Status 404 returned error can't find the container with id d58bf762feae387aefdf17ae36d97f7b9a4374e348ce0f4382b7860c38b600b8 Dec 10 12:37:48 crc kubenswrapper[4689]: I1210 12:37:48.340228 4689 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-njfxw" Dec 10 12:37:48 crc kubenswrapper[4689]: I1210 12:37:48.480851 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/099d539b-5f0e-41e2-b344-d7c10c52cf16-combined-ca-bundle\") pod \"099d539b-5f0e-41e2-b344-d7c10c52cf16\" (UID: \"099d539b-5f0e-41e2-b344-d7c10c52cf16\") " Dec 10 12:37:48 crc kubenswrapper[4689]: I1210 12:37:48.480953 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/099d539b-5f0e-41e2-b344-d7c10c52cf16-scripts\") pod \"099d539b-5f0e-41e2-b344-d7c10c52cf16\" (UID: \"099d539b-5f0e-41e2-b344-d7c10c52cf16\") " Dec 10 12:37:48 crc kubenswrapper[4689]: I1210 12:37:48.481045 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mlzvt\" (UniqueName: \"kubernetes.io/projected/099d539b-5f0e-41e2-b344-d7c10c52cf16-kube-api-access-mlzvt\") pod \"099d539b-5f0e-41e2-b344-d7c10c52cf16\" (UID: \"099d539b-5f0e-41e2-b344-d7c10c52cf16\") " Dec 10 12:37:48 crc kubenswrapper[4689]: I1210 12:37:48.481105 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/099d539b-5f0e-41e2-b344-d7c10c52cf16-config-data\") pod \"099d539b-5f0e-41e2-b344-d7c10c52cf16\" (UID: \"099d539b-5f0e-41e2-b344-d7c10c52cf16\") " Dec 10 12:37:48 crc kubenswrapper[4689]: I1210 12:37:48.485535 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/099d539b-5f0e-41e2-b344-d7c10c52cf16-scripts" (OuterVolumeSpecName: "scripts") pod "099d539b-5f0e-41e2-b344-d7c10c52cf16" (UID: "099d539b-5f0e-41e2-b344-d7c10c52cf16"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 12:37:48 crc kubenswrapper[4689]: I1210 12:37:48.488147 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/099d539b-5f0e-41e2-b344-d7c10c52cf16-kube-api-access-mlzvt" (OuterVolumeSpecName: "kube-api-access-mlzvt") pod "099d539b-5f0e-41e2-b344-d7c10c52cf16" (UID: "099d539b-5f0e-41e2-b344-d7c10c52cf16"). InnerVolumeSpecName "kube-api-access-mlzvt". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 12:37:48 crc kubenswrapper[4689]: I1210 12:37:48.521267 4689 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c762e827-42f3-4b9e-ac55-e1d95d357278" path="/var/lib/kubelet/pods/c762e827-42f3-4b9e-ac55-e1d95d357278/volumes" Dec 10 12:37:48 crc kubenswrapper[4689]: I1210 12:37:48.521779 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/099d539b-5f0e-41e2-b344-d7c10c52cf16-config-data" (OuterVolumeSpecName: "config-data") pod "099d539b-5f0e-41e2-b344-d7c10c52cf16" (UID: "099d539b-5f0e-41e2-b344-d7c10c52cf16"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 12:37:48 crc kubenswrapper[4689]: I1210 12:37:48.522255 4689 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e4cdde9e-99ce-4f9d-8699-0c1a1cd84d9f" path="/var/lib/kubelet/pods/e4cdde9e-99ce-4f9d-8699-0c1a1cd84d9f/volumes" Dec 10 12:37:48 crc kubenswrapper[4689]: I1210 12:37:48.527619 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/099d539b-5f0e-41e2-b344-d7c10c52cf16-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "099d539b-5f0e-41e2-b344-d7c10c52cf16" (UID: "099d539b-5f0e-41e2-b344-d7c10c52cf16"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 12:37:48 crc kubenswrapper[4689]: I1210 12:37:48.584292 4689 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/099d539b-5f0e-41e2-b344-d7c10c52cf16-scripts\") on node \"crc\" DevicePath \"\"" Dec 10 12:37:48 crc kubenswrapper[4689]: I1210 12:37:48.584331 4689 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mlzvt\" (UniqueName: \"kubernetes.io/projected/099d539b-5f0e-41e2-b344-d7c10c52cf16-kube-api-access-mlzvt\") on node \"crc\" DevicePath \"\"" Dec 10 12:37:48 crc kubenswrapper[4689]: I1210 12:37:48.584341 4689 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/099d539b-5f0e-41e2-b344-d7c10c52cf16-config-data\") on node \"crc\" DevicePath \"\"" Dec 10 12:37:48 crc kubenswrapper[4689]: I1210 12:37:48.584350 4689 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/099d539b-5f0e-41e2-b344-d7c10c52cf16-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 10 12:37:48 crc kubenswrapper[4689]: I1210 12:37:48.741258 4689 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Dec 10 12:37:48 crc kubenswrapper[4689]: I1210 12:37:48.888663 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2qvzj\" (UniqueName: \"kubernetes.io/projected/5f2f7795-fa0a-475a-987d-75bd7b505e1a-kube-api-access-2qvzj\") pod \"5f2f7795-fa0a-475a-987d-75bd7b505e1a\" (UID: \"5f2f7795-fa0a-475a-987d-75bd7b505e1a\") " Dec 10 12:37:48 crc kubenswrapper[4689]: I1210 12:37:48.888943 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5f2f7795-fa0a-475a-987d-75bd7b505e1a-logs\") pod \"5f2f7795-fa0a-475a-987d-75bd7b505e1a\" (UID: \"5f2f7795-fa0a-475a-987d-75bd7b505e1a\") " Dec 10 12:37:48 crc kubenswrapper[4689]: I1210 12:37:48.889057 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5f2f7795-fa0a-475a-987d-75bd7b505e1a-config-data\") pod \"5f2f7795-fa0a-475a-987d-75bd7b505e1a\" (UID: \"5f2f7795-fa0a-475a-987d-75bd7b505e1a\") " Dec 10 12:37:48 crc kubenswrapper[4689]: I1210 12:37:48.889198 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5f2f7795-fa0a-475a-987d-75bd7b505e1a-combined-ca-bundle\") pod \"5f2f7795-fa0a-475a-987d-75bd7b505e1a\" (UID: \"5f2f7795-fa0a-475a-987d-75bd7b505e1a\") " Dec 10 12:37:48 crc kubenswrapper[4689]: I1210 12:37:48.889442 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5f2f7795-fa0a-475a-987d-75bd7b505e1a-logs" (OuterVolumeSpecName: "logs") pod "5f2f7795-fa0a-475a-987d-75bd7b505e1a" (UID: "5f2f7795-fa0a-475a-987d-75bd7b505e1a"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 10 12:37:48 crc kubenswrapper[4689]: I1210 12:37:48.889806 4689 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5f2f7795-fa0a-475a-987d-75bd7b505e1a-logs\") on node \"crc\" DevicePath \"\"" Dec 10 12:37:48 crc kubenswrapper[4689]: I1210 12:37:48.892885 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5f2f7795-fa0a-475a-987d-75bd7b505e1a-kube-api-access-2qvzj" (OuterVolumeSpecName: "kube-api-access-2qvzj") pod "5f2f7795-fa0a-475a-987d-75bd7b505e1a" (UID: "5f2f7795-fa0a-475a-987d-75bd7b505e1a"). InnerVolumeSpecName "kube-api-access-2qvzj". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 12:37:48 crc kubenswrapper[4689]: I1210 12:37:48.914684 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5f2f7795-fa0a-475a-987d-75bd7b505e1a-config-data" (OuterVolumeSpecName: "config-data") pod "5f2f7795-fa0a-475a-987d-75bd7b505e1a" (UID: "5f2f7795-fa0a-475a-987d-75bd7b505e1a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 12:37:48 crc kubenswrapper[4689]: I1210 12:37:48.915121 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5f2f7795-fa0a-475a-987d-75bd7b505e1a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5f2f7795-fa0a-475a-987d-75bd7b505e1a" (UID: "5f2f7795-fa0a-475a-987d-75bd7b505e1a"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 12:37:48 crc kubenswrapper[4689]: I1210 12:37:48.945415 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d8812cbb-3130-4658-abfb-69c933628fcb","Type":"ContainerStarted","Data":"d58bf762feae387aefdf17ae36d97f7b9a4374e348ce0f4382b7860c38b600b8"} Dec 10 12:37:48 crc kubenswrapper[4689]: I1210 12:37:48.949457 4689 generic.go:334] "Generic (PLEG): container finished" podID="5f2f7795-fa0a-475a-987d-75bd7b505e1a" containerID="cf2db1fd406acedad348a136d8a1eb4ce593fab381e6c4fe5fa04a044256c4d0" exitCode=0 Dec 10 12:37:48 crc kubenswrapper[4689]: I1210 12:37:48.949533 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"5f2f7795-fa0a-475a-987d-75bd7b505e1a","Type":"ContainerDied","Data":"cf2db1fd406acedad348a136d8a1eb4ce593fab381e6c4fe5fa04a044256c4d0"} Dec 10 12:37:48 crc kubenswrapper[4689]: I1210 12:37:48.949561 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"5f2f7795-fa0a-475a-987d-75bd7b505e1a","Type":"ContainerDied","Data":"6fd8adac71de778762d0502e9ebf0e0407a759fc028ee13baa8696f4f0d45ce8"} Dec 10 12:37:48 crc kubenswrapper[4689]: I1210 12:37:48.949577 4689 scope.go:117] "RemoveContainer" containerID="cf2db1fd406acedad348a136d8a1eb4ce593fab381e6c4fe5fa04a044256c4d0" Dec 10 12:37:48 crc kubenswrapper[4689]: I1210 12:37:48.949737 4689 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Dec 10 12:37:48 crc kubenswrapper[4689]: I1210 12:37:48.962497 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-njfxw" event={"ID":"099d539b-5f0e-41e2-b344-d7c10c52cf16","Type":"ContainerDied","Data":"640cc5f8a9047f9fcd7c599c9fa2f7810406e5cc9ec5403498c77246d2a769e2"} Dec 10 12:37:48 crc kubenswrapper[4689]: I1210 12:37:48.962543 4689 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="640cc5f8a9047f9fcd7c599c9fa2f7810406e5cc9ec5403498c77246d2a769e2" Dec 10 12:37:48 crc kubenswrapper[4689]: I1210 12:37:48.962538 4689 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-njfxw" Dec 10 12:37:48 crc kubenswrapper[4689]: I1210 12:37:48.976452 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"434ae256-23eb-408f-8b81-1956600d3c2f","Type":"ContainerStarted","Data":"c3accbd3e51c14c887c513588c356294ea81bde9df9cc45cfaec617749468b90"} Dec 10 12:37:48 crc kubenswrapper[4689]: I1210 12:37:48.976500 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"434ae256-23eb-408f-8b81-1956600d3c2f","Type":"ContainerStarted","Data":"a39e8c2e4650c114c1a2c1e50a52b07e36707f907cd82d823f5995d50bc8c733"} Dec 10 12:37:48 crc kubenswrapper[4689]: I1210 12:37:48.991686 4689 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5f2f7795-fa0a-475a-987d-75bd7b505e1a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 10 12:37:48 crc kubenswrapper[4689]: I1210 12:37:48.991720 4689 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2qvzj\" (UniqueName: \"kubernetes.io/projected/5f2f7795-fa0a-475a-987d-75bd7b505e1a-kube-api-access-2qvzj\") on node \"crc\" DevicePath \"\"" Dec 10 12:37:48 crc kubenswrapper[4689]: I1210 12:37:48.991737 4689 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5f2f7795-fa0a-475a-987d-75bd7b505e1a-config-data\") on node \"crc\" DevicePath \"\"" Dec 10 12:37:49 crc kubenswrapper[4689]: I1210 12:37:49.026204 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.026178163 podStartE2EDuration="2.026178163s" podCreationTimestamp="2025-12-10 12:37:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 12:37:48.998410997 +0000 UTC m=+1336.786492135" watchObservedRunningTime="2025-12-10 12:37:49.026178163 +0000 UTC m=+1336.814259291" Dec 10 12:37:49 crc kubenswrapper[4689]: I1210 12:37:49.110327 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-0"] Dec 10 12:37:49 crc kubenswrapper[4689]: E1210 12:37:49.110777 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5f2f7795-fa0a-475a-987d-75bd7b505e1a" containerName="nova-api-log" Dec 10 12:37:49 crc kubenswrapper[4689]: I1210 12:37:49.110791 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="5f2f7795-fa0a-475a-987d-75bd7b505e1a" containerName="nova-api-log" Dec 10 12:37:49 crc kubenswrapper[4689]: E1210 12:37:49.110812 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="099d539b-5f0e-41e2-b344-d7c10c52cf16" containerName="nova-cell1-conductor-db-sync" Dec 10 12:37:49 crc kubenswrapper[4689]: I1210 12:37:49.110820 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="099d539b-5f0e-41e2-b344-d7c10c52cf16" containerName="nova-cell1-conductor-db-sync" Dec 10 12:37:49 crc kubenswrapper[4689]: E1210 12:37:49.110847 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5f2f7795-fa0a-475a-987d-75bd7b505e1a" containerName="nova-api-api" Dec 10 12:37:49 crc kubenswrapper[4689]: I1210 12:37:49.110854 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="5f2f7795-fa0a-475a-987d-75bd7b505e1a" containerName="nova-api-api" Dec 10 12:37:49 crc kubenswrapper[4689]: I1210 12:37:49.117426 4689 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="5f2f7795-fa0a-475a-987d-75bd7b505e1a" containerName="nova-api-log" Dec 10 12:37:49 crc kubenswrapper[4689]: I1210 12:37:49.117497 4689 memory_manager.go:354] "RemoveStaleState removing state" podUID="099d539b-5f0e-41e2-b344-d7c10c52cf16" containerName="nova-cell1-conductor-db-sync" Dec 10 12:37:49 crc kubenswrapper[4689]: I1210 12:37:49.117532 4689 memory_manager.go:354] "RemoveStaleState removing state" podUID="5f2f7795-fa0a-475a-987d-75bd7b505e1a" containerName="nova-api-api" Dec 10 12:37:49 crc kubenswrapper[4689]: I1210 12:37:49.118373 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Dec 10 12:37:49 crc kubenswrapper[4689]: I1210 12:37:49.122333 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Dec 10 12:37:49 crc kubenswrapper[4689]: I1210 12:37:49.137869 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Dec 10 12:37:49 crc kubenswrapper[4689]: I1210 12:37:49.196234 4689 scope.go:117] "RemoveContainer" containerID="57373cd126f446f94568e7e4bfa44564163fa9799f7197eb1a500aee6dbe91b9" Dec 10 12:37:49 crc kubenswrapper[4689]: I1210 12:37:49.264098 4689 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Dec 10 12:37:49 crc kubenswrapper[4689]: I1210 12:37:49.290891 4689 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Dec 10 12:37:49 crc kubenswrapper[4689]: I1210 12:37:49.301231 4689 scope.go:117] "RemoveContainer" containerID="cf2db1fd406acedad348a136d8a1eb4ce593fab381e6c4fe5fa04a044256c4d0" Dec 10 12:37:49 crc kubenswrapper[4689]: I1210 12:37:49.302195 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab709567-2151-49a7-b199-aeca8ee0ae19-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"ab709567-2151-49a7-b199-aeca8ee0ae19\") " pod="openstack/nova-cell1-conductor-0" Dec 10 12:37:49 crc kubenswrapper[4689]: I1210 12:37:49.302293 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ab709567-2151-49a7-b199-aeca8ee0ae19-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"ab709567-2151-49a7-b199-aeca8ee0ae19\") " pod="openstack/nova-cell1-conductor-0" Dec 10 12:37:49 crc kubenswrapper[4689]: I1210 12:37:49.302329 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vcjbw\" (UniqueName: \"kubernetes.io/projected/ab709567-2151-49a7-b199-aeca8ee0ae19-kube-api-access-vcjbw\") pod \"nova-cell1-conductor-0\" (UID: \"ab709567-2151-49a7-b199-aeca8ee0ae19\") " pod="openstack/nova-cell1-conductor-0" Dec 10 12:37:49 crc kubenswrapper[4689]: E1210 12:37:49.306083 4689 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cf2db1fd406acedad348a136d8a1eb4ce593fab381e6c4fe5fa04a044256c4d0\": container with ID starting with cf2db1fd406acedad348a136d8a1eb4ce593fab381e6c4fe5fa04a044256c4d0 not found: ID does not exist" containerID="cf2db1fd406acedad348a136d8a1eb4ce593fab381e6c4fe5fa04a044256c4d0" Dec 10 12:37:49 crc kubenswrapper[4689]: I1210 12:37:49.306128 4689 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cf2db1fd406acedad348a136d8a1eb4ce593fab381e6c4fe5fa04a044256c4d0"} 
err="failed to get container status \"cf2db1fd406acedad348a136d8a1eb4ce593fab381e6c4fe5fa04a044256c4d0\": rpc error: code = NotFound desc = could not find container \"cf2db1fd406acedad348a136d8a1eb4ce593fab381e6c4fe5fa04a044256c4d0\": container with ID starting with cf2db1fd406acedad348a136d8a1eb4ce593fab381e6c4fe5fa04a044256c4d0 not found: ID does not exist" Dec 10 12:37:49 crc kubenswrapper[4689]: I1210 12:37:49.306153 4689 scope.go:117] "RemoveContainer" containerID="57373cd126f446f94568e7e4bfa44564163fa9799f7197eb1a500aee6dbe91b9" Dec 10 12:37:49 crc kubenswrapper[4689]: I1210 12:37:49.308629 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Dec 10 12:37:49 crc kubenswrapper[4689]: I1210 12:37:49.310190 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Dec 10 12:37:49 crc kubenswrapper[4689]: E1210 12:37:49.312113 4689 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"57373cd126f446f94568e7e4bfa44564163fa9799f7197eb1a500aee6dbe91b9\": container with ID starting with 57373cd126f446f94568e7e4bfa44564163fa9799f7197eb1a500aee6dbe91b9 not found: ID does not exist" containerID="57373cd126f446f94568e7e4bfa44564163fa9799f7197eb1a500aee6dbe91b9" Dec 10 12:37:49 crc kubenswrapper[4689]: I1210 12:37:49.312142 4689 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"57373cd126f446f94568e7e4bfa44564163fa9799f7197eb1a500aee6dbe91b9"} err="failed to get container status \"57373cd126f446f94568e7e4bfa44564163fa9799f7197eb1a500aee6dbe91b9\": rpc error: code = NotFound desc = could not find container \"57373cd126f446f94568e7e4bfa44564163fa9799f7197eb1a500aee6dbe91b9\": container with ID starting with 57373cd126f446f94568e7e4bfa44564163fa9799f7197eb1a500aee6dbe91b9 not found: ID does not exist" Dec 10 12:37:49 crc kubenswrapper[4689]: I1210 12:37:49.313919 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Dec 10 12:37:49 crc kubenswrapper[4689]: I1210 12:37:49.339432 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Dec 10 12:37:49 crc kubenswrapper[4689]: I1210 12:37:49.404616 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ef390315-58be-4db5-9456-eb2232ea12e4-logs\") pod \"nova-api-0\" (UID: \"ef390315-58be-4db5-9456-eb2232ea12e4\") " pod="openstack/nova-api-0" Dec 10 12:37:49 crc kubenswrapper[4689]: I1210 12:37:49.404669 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vcjbw\" (UniqueName: \"kubernetes.io/projected/ab709567-2151-49a7-b199-aeca8ee0ae19-kube-api-access-vcjbw\") pod \"nova-cell1-conductor-0\" (UID: \"ab709567-2151-49a7-b199-aeca8ee0ae19\") " pod="openstack/nova-cell1-conductor-0" Dec 10 12:37:49 crc kubenswrapper[4689]: I1210 12:37:49.404726 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ef390315-58be-4db5-9456-eb2232ea12e4-config-data\") pod \"nova-api-0\" (UID: \"ef390315-58be-4db5-9456-eb2232ea12e4\") " pod="openstack/nova-api-0" Dec 10 12:37:49 crc kubenswrapper[4689]: I1210 12:37:49.404754 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/ef390315-58be-4db5-9456-eb2232ea12e4-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"ef390315-58be-4db5-9456-eb2232ea12e4\") " pod="openstack/nova-api-0" Dec 10 12:37:49 crc kubenswrapper[4689]: I1210 12:37:49.404824 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vd9xp\" (UniqueName: \"kubernetes.io/projected/ef390315-58be-4db5-9456-eb2232ea12e4-kube-api-access-vd9xp\") pod \"nova-api-0\" (UID: \"ef390315-58be-4db5-9456-eb2232ea12e4\") " pod="openstack/nova-api-0" Dec 10 12:37:49 crc kubenswrapper[4689]: I1210 12:37:49.404843 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab709567-2151-49a7-b199-aeca8ee0ae19-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"ab709567-2151-49a7-b199-aeca8ee0ae19\") " pod="openstack/nova-cell1-conductor-0" Dec 10 12:37:49 crc kubenswrapper[4689]: I1210 12:37:49.404938 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ab709567-2151-49a7-b199-aeca8ee0ae19-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"ab709567-2151-49a7-b199-aeca8ee0ae19\") " pod="openstack/nova-cell1-conductor-0" Dec 10 12:37:49 crc kubenswrapper[4689]: I1210 12:37:49.410550 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ab709567-2151-49a7-b199-aeca8ee0ae19-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"ab709567-2151-49a7-b199-aeca8ee0ae19\") " pod="openstack/nova-cell1-conductor-0" Dec 10 12:37:49 crc kubenswrapper[4689]: I1210 12:37:49.416677 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab709567-2151-49a7-b199-aeca8ee0ae19-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"ab709567-2151-49a7-b199-aeca8ee0ae19\") " pod="openstack/nova-cell1-conductor-0" Dec 10 12:37:49 crc kubenswrapper[4689]: I1210 12:37:49.460808 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vcjbw\" (UniqueName: \"kubernetes.io/projected/ab709567-2151-49a7-b199-aeca8ee0ae19-kube-api-access-vcjbw\") pod \"nova-cell1-conductor-0\" (UID: \"ab709567-2151-49a7-b199-aeca8ee0ae19\") " pod="openstack/nova-cell1-conductor-0" Dec 10 12:37:49 crc kubenswrapper[4689]: I1210 12:37:49.506778 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ef390315-58be-4db5-9456-eb2232ea12e4-logs\") pod \"nova-api-0\" (UID: \"ef390315-58be-4db5-9456-eb2232ea12e4\") " pod="openstack/nova-api-0" Dec 10 12:37:49 crc kubenswrapper[4689]: I1210 12:37:49.506892 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ef390315-58be-4db5-9456-eb2232ea12e4-config-data\") pod \"nova-api-0\" (UID: \"ef390315-58be-4db5-9456-eb2232ea12e4\") " pod="openstack/nova-api-0" Dec 10 12:37:49 crc kubenswrapper[4689]: I1210 12:37:49.506927 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ef390315-58be-4db5-9456-eb2232ea12e4-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"ef390315-58be-4db5-9456-eb2232ea12e4\") " pod="openstack/nova-api-0" Dec 10 12:37:49 crc kubenswrapper[4689]: I1210 12:37:49.507031 4689 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vd9xp\" (UniqueName: \"kubernetes.io/projected/ef390315-58be-4db5-9456-eb2232ea12e4-kube-api-access-vd9xp\") pod \"nova-api-0\" (UID: \"ef390315-58be-4db5-9456-eb2232ea12e4\") " pod="openstack/nova-api-0" Dec 10 12:37:49 crc kubenswrapper[4689]: I1210 12:37:49.508027 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ef390315-58be-4db5-9456-eb2232ea12e4-logs\") pod \"nova-api-0\" (UID: \"ef390315-58be-4db5-9456-eb2232ea12e4\") " pod="openstack/nova-api-0" Dec 10 12:37:49 crc kubenswrapper[4689]: I1210 12:37:49.512658 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ef390315-58be-4db5-9456-eb2232ea12e4-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"ef390315-58be-4db5-9456-eb2232ea12e4\") " pod="openstack/nova-api-0" Dec 10 12:37:49 crc kubenswrapper[4689]: I1210 12:37:49.523466 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ef390315-58be-4db5-9456-eb2232ea12e4-config-data\") pod \"nova-api-0\" (UID: \"ef390315-58be-4db5-9456-eb2232ea12e4\") " pod="openstack/nova-api-0" Dec 10 12:37:49 crc kubenswrapper[4689]: I1210 12:37:49.528077 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vd9xp\" (UniqueName: \"kubernetes.io/projected/ef390315-58be-4db5-9456-eb2232ea12e4-kube-api-access-vd9xp\") pod \"nova-api-0\" (UID: \"ef390315-58be-4db5-9456-eb2232ea12e4\") " pod="openstack/nova-api-0" Dec 10 12:37:49 crc kubenswrapper[4689]: I1210 12:37:49.538813 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Dec 10 12:37:49 crc kubenswrapper[4689]: I1210 12:37:49.632340 4689 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Dec 10 12:37:49 crc kubenswrapper[4689]: I1210 12:37:49.989310 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d8812cbb-3130-4658-abfb-69c933628fcb","Type":"ContainerStarted","Data":"5802c8c89ae5de342645beff196a48f8f345075b7eca9db4cfbd8925132aac52"} Dec 10 12:37:50 crc kubenswrapper[4689]: I1210 12:37:50.021032 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Dec 10 12:37:50 crc kubenswrapper[4689]: I1210 12:37:50.180554 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Dec 10 12:37:50 crc kubenswrapper[4689]: W1210 12:37:50.187252 4689 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podef390315_58be_4db5_9456_eb2232ea12e4.slice/crio-c96379ab7bda715b2f64b49ab456679dccc74f7c5691ceefd3d6f7730ef5d6d2 WatchSource:0}: Error finding container c96379ab7bda715b2f64b49ab456679dccc74f7c5691ceefd3d6f7730ef5d6d2: Status 404 returned error can't find the container with id c96379ab7bda715b2f64b49ab456679dccc74f7c5691ceefd3d6f7730ef5d6d2 Dec 10 12:37:50 crc kubenswrapper[4689]: I1210 12:37:50.281635 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Dec 10 12:37:50 crc kubenswrapper[4689]: I1210 12:37:50.281688 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Dec 10 12:37:50 crc kubenswrapper[4689]: I1210 12:37:50.366407 4689 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 10 12:37:50 crc kubenswrapper[4689]: I1210 12:37:50.508693 4689 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5f2f7795-fa0a-475a-987d-75bd7b505e1a" path="/var/lib/kubelet/pods/5f2f7795-fa0a-475a-987d-75bd7b505e1a/volumes" Dec 10 12:37:51 crc kubenswrapper[4689]: I1210 12:37:51.001703 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"ab709567-2151-49a7-b199-aeca8ee0ae19","Type":"ContainerStarted","Data":"ae4867538f9e988f1b116355a48581fee166183118427612d723bc59839b8891"} Dec 10 12:37:51 crc kubenswrapper[4689]: I1210 12:37:51.001752 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"ab709567-2151-49a7-b199-aeca8ee0ae19","Type":"ContainerStarted","Data":"f5444117d0f605485022084813b87186b1aaad3d05085693d56ece8354fc8f3b"} Dec 10 12:37:51 crc kubenswrapper[4689]: I1210 12:37:51.001767 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-conductor-0" Dec 10 12:37:51 crc kubenswrapper[4689]: I1210 12:37:51.003779 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d8812cbb-3130-4658-abfb-69c933628fcb","Type":"ContainerStarted","Data":"67ada391e320f77e0349bd3941ede24f59de0bae6d91c721a62f74c74979ee09"} Dec 10 12:37:51 crc kubenswrapper[4689]: I1210 12:37:51.009552 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"ef390315-58be-4db5-9456-eb2232ea12e4","Type":"ContainerStarted","Data":"230b36c100e96fb5b46bd18433c8cb6330ed4fe19e753b08c14dac2862f493cb"} Dec 10 12:37:51 crc kubenswrapper[4689]: I1210 12:37:51.009601 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" 
event={"ID":"ef390315-58be-4db5-9456-eb2232ea12e4","Type":"ContainerStarted","Data":"154e9b9e5afe0a3f004c4d0a3a9993661d60dfe1395cddaf672b92044f9cfd02"} Dec 10 12:37:51 crc kubenswrapper[4689]: I1210 12:37:51.009613 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"ef390315-58be-4db5-9456-eb2232ea12e4","Type":"ContainerStarted","Data":"c96379ab7bda715b2f64b49ab456679dccc74f7c5691ceefd3d6f7730ef5d6d2"} Dec 10 12:37:51 crc kubenswrapper[4689]: I1210 12:37:51.025439 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-0" podStartSLOduration=2.025415889 podStartE2EDuration="2.025415889s" podCreationTimestamp="2025-12-10 12:37:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 12:37:51.014929399 +0000 UTC m=+1338.803010537" watchObservedRunningTime="2025-12-10 12:37:51.025415889 +0000 UTC m=+1338.813497047" Dec 10 12:37:51 crc kubenswrapper[4689]: I1210 12:37:51.045950 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.045929065 podStartE2EDuration="2.045929065s" podCreationTimestamp="2025-12-10 12:37:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 12:37:51.045559216 +0000 UTC m=+1338.833640374" watchObservedRunningTime="2025-12-10 12:37:51.045929065 +0000 UTC m=+1338.834010203" Dec 10 12:37:52 crc kubenswrapper[4689]: I1210 12:37:52.422729 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Dec 10 12:37:53 crc kubenswrapper[4689]: I1210 12:37:53.071501 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d8812cbb-3130-4658-abfb-69c933628fcb","Type":"ContainerStarted","Data":"e695add69db303f7430b2b76ab9ac30c5c9f71348ccca74921fe5eea92bfc1d8"} Dec 10 12:37:55 crc kubenswrapper[4689]: I1210 12:37:55.097433 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d8812cbb-3130-4658-abfb-69c933628fcb","Type":"ContainerStarted","Data":"a254cdd06bba57100734dd081e9abce1acfe304c2f19bda7820c7f9aaa194232"} Dec 10 12:37:55 crc kubenswrapper[4689]: I1210 12:37:55.099352 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Dec 10 12:37:55 crc kubenswrapper[4689]: I1210 12:37:55.128766 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.250567257 podStartE2EDuration="8.128744169s" podCreationTimestamp="2025-12-10 12:37:47 +0000 UTC" firstStartedPulling="2025-12-10 12:37:47.973549326 +0000 UTC m=+1335.761630464" lastFinishedPulling="2025-12-10 12:37:53.851726218 +0000 UTC m=+1341.639807376" observedRunningTime="2025-12-10 12:37:55.123762347 +0000 UTC m=+1342.911843485" watchObservedRunningTime="2025-12-10 12:37:55.128744169 +0000 UTC m=+1342.916825307" Dec 10 12:37:55 crc kubenswrapper[4689]: I1210 12:37:55.281038 4689 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Dec 10 12:37:55 crc kubenswrapper[4689]: I1210 12:37:55.281101 4689 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Dec 10 12:37:56 crc kubenswrapper[4689]: I1210 12:37:56.294134 4689 prober.go:107] "Probe failed" 
probeType="Startup" pod="openstack/nova-metadata-0" podUID="ff93cb1f-aee9-49d9-b6fd-ca96103ff881" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.197:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Dec 10 12:37:56 crc kubenswrapper[4689]: I1210 12:37:56.294183 4689 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="ff93cb1f-aee9-49d9-b6fd-ca96103ff881" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.197:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Dec 10 12:37:57 crc kubenswrapper[4689]: I1210 12:37:57.423353 4689 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Dec 10 12:37:57 crc kubenswrapper[4689]: I1210 12:37:57.453510 4689 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Dec 10 12:37:58 crc kubenswrapper[4689]: I1210 12:37:58.165892 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Dec 10 12:37:59 crc kubenswrapper[4689]: I1210 12:37:59.573140 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-conductor-0" Dec 10 12:37:59 crc kubenswrapper[4689]: I1210 12:37:59.633966 4689 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Dec 10 12:37:59 crc kubenswrapper[4689]: I1210 12:37:59.634027 4689 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Dec 10 12:38:00 crc kubenswrapper[4689]: I1210 12:38:00.716384 4689 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="ef390315-58be-4db5-9456-eb2232ea12e4" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.201:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 10 12:38:00 crc kubenswrapper[4689]: I1210 12:38:00.716335 4689 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="ef390315-58be-4db5-9456-eb2232ea12e4" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.201:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 10 12:38:05 crc kubenswrapper[4689]: I1210 12:38:05.294337 4689 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Dec 10 12:38:05 crc kubenswrapper[4689]: I1210 12:38:05.296063 4689 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Dec 10 12:38:05 crc kubenswrapper[4689]: I1210 12:38:05.303294 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Dec 10 12:38:06 crc kubenswrapper[4689]: I1210 12:38:06.230657 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Dec 10 12:38:07 crc kubenswrapper[4689]: I1210 12:38:07.166929 4689 patch_prober.go:28] interesting pod/machine-config-daemon-db6zk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 10 12:38:07 crc kubenswrapper[4689]: I1210 12:38:07.167227 4689 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-db6zk" podUID="a41ebdcd-910f-4669-992d-296e1a92162b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 10 12:38:07 crc kubenswrapper[4689]: I1210 12:38:07.167266 4689 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-db6zk" Dec 10 12:38:07 crc kubenswrapper[4689]: I1210 12:38:07.167850 4689 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"c7f561a90578da7003727a817e3d47870ce7681703176f0fc2d8b1240ef4ff23"} pod="openshift-machine-config-operator/machine-config-daemon-db6zk" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 10 12:38:07 crc kubenswrapper[4689]: I1210 12:38:07.167900 4689 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-db6zk" podUID="a41ebdcd-910f-4669-992d-296e1a92162b" containerName="machine-config-daemon" containerID="cri-o://c7f561a90578da7003727a817e3d47870ce7681703176f0fc2d8b1240ef4ff23" gracePeriod=600 Dec 10 12:38:07 crc kubenswrapper[4689]: E1210 12:38:07.418983 4689 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda41ebdcd_910f_4669_992d_296e1a92162b.slice/crio-conmon-c7f561a90578da7003727a817e3d47870ce7681703176f0fc2d8b1240ef4ff23.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda41ebdcd_910f_4669_992d_296e1a92162b.slice/crio-c7f561a90578da7003727a817e3d47870ce7681703176f0fc2d8b1240ef4ff23.scope\": RecentStats: unable to find data in memory cache]" Dec 10 12:38:08 crc kubenswrapper[4689]: I1210 12:38:08.249512 4689 generic.go:334] "Generic (PLEG): container finished" podID="a41ebdcd-910f-4669-992d-296e1a92162b" containerID="c7f561a90578da7003727a817e3d47870ce7681703176f0fc2d8b1240ef4ff23" exitCode=0 Dec 10 12:38:08 crc kubenswrapper[4689]: I1210 12:38:08.249635 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-db6zk" event={"ID":"a41ebdcd-910f-4669-992d-296e1a92162b","Type":"ContainerDied","Data":"c7f561a90578da7003727a817e3d47870ce7681703176f0fc2d8b1240ef4ff23"} Dec 10 12:38:08 crc kubenswrapper[4689]: I1210 12:38:08.250218 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-db6zk" event={"ID":"a41ebdcd-910f-4669-992d-296e1a92162b","Type":"ContainerStarted","Data":"12c9249e05008e8640dad38e14c40fbe87bf771f5c2f12128bca16301c4173e9"} Dec 10 12:38:08 crc kubenswrapper[4689]: I1210 12:38:08.250250 4689 scope.go:117] "RemoveContainer" containerID="bf7bbf9875a5b9cc37e5a62ace29b6dd6e4de888067fb82c65a8956ea2149bad" Dec 10 12:38:09 crc kubenswrapper[4689]: I1210 12:38:09.638272 4689 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Dec 10 12:38:09 crc kubenswrapper[4689]: I1210 12:38:09.638704 4689 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Dec 10 12:38:09 crc kubenswrapper[4689]: I1210 12:38:09.639138 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack/nova-api-0" Dec 10 12:38:09 crc kubenswrapper[4689]: I1210 12:38:09.639151 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Dec 10 12:38:09 crc kubenswrapper[4689]: I1210 12:38:09.645434 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Dec 10 12:38:09 crc kubenswrapper[4689]: I1210 12:38:09.646484 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Dec 10 12:38:09 crc kubenswrapper[4689]: I1210 12:38:09.861170 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-59cf4bdb65-m75sl"] Dec 10 12:38:09 crc kubenswrapper[4689]: I1210 12:38:09.872360 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-59cf4bdb65-m75sl" Dec 10 12:38:09 crc kubenswrapper[4689]: I1210 12:38:09.889110 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-59cf4bdb65-m75sl"] Dec 10 12:38:09 crc kubenswrapper[4689]: I1210 12:38:09.957595 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e1683950-c036-44c9-9ad3-5e91fee6c3ba-ovsdbserver-nb\") pod \"dnsmasq-dns-59cf4bdb65-m75sl\" (UID: \"e1683950-c036-44c9-9ad3-5e91fee6c3ba\") " pod="openstack/dnsmasq-dns-59cf4bdb65-m75sl" Dec 10 12:38:09 crc kubenswrapper[4689]: I1210 12:38:09.957637 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e1683950-c036-44c9-9ad3-5e91fee6c3ba-dns-swift-storage-0\") pod \"dnsmasq-dns-59cf4bdb65-m75sl\" (UID: \"e1683950-c036-44c9-9ad3-5e91fee6c3ba\") " pod="openstack/dnsmasq-dns-59cf4bdb65-m75sl" Dec 10 12:38:09 crc kubenswrapper[4689]: I1210 12:38:09.957783 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e1683950-c036-44c9-9ad3-5e91fee6c3ba-config\") pod \"dnsmasq-dns-59cf4bdb65-m75sl\" (UID: \"e1683950-c036-44c9-9ad3-5e91fee6c3ba\") " pod="openstack/dnsmasq-dns-59cf4bdb65-m75sl" Dec 10 12:38:09 crc kubenswrapper[4689]: I1210 12:38:09.957814 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e1683950-c036-44c9-9ad3-5e91fee6c3ba-dns-svc\") pod \"dnsmasq-dns-59cf4bdb65-m75sl\" (UID: \"e1683950-c036-44c9-9ad3-5e91fee6c3ba\") " pod="openstack/dnsmasq-dns-59cf4bdb65-m75sl" Dec 10 12:38:09 crc kubenswrapper[4689]: I1210 12:38:09.957856 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tttm7\" (UniqueName: \"kubernetes.io/projected/e1683950-c036-44c9-9ad3-5e91fee6c3ba-kube-api-access-tttm7\") pod \"dnsmasq-dns-59cf4bdb65-m75sl\" (UID: \"e1683950-c036-44c9-9ad3-5e91fee6c3ba\") " pod="openstack/dnsmasq-dns-59cf4bdb65-m75sl" Dec 10 12:38:09 crc kubenswrapper[4689]: I1210 12:38:09.957896 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e1683950-c036-44c9-9ad3-5e91fee6c3ba-ovsdbserver-sb\") pod \"dnsmasq-dns-59cf4bdb65-m75sl\" (UID: \"e1683950-c036-44c9-9ad3-5e91fee6c3ba\") " pod="openstack/dnsmasq-dns-59cf4bdb65-m75sl" Dec 10 12:38:10 crc kubenswrapper[4689]: I1210 12:38:10.061311 4689 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e1683950-c036-44c9-9ad3-5e91fee6c3ba-ovsdbserver-nb\") pod \"dnsmasq-dns-59cf4bdb65-m75sl\" (UID: \"e1683950-c036-44c9-9ad3-5e91fee6c3ba\") " pod="openstack/dnsmasq-dns-59cf4bdb65-m75sl" Dec 10 12:38:10 crc kubenswrapper[4689]: I1210 12:38:10.061360 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e1683950-c036-44c9-9ad3-5e91fee6c3ba-dns-swift-storage-0\") pod \"dnsmasq-dns-59cf4bdb65-m75sl\" (UID: \"e1683950-c036-44c9-9ad3-5e91fee6c3ba\") " pod="openstack/dnsmasq-dns-59cf4bdb65-m75sl" Dec 10 12:38:10 crc kubenswrapper[4689]: I1210 12:38:10.061470 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e1683950-c036-44c9-9ad3-5e91fee6c3ba-config\") pod \"dnsmasq-dns-59cf4bdb65-m75sl\" (UID: \"e1683950-c036-44c9-9ad3-5e91fee6c3ba\") " pod="openstack/dnsmasq-dns-59cf4bdb65-m75sl" Dec 10 12:38:10 crc kubenswrapper[4689]: I1210 12:38:10.061494 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e1683950-c036-44c9-9ad3-5e91fee6c3ba-dns-svc\") pod \"dnsmasq-dns-59cf4bdb65-m75sl\" (UID: \"e1683950-c036-44c9-9ad3-5e91fee6c3ba\") " pod="openstack/dnsmasq-dns-59cf4bdb65-m75sl" Dec 10 12:38:10 crc kubenswrapper[4689]: I1210 12:38:10.061555 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tttm7\" (UniqueName: \"kubernetes.io/projected/e1683950-c036-44c9-9ad3-5e91fee6c3ba-kube-api-access-tttm7\") pod \"dnsmasq-dns-59cf4bdb65-m75sl\" (UID: \"e1683950-c036-44c9-9ad3-5e91fee6c3ba\") " pod="openstack/dnsmasq-dns-59cf4bdb65-m75sl" Dec 10 12:38:10 crc kubenswrapper[4689]: I1210 12:38:10.061600 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e1683950-c036-44c9-9ad3-5e91fee6c3ba-ovsdbserver-sb\") pod \"dnsmasq-dns-59cf4bdb65-m75sl\" (UID: \"e1683950-c036-44c9-9ad3-5e91fee6c3ba\") " pod="openstack/dnsmasq-dns-59cf4bdb65-m75sl" Dec 10 12:38:10 crc kubenswrapper[4689]: I1210 12:38:10.065025 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e1683950-c036-44c9-9ad3-5e91fee6c3ba-dns-svc\") pod \"dnsmasq-dns-59cf4bdb65-m75sl\" (UID: \"e1683950-c036-44c9-9ad3-5e91fee6c3ba\") " pod="openstack/dnsmasq-dns-59cf4bdb65-m75sl" Dec 10 12:38:10 crc kubenswrapper[4689]: I1210 12:38:10.065026 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e1683950-c036-44c9-9ad3-5e91fee6c3ba-config\") pod \"dnsmasq-dns-59cf4bdb65-m75sl\" (UID: \"e1683950-c036-44c9-9ad3-5e91fee6c3ba\") " pod="openstack/dnsmasq-dns-59cf4bdb65-m75sl" Dec 10 12:38:10 crc kubenswrapper[4689]: I1210 12:38:10.065686 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e1683950-c036-44c9-9ad3-5e91fee6c3ba-ovsdbserver-nb\") pod \"dnsmasq-dns-59cf4bdb65-m75sl\" (UID: \"e1683950-c036-44c9-9ad3-5e91fee6c3ba\") " pod="openstack/dnsmasq-dns-59cf4bdb65-m75sl" Dec 10 12:38:10 crc kubenswrapper[4689]: I1210 12:38:10.065839 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: 
\"kubernetes.io/configmap/e1683950-c036-44c9-9ad3-5e91fee6c3ba-dns-swift-storage-0\") pod \"dnsmasq-dns-59cf4bdb65-m75sl\" (UID: \"e1683950-c036-44c9-9ad3-5e91fee6c3ba\") " pod="openstack/dnsmasq-dns-59cf4bdb65-m75sl" Dec 10 12:38:10 crc kubenswrapper[4689]: I1210 12:38:10.066094 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e1683950-c036-44c9-9ad3-5e91fee6c3ba-ovsdbserver-sb\") pod \"dnsmasq-dns-59cf4bdb65-m75sl\" (UID: \"e1683950-c036-44c9-9ad3-5e91fee6c3ba\") " pod="openstack/dnsmasq-dns-59cf4bdb65-m75sl" Dec 10 12:38:10 crc kubenswrapper[4689]: I1210 12:38:10.107750 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tttm7\" (UniqueName: \"kubernetes.io/projected/e1683950-c036-44c9-9ad3-5e91fee6c3ba-kube-api-access-tttm7\") pod \"dnsmasq-dns-59cf4bdb65-m75sl\" (UID: \"e1683950-c036-44c9-9ad3-5e91fee6c3ba\") " pod="openstack/dnsmasq-dns-59cf4bdb65-m75sl" Dec 10 12:38:10 crc kubenswrapper[4689]: I1210 12:38:10.207421 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-59cf4bdb65-m75sl" Dec 10 12:38:10 crc kubenswrapper[4689]: I1210 12:38:10.274795 4689 generic.go:334] "Generic (PLEG): container finished" podID="9fd9caf2-87f6-4732-aa65-32d2515071cc" containerID="aa7c015778ba76a63c34de3212fc4df0c0bb586b4ba0f8643d420f70cfed0594" exitCode=137 Dec 10 12:38:10 crc kubenswrapper[4689]: I1210 12:38:10.274892 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"9fd9caf2-87f6-4732-aa65-32d2515071cc","Type":"ContainerDied","Data":"aa7c015778ba76a63c34de3212fc4df0c0bb586b4ba0f8643d420f70cfed0594"} Dec 10 12:38:10 crc kubenswrapper[4689]: I1210 12:38:10.749227 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-59cf4bdb65-m75sl"] Dec 10 12:38:10 crc kubenswrapper[4689]: I1210 12:38:10.787728 4689 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Dec 10 12:38:10 crc kubenswrapper[4689]: I1210 12:38:10.875388 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9fd9caf2-87f6-4732-aa65-32d2515071cc-combined-ca-bundle\") pod \"9fd9caf2-87f6-4732-aa65-32d2515071cc\" (UID: \"9fd9caf2-87f6-4732-aa65-32d2515071cc\") " Dec 10 12:38:10 crc kubenswrapper[4689]: I1210 12:38:10.875575 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9fd9caf2-87f6-4732-aa65-32d2515071cc-config-data\") pod \"9fd9caf2-87f6-4732-aa65-32d2515071cc\" (UID: \"9fd9caf2-87f6-4732-aa65-32d2515071cc\") " Dec 10 12:38:10 crc kubenswrapper[4689]: I1210 12:38:10.875692 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nxd4k\" (UniqueName: \"kubernetes.io/projected/9fd9caf2-87f6-4732-aa65-32d2515071cc-kube-api-access-nxd4k\") pod \"9fd9caf2-87f6-4732-aa65-32d2515071cc\" (UID: \"9fd9caf2-87f6-4732-aa65-32d2515071cc\") " Dec 10 12:38:10 crc kubenswrapper[4689]: I1210 12:38:10.884346 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9fd9caf2-87f6-4732-aa65-32d2515071cc-kube-api-access-nxd4k" (OuterVolumeSpecName: "kube-api-access-nxd4k") pod "9fd9caf2-87f6-4732-aa65-32d2515071cc" (UID: "9fd9caf2-87f6-4732-aa65-32d2515071cc"). 
InnerVolumeSpecName "kube-api-access-nxd4k". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 12:38:10 crc kubenswrapper[4689]: I1210 12:38:10.916707 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9fd9caf2-87f6-4732-aa65-32d2515071cc-config-data" (OuterVolumeSpecName: "config-data") pod "9fd9caf2-87f6-4732-aa65-32d2515071cc" (UID: "9fd9caf2-87f6-4732-aa65-32d2515071cc"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 12:38:10 crc kubenswrapper[4689]: I1210 12:38:10.923226 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9fd9caf2-87f6-4732-aa65-32d2515071cc-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9fd9caf2-87f6-4732-aa65-32d2515071cc" (UID: "9fd9caf2-87f6-4732-aa65-32d2515071cc"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 12:38:10 crc kubenswrapper[4689]: I1210 12:38:10.978184 4689 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9fd9caf2-87f6-4732-aa65-32d2515071cc-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 10 12:38:10 crc kubenswrapper[4689]: I1210 12:38:10.978218 4689 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9fd9caf2-87f6-4732-aa65-32d2515071cc-config-data\") on node \"crc\" DevicePath \"\"" Dec 10 12:38:10 crc kubenswrapper[4689]: I1210 12:38:10.978228 4689 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nxd4k\" (UniqueName: \"kubernetes.io/projected/9fd9caf2-87f6-4732-aa65-32d2515071cc-kube-api-access-nxd4k\") on node \"crc\" DevicePath \"\"" Dec 10 12:38:11 crc kubenswrapper[4689]: I1210 12:38:11.299251 4689 generic.go:334] "Generic (PLEG): container finished" podID="e1683950-c036-44c9-9ad3-5e91fee6c3ba" containerID="64527e4bfd50289fe923b93807f1970ddbcafed5b39e88bc9225a0b61f8ffd92" exitCode=0 Dec 10 12:38:11 crc kubenswrapper[4689]: I1210 12:38:11.299478 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-59cf4bdb65-m75sl" event={"ID":"e1683950-c036-44c9-9ad3-5e91fee6c3ba","Type":"ContainerDied","Data":"64527e4bfd50289fe923b93807f1970ddbcafed5b39e88bc9225a0b61f8ffd92"} Dec 10 12:38:11 crc kubenswrapper[4689]: I1210 12:38:11.299529 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-59cf4bdb65-m75sl" event={"ID":"e1683950-c036-44c9-9ad3-5e91fee6c3ba","Type":"ContainerStarted","Data":"e79a16906fa9e002142f60e9a163839a5ae9009baa977ae5b311be4da16fbae5"} Dec 10 12:38:11 crc kubenswrapper[4689]: I1210 12:38:11.302138 4689 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Dec 10 12:38:11 crc kubenswrapper[4689]: I1210 12:38:11.302209 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"9fd9caf2-87f6-4732-aa65-32d2515071cc","Type":"ContainerDied","Data":"bb265df8fa95c90ee36291e3f6e42584ed024d1b271726ddf3afc2879d78468d"} Dec 10 12:38:11 crc kubenswrapper[4689]: I1210 12:38:11.302276 4689 scope.go:117] "RemoveContainer" containerID="aa7c015778ba76a63c34de3212fc4df0c0bb586b4ba0f8643d420f70cfed0594" Dec 10 12:38:11 crc kubenswrapper[4689]: I1210 12:38:11.525981 4689 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Dec 10 12:38:11 crc kubenswrapper[4689]: I1210 12:38:11.536422 4689 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Dec 10 12:38:11 crc kubenswrapper[4689]: I1210 12:38:11.555668 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Dec 10 12:38:11 crc kubenswrapper[4689]: E1210 12:38:11.556092 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9fd9caf2-87f6-4732-aa65-32d2515071cc" containerName="nova-cell1-novncproxy-novncproxy" Dec 10 12:38:11 crc kubenswrapper[4689]: I1210 12:38:11.556106 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="9fd9caf2-87f6-4732-aa65-32d2515071cc" containerName="nova-cell1-novncproxy-novncproxy" Dec 10 12:38:11 crc kubenswrapper[4689]: I1210 12:38:11.556590 4689 memory_manager.go:354] "RemoveStaleState removing state" podUID="9fd9caf2-87f6-4732-aa65-32d2515071cc" containerName="nova-cell1-novncproxy-novncproxy" Dec 10 12:38:11 crc kubenswrapper[4689]: I1210 12:38:11.557256 4689 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Dec 10 12:38:11 crc kubenswrapper[4689]: I1210 12:38:11.561094 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-vencrypt" Dec 10 12:38:11 crc kubenswrapper[4689]: I1210 12:38:11.567548 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-public-svc" Dec 10 12:38:11 crc kubenswrapper[4689]: I1210 12:38:11.567723 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Dec 10 12:38:11 crc kubenswrapper[4689]: I1210 12:38:11.570492 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Dec 10 12:38:11 crc kubenswrapper[4689]: I1210 12:38:11.589848 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-86k6f\" (UniqueName: \"kubernetes.io/projected/c9d9a204-d8fb-4bb8-b864-14178e550382-kube-api-access-86k6f\") pod \"nova-cell1-novncproxy-0\" (UID: \"c9d9a204-d8fb-4bb8-b864-14178e550382\") " pod="openstack/nova-cell1-novncproxy-0" Dec 10 12:38:11 crc kubenswrapper[4689]: I1210 12:38:11.589980 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c9d9a204-d8fb-4bb8-b864-14178e550382-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"c9d9a204-d8fb-4bb8-b864-14178e550382\") " pod="openstack/nova-cell1-novncproxy-0" Dec 10 12:38:11 crc kubenswrapper[4689]: I1210 12:38:11.590033 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/c9d9a204-d8fb-4bb8-b864-14178e550382-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"c9d9a204-d8fb-4bb8-b864-14178e550382\") " pod="openstack/nova-cell1-novncproxy-0" Dec 10 12:38:11 crc kubenswrapper[4689]: I1210 12:38:11.590066 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/c9d9a204-d8fb-4bb8-b864-14178e550382-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"c9d9a204-d8fb-4bb8-b864-14178e550382\") " pod="openstack/nova-cell1-novncproxy-0" Dec 10 12:38:11 crc kubenswrapper[4689]: I1210 12:38:11.590116 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c9d9a204-d8fb-4bb8-b864-14178e550382-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"c9d9a204-d8fb-4bb8-b864-14178e550382\") " pod="openstack/nova-cell1-novncproxy-0" Dec 10 12:38:11 crc kubenswrapper[4689]: I1210 12:38:11.692154 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-86k6f\" (UniqueName: \"kubernetes.io/projected/c9d9a204-d8fb-4bb8-b864-14178e550382-kube-api-access-86k6f\") pod \"nova-cell1-novncproxy-0\" (UID: \"c9d9a204-d8fb-4bb8-b864-14178e550382\") " pod="openstack/nova-cell1-novncproxy-0" Dec 10 12:38:11 crc kubenswrapper[4689]: I1210 12:38:11.692628 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c9d9a204-d8fb-4bb8-b864-14178e550382-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"c9d9a204-d8fb-4bb8-b864-14178e550382\") " 
pod="openstack/nova-cell1-novncproxy-0" Dec 10 12:38:11 crc kubenswrapper[4689]: I1210 12:38:11.692684 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/c9d9a204-d8fb-4bb8-b864-14178e550382-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"c9d9a204-d8fb-4bb8-b864-14178e550382\") " pod="openstack/nova-cell1-novncproxy-0" Dec 10 12:38:11 crc kubenswrapper[4689]: I1210 12:38:11.692720 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/c9d9a204-d8fb-4bb8-b864-14178e550382-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"c9d9a204-d8fb-4bb8-b864-14178e550382\") " pod="openstack/nova-cell1-novncproxy-0" Dec 10 12:38:11 crc kubenswrapper[4689]: I1210 12:38:11.692773 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c9d9a204-d8fb-4bb8-b864-14178e550382-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"c9d9a204-d8fb-4bb8-b864-14178e550382\") " pod="openstack/nova-cell1-novncproxy-0" Dec 10 12:38:11 crc kubenswrapper[4689]: I1210 12:38:11.701142 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/c9d9a204-d8fb-4bb8-b864-14178e550382-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"c9d9a204-d8fb-4bb8-b864-14178e550382\") " pod="openstack/nova-cell1-novncproxy-0" Dec 10 12:38:11 crc kubenswrapper[4689]: I1210 12:38:11.703932 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c9d9a204-d8fb-4bb8-b864-14178e550382-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"c9d9a204-d8fb-4bb8-b864-14178e550382\") " pod="openstack/nova-cell1-novncproxy-0" Dec 10 12:38:11 crc kubenswrapper[4689]: I1210 12:38:11.709256 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-86k6f\" (UniqueName: \"kubernetes.io/projected/c9d9a204-d8fb-4bb8-b864-14178e550382-kube-api-access-86k6f\") pod \"nova-cell1-novncproxy-0\" (UID: \"c9d9a204-d8fb-4bb8-b864-14178e550382\") " pod="openstack/nova-cell1-novncproxy-0" Dec 10 12:38:11 crc kubenswrapper[4689]: I1210 12:38:11.715384 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c9d9a204-d8fb-4bb8-b864-14178e550382-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"c9d9a204-d8fb-4bb8-b864-14178e550382\") " pod="openstack/nova-cell1-novncproxy-0" Dec 10 12:38:11 crc kubenswrapper[4689]: I1210 12:38:11.718670 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/c9d9a204-d8fb-4bb8-b864-14178e550382-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"c9d9a204-d8fb-4bb8-b864-14178e550382\") " pod="openstack/nova-cell1-novncproxy-0" Dec 10 12:38:11 crc kubenswrapper[4689]: I1210 12:38:11.877670 4689 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Dec 10 12:38:12 crc kubenswrapper[4689]: I1210 12:38:12.197640 4689 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 10 12:38:12 crc kubenswrapper[4689]: I1210 12:38:12.198349 4689 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="d8812cbb-3130-4658-abfb-69c933628fcb" containerName="ceilometer-central-agent" containerID="cri-o://5802c8c89ae5de342645beff196a48f8f345075b7eca9db4cfbd8925132aac52" gracePeriod=30 Dec 10 12:38:12 crc kubenswrapper[4689]: I1210 12:38:12.198817 4689 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="d8812cbb-3130-4658-abfb-69c933628fcb" containerName="sg-core" containerID="cri-o://e695add69db303f7430b2b76ab9ac30c5c9f71348ccca74921fe5eea92bfc1d8" gracePeriod=30 Dec 10 12:38:12 crc kubenswrapper[4689]: I1210 12:38:12.198842 4689 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="d8812cbb-3130-4658-abfb-69c933628fcb" containerName="ceilometer-notification-agent" containerID="cri-o://67ada391e320f77e0349bd3941ede24f59de0bae6d91c721a62f74c74979ee09" gracePeriod=30 Dec 10 12:38:12 crc kubenswrapper[4689]: I1210 12:38:12.198857 4689 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="d8812cbb-3130-4658-abfb-69c933628fcb" containerName="proxy-httpd" containerID="cri-o://a254cdd06bba57100734dd081e9abce1acfe304c2f19bda7820c7f9aaa194232" gracePeriod=30 Dec 10 12:38:12 crc kubenswrapper[4689]: I1210 12:38:12.302348 4689 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="d8812cbb-3130-4658-abfb-69c933628fcb" containerName="proxy-httpd" probeResult="failure" output="Get \"http://10.217.0.198:3000/\": read tcp 10.217.0.2:49926->10.217.0.198:3000: read: connection reset by peer" Dec 10 12:38:12 crc kubenswrapper[4689]: I1210 12:38:12.328195 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-59cf4bdb65-m75sl" event={"ID":"e1683950-c036-44c9-9ad3-5e91fee6c3ba","Type":"ContainerStarted","Data":"ae30f69ec682be3324cf4fa68090ed369318dd67ba5066dfd11cdcbef50286e6"} Dec 10 12:38:12 crc kubenswrapper[4689]: I1210 12:38:12.328343 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-59cf4bdb65-m75sl" Dec 10 12:38:12 crc kubenswrapper[4689]: I1210 12:38:12.349395 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-59cf4bdb65-m75sl" podStartSLOduration=3.349373702 podStartE2EDuration="3.349373702s" podCreationTimestamp="2025-12-10 12:38:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 12:38:12.344273115 +0000 UTC m=+1360.132354253" watchObservedRunningTime="2025-12-10 12:38:12.349373702 +0000 UTC m=+1360.137454840" Dec 10 12:38:12 crc kubenswrapper[4689]: W1210 12:38:12.379473 4689 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc9d9a204_d8fb_4bb8_b864_14178e550382.slice/crio-790239bb9c702360c0ad52f33a2064783d1507cbf9e76dc5ce8089a8f5495aa8 WatchSource:0}: Error finding container 790239bb9c702360c0ad52f33a2064783d1507cbf9e76dc5ce8089a8f5495aa8: Status 404 returned error can't find the container with id 
790239bb9c702360c0ad52f33a2064783d1507cbf9e76dc5ce8089a8f5495aa8 Dec 10 12:38:12 crc kubenswrapper[4689]: I1210 12:38:12.379772 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Dec 10 12:38:12 crc kubenswrapper[4689]: I1210 12:38:12.521704 4689 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9fd9caf2-87f6-4732-aa65-32d2515071cc" path="/var/lib/kubelet/pods/9fd9caf2-87f6-4732-aa65-32d2515071cc/volumes" Dec 10 12:38:12 crc kubenswrapper[4689]: I1210 12:38:12.633669 4689 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Dec 10 12:38:12 crc kubenswrapper[4689]: I1210 12:38:12.633930 4689 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="ef390315-58be-4db5-9456-eb2232ea12e4" containerName="nova-api-log" containerID="cri-o://154e9b9e5afe0a3f004c4d0a3a9993661d60dfe1395cddaf672b92044f9cfd02" gracePeriod=30 Dec 10 12:38:12 crc kubenswrapper[4689]: I1210 12:38:12.634045 4689 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="ef390315-58be-4db5-9456-eb2232ea12e4" containerName="nova-api-api" containerID="cri-o://230b36c100e96fb5b46bd18433c8cb6330ed4fe19e753b08c14dac2862f493cb" gracePeriod=30 Dec 10 12:38:13 crc kubenswrapper[4689]: I1210 12:38:13.339483 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"c9d9a204-d8fb-4bb8-b864-14178e550382","Type":"ContainerStarted","Data":"1f1321b43d371b3d179de536efef80ada00a9c13c6224af7769b2b286d2da07f"} Dec 10 12:38:13 crc kubenswrapper[4689]: I1210 12:38:13.340257 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"c9d9a204-d8fb-4bb8-b864-14178e550382","Type":"ContainerStarted","Data":"790239bb9c702360c0ad52f33a2064783d1507cbf9e76dc5ce8089a8f5495aa8"} Dec 10 12:38:13 crc kubenswrapper[4689]: I1210 12:38:13.342635 4689 generic.go:334] "Generic (PLEG): container finished" podID="d8812cbb-3130-4658-abfb-69c933628fcb" containerID="a254cdd06bba57100734dd081e9abce1acfe304c2f19bda7820c7f9aaa194232" exitCode=0 Dec 10 12:38:13 crc kubenswrapper[4689]: I1210 12:38:13.342732 4689 generic.go:334] "Generic (PLEG): container finished" podID="d8812cbb-3130-4658-abfb-69c933628fcb" containerID="e695add69db303f7430b2b76ab9ac30c5c9f71348ccca74921fe5eea92bfc1d8" exitCode=2 Dec 10 12:38:13 crc kubenswrapper[4689]: I1210 12:38:13.342672 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d8812cbb-3130-4658-abfb-69c933628fcb","Type":"ContainerDied","Data":"a254cdd06bba57100734dd081e9abce1acfe304c2f19bda7820c7f9aaa194232"} Dec 10 12:38:13 crc kubenswrapper[4689]: I1210 12:38:13.342838 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d8812cbb-3130-4658-abfb-69c933628fcb","Type":"ContainerDied","Data":"e695add69db303f7430b2b76ab9ac30c5c9f71348ccca74921fe5eea92bfc1d8"} Dec 10 12:38:13 crc kubenswrapper[4689]: I1210 12:38:13.342851 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d8812cbb-3130-4658-abfb-69c933628fcb","Type":"ContainerDied","Data":"5802c8c89ae5de342645beff196a48f8f345075b7eca9db4cfbd8925132aac52"} Dec 10 12:38:13 crc kubenswrapper[4689]: I1210 12:38:13.342798 4689 generic.go:334] "Generic (PLEG): container finished" podID="d8812cbb-3130-4658-abfb-69c933628fcb" 
containerID="5802c8c89ae5de342645beff196a48f8f345075b7eca9db4cfbd8925132aac52" exitCode=0 Dec 10 12:38:13 crc kubenswrapper[4689]: I1210 12:38:13.347045 4689 generic.go:334] "Generic (PLEG): container finished" podID="ef390315-58be-4db5-9456-eb2232ea12e4" containerID="154e9b9e5afe0a3f004c4d0a3a9993661d60dfe1395cddaf672b92044f9cfd02" exitCode=143 Dec 10 12:38:13 crc kubenswrapper[4689]: I1210 12:38:13.348152 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"ef390315-58be-4db5-9456-eb2232ea12e4","Type":"ContainerDied","Data":"154e9b9e5afe0a3f004c4d0a3a9993661d60dfe1395cddaf672b92044f9cfd02"} Dec 10 12:38:13 crc kubenswrapper[4689]: I1210 12:38:13.362803 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.3627786889999998 podStartE2EDuration="2.362778689s" podCreationTimestamp="2025-12-10 12:38:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 12:38:13.35754205 +0000 UTC m=+1361.145623188" watchObservedRunningTime="2025-12-10 12:38:13.362778689 +0000 UTC m=+1361.150859847" Dec 10 12:38:16 crc kubenswrapper[4689]: I1210 12:38:16.396537 4689 generic.go:334] "Generic (PLEG): container finished" podID="ef390315-58be-4db5-9456-eb2232ea12e4" containerID="230b36c100e96fb5b46bd18433c8cb6330ed4fe19e753b08c14dac2862f493cb" exitCode=0 Dec 10 12:38:16 crc kubenswrapper[4689]: I1210 12:38:16.396653 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"ef390315-58be-4db5-9456-eb2232ea12e4","Type":"ContainerDied","Data":"230b36c100e96fb5b46bd18433c8cb6330ed4fe19e753b08c14dac2862f493cb"} Dec 10 12:38:16 crc kubenswrapper[4689]: I1210 12:38:16.550887 4689 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Dec 10 12:38:16 crc kubenswrapper[4689]: I1210 12:38:16.616443 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ef390315-58be-4db5-9456-eb2232ea12e4-logs\") pod \"ef390315-58be-4db5-9456-eb2232ea12e4\" (UID: \"ef390315-58be-4db5-9456-eb2232ea12e4\") " Dec 10 12:38:16 crc kubenswrapper[4689]: I1210 12:38:16.616504 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vd9xp\" (UniqueName: \"kubernetes.io/projected/ef390315-58be-4db5-9456-eb2232ea12e4-kube-api-access-vd9xp\") pod \"ef390315-58be-4db5-9456-eb2232ea12e4\" (UID: \"ef390315-58be-4db5-9456-eb2232ea12e4\") " Dec 10 12:38:16 crc kubenswrapper[4689]: I1210 12:38:16.616655 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ef390315-58be-4db5-9456-eb2232ea12e4-config-data\") pod \"ef390315-58be-4db5-9456-eb2232ea12e4\" (UID: \"ef390315-58be-4db5-9456-eb2232ea12e4\") " Dec 10 12:38:16 crc kubenswrapper[4689]: I1210 12:38:16.616680 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ef390315-58be-4db5-9456-eb2232ea12e4-combined-ca-bundle\") pod \"ef390315-58be-4db5-9456-eb2232ea12e4\" (UID: \"ef390315-58be-4db5-9456-eb2232ea12e4\") " Dec 10 12:38:16 crc kubenswrapper[4689]: I1210 12:38:16.617091 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ef390315-58be-4db5-9456-eb2232ea12e4-logs" (OuterVolumeSpecName: "logs") pod "ef390315-58be-4db5-9456-eb2232ea12e4" (UID: "ef390315-58be-4db5-9456-eb2232ea12e4"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 10 12:38:16 crc kubenswrapper[4689]: I1210 12:38:16.617489 4689 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ef390315-58be-4db5-9456-eb2232ea12e4-logs\") on node \"crc\" DevicePath \"\"" Dec 10 12:38:16 crc kubenswrapper[4689]: I1210 12:38:16.633832 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ef390315-58be-4db5-9456-eb2232ea12e4-kube-api-access-vd9xp" (OuterVolumeSpecName: "kube-api-access-vd9xp") pod "ef390315-58be-4db5-9456-eb2232ea12e4" (UID: "ef390315-58be-4db5-9456-eb2232ea12e4"). InnerVolumeSpecName "kube-api-access-vd9xp". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 12:38:16 crc kubenswrapper[4689]: I1210 12:38:16.647999 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ef390315-58be-4db5-9456-eb2232ea12e4-config-data" (OuterVolumeSpecName: "config-data") pod "ef390315-58be-4db5-9456-eb2232ea12e4" (UID: "ef390315-58be-4db5-9456-eb2232ea12e4"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 12:38:16 crc kubenswrapper[4689]: I1210 12:38:16.658143 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ef390315-58be-4db5-9456-eb2232ea12e4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ef390315-58be-4db5-9456-eb2232ea12e4" (UID: "ef390315-58be-4db5-9456-eb2232ea12e4"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 12:38:16 crc kubenswrapper[4689]: I1210 12:38:16.719917 4689 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ef390315-58be-4db5-9456-eb2232ea12e4-config-data\") on node \"crc\" DevicePath \"\"" Dec 10 12:38:16 crc kubenswrapper[4689]: I1210 12:38:16.719960 4689 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ef390315-58be-4db5-9456-eb2232ea12e4-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 10 12:38:16 crc kubenswrapper[4689]: I1210 12:38:16.720003 4689 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vd9xp\" (UniqueName: \"kubernetes.io/projected/ef390315-58be-4db5-9456-eb2232ea12e4-kube-api-access-vd9xp\") on node \"crc\" DevicePath \"\"" Dec 10 12:38:16 crc kubenswrapper[4689]: I1210 12:38:16.780858 4689 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 10 12:38:16 crc kubenswrapper[4689]: I1210 12:38:16.821252 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d8812cbb-3130-4658-abfb-69c933628fcb-combined-ca-bundle\") pod \"d8812cbb-3130-4658-abfb-69c933628fcb\" (UID: \"d8812cbb-3130-4658-abfb-69c933628fcb\") " Dec 10 12:38:16 crc kubenswrapper[4689]: I1210 12:38:16.821327 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d8812cbb-3130-4658-abfb-69c933628fcb-config-data\") pod \"d8812cbb-3130-4658-abfb-69c933628fcb\" (UID: \"d8812cbb-3130-4658-abfb-69c933628fcb\") " Dec 10 12:38:16 crc kubenswrapper[4689]: I1210 12:38:16.821354 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d8812cbb-3130-4658-abfb-69c933628fcb-scripts\") pod \"d8812cbb-3130-4658-abfb-69c933628fcb\" (UID: \"d8812cbb-3130-4658-abfb-69c933628fcb\") " Dec 10 12:38:16 crc kubenswrapper[4689]: I1210 12:38:16.821472 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d8812cbb-3130-4658-abfb-69c933628fcb-run-httpd\") pod \"d8812cbb-3130-4658-abfb-69c933628fcb\" (UID: \"d8812cbb-3130-4658-abfb-69c933628fcb\") " Dec 10 12:38:16 crc kubenswrapper[4689]: I1210 12:38:16.821526 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d8812cbb-3130-4658-abfb-69c933628fcb-log-httpd\") pod \"d8812cbb-3130-4658-abfb-69c933628fcb\" (UID: \"d8812cbb-3130-4658-abfb-69c933628fcb\") " Dec 10 12:38:16 crc kubenswrapper[4689]: I1210 12:38:16.821570 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d8812cbb-3130-4658-abfb-69c933628fcb-sg-core-conf-yaml\") pod \"d8812cbb-3130-4658-abfb-69c933628fcb\" (UID: \"d8812cbb-3130-4658-abfb-69c933628fcb\") " Dec 10 12:38:16 crc kubenswrapper[4689]: I1210 12:38:16.821594 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z5xkm\" (UniqueName: \"kubernetes.io/projected/d8812cbb-3130-4658-abfb-69c933628fcb-kube-api-access-z5xkm\") pod \"d8812cbb-3130-4658-abfb-69c933628fcb\" (UID: \"d8812cbb-3130-4658-abfb-69c933628fcb\") " Dec 10 12:38:16 crc kubenswrapper[4689]: I1210 
12:38:16.823162 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d8812cbb-3130-4658-abfb-69c933628fcb-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "d8812cbb-3130-4658-abfb-69c933628fcb" (UID: "d8812cbb-3130-4658-abfb-69c933628fcb"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 10 12:38:16 crc kubenswrapper[4689]: I1210 12:38:16.823262 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d8812cbb-3130-4658-abfb-69c933628fcb-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "d8812cbb-3130-4658-abfb-69c933628fcb" (UID: "d8812cbb-3130-4658-abfb-69c933628fcb"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 10 12:38:16 crc kubenswrapper[4689]: I1210 12:38:16.827190 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d8812cbb-3130-4658-abfb-69c933628fcb-kube-api-access-z5xkm" (OuterVolumeSpecName: "kube-api-access-z5xkm") pod "d8812cbb-3130-4658-abfb-69c933628fcb" (UID: "d8812cbb-3130-4658-abfb-69c933628fcb"). InnerVolumeSpecName "kube-api-access-z5xkm". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 12:38:16 crc kubenswrapper[4689]: I1210 12:38:16.828590 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d8812cbb-3130-4658-abfb-69c933628fcb-scripts" (OuterVolumeSpecName: "scripts") pod "d8812cbb-3130-4658-abfb-69c933628fcb" (UID: "d8812cbb-3130-4658-abfb-69c933628fcb"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 12:38:16 crc kubenswrapper[4689]: I1210 12:38:16.853235 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d8812cbb-3130-4658-abfb-69c933628fcb-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "d8812cbb-3130-4658-abfb-69c933628fcb" (UID: "d8812cbb-3130-4658-abfb-69c933628fcb"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 12:38:16 crc kubenswrapper[4689]: I1210 12:38:16.878563 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Dec 10 12:38:16 crc kubenswrapper[4689]: I1210 12:38:16.905586 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d8812cbb-3130-4658-abfb-69c933628fcb-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d8812cbb-3130-4658-abfb-69c933628fcb" (UID: "d8812cbb-3130-4658-abfb-69c933628fcb"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 12:38:16 crc kubenswrapper[4689]: I1210 12:38:16.924212 4689 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d8812cbb-3130-4658-abfb-69c933628fcb-run-httpd\") on node \"crc\" DevicePath \"\"" Dec 10 12:38:16 crc kubenswrapper[4689]: I1210 12:38:16.924255 4689 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d8812cbb-3130-4658-abfb-69c933628fcb-log-httpd\") on node \"crc\" DevicePath \"\"" Dec 10 12:38:16 crc kubenswrapper[4689]: I1210 12:38:16.924269 4689 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d8812cbb-3130-4658-abfb-69c933628fcb-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Dec 10 12:38:16 crc kubenswrapper[4689]: I1210 12:38:16.924283 4689 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z5xkm\" (UniqueName: \"kubernetes.io/projected/d8812cbb-3130-4658-abfb-69c933628fcb-kube-api-access-z5xkm\") on node \"crc\" DevicePath \"\"" Dec 10 12:38:16 crc kubenswrapper[4689]: I1210 12:38:16.924297 4689 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d8812cbb-3130-4658-abfb-69c933628fcb-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 10 12:38:16 crc kubenswrapper[4689]: I1210 12:38:16.924309 4689 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d8812cbb-3130-4658-abfb-69c933628fcb-scripts\") on node \"crc\" DevicePath \"\"" Dec 10 12:38:16 crc kubenswrapper[4689]: I1210 12:38:16.939715 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d8812cbb-3130-4658-abfb-69c933628fcb-config-data" (OuterVolumeSpecName: "config-data") pod "d8812cbb-3130-4658-abfb-69c933628fcb" (UID: "d8812cbb-3130-4658-abfb-69c933628fcb"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 12:38:17 crc kubenswrapper[4689]: I1210 12:38:17.025795 4689 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d8812cbb-3130-4658-abfb-69c933628fcb-config-data\") on node \"crc\" DevicePath \"\"" Dec 10 12:38:17 crc kubenswrapper[4689]: I1210 12:38:17.408661 4689 generic.go:334] "Generic (PLEG): container finished" podID="d8812cbb-3130-4658-abfb-69c933628fcb" containerID="67ada391e320f77e0349bd3941ede24f59de0bae6d91c721a62f74c74979ee09" exitCode=0 Dec 10 12:38:17 crc kubenswrapper[4689]: I1210 12:38:17.408720 4689 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 10 12:38:17 crc kubenswrapper[4689]: I1210 12:38:17.408724 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d8812cbb-3130-4658-abfb-69c933628fcb","Type":"ContainerDied","Data":"67ada391e320f77e0349bd3941ede24f59de0bae6d91c721a62f74c74979ee09"} Dec 10 12:38:17 crc kubenswrapper[4689]: I1210 12:38:17.408793 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d8812cbb-3130-4658-abfb-69c933628fcb","Type":"ContainerDied","Data":"d58bf762feae387aefdf17ae36d97f7b9a4374e348ce0f4382b7860c38b600b8"} Dec 10 12:38:17 crc kubenswrapper[4689]: I1210 12:38:17.408826 4689 scope.go:117] "RemoveContainer" containerID="a254cdd06bba57100734dd081e9abce1acfe304c2f19bda7820c7f9aaa194232" Dec 10 12:38:17 crc kubenswrapper[4689]: I1210 12:38:17.412167 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"ef390315-58be-4db5-9456-eb2232ea12e4","Type":"ContainerDied","Data":"c96379ab7bda715b2f64b49ab456679dccc74f7c5691ceefd3d6f7730ef5d6d2"} Dec 10 12:38:17 crc kubenswrapper[4689]: I1210 12:38:17.412280 4689 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Dec 10 12:38:17 crc kubenswrapper[4689]: I1210 12:38:17.456956 4689 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 10 12:38:17 crc kubenswrapper[4689]: I1210 12:38:17.466267 4689 scope.go:117] "RemoveContainer" containerID="e695add69db303f7430b2b76ab9ac30c5c9f71348ccca74921fe5eea92bfc1d8" Dec 10 12:38:17 crc kubenswrapper[4689]: I1210 12:38:17.480814 4689 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Dec 10 12:38:17 crc kubenswrapper[4689]: I1210 12:38:17.501290 4689 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Dec 10 12:38:17 crc kubenswrapper[4689]: I1210 12:38:17.517157 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Dec 10 12:38:17 crc kubenswrapper[4689]: E1210 12:38:17.517627 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d8812cbb-3130-4658-abfb-69c933628fcb" containerName="ceilometer-central-agent" Dec 10 12:38:17 crc kubenswrapper[4689]: I1210 12:38:17.517643 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="d8812cbb-3130-4658-abfb-69c933628fcb" containerName="ceilometer-central-agent" Dec 10 12:38:17 crc kubenswrapper[4689]: E1210 12:38:17.517663 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d8812cbb-3130-4658-abfb-69c933628fcb" containerName="ceilometer-notification-agent" Dec 10 12:38:17 crc kubenswrapper[4689]: I1210 12:38:17.517669 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="d8812cbb-3130-4658-abfb-69c933628fcb" containerName="ceilometer-notification-agent" Dec 10 12:38:17 crc kubenswrapper[4689]: E1210 12:38:17.517682 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d8812cbb-3130-4658-abfb-69c933628fcb" containerName="proxy-httpd" Dec 10 12:38:17 crc kubenswrapper[4689]: I1210 12:38:17.517689 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="d8812cbb-3130-4658-abfb-69c933628fcb" containerName="proxy-httpd" Dec 10 12:38:17 crc kubenswrapper[4689]: E1210 12:38:17.517702 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d8812cbb-3130-4658-abfb-69c933628fcb" containerName="sg-core" Dec 10 12:38:17 crc kubenswrapper[4689]: I1210 12:38:17.517708 4689 
state_mem.go:107] "Deleted CPUSet assignment" podUID="d8812cbb-3130-4658-abfb-69c933628fcb" containerName="sg-core" Dec 10 12:38:17 crc kubenswrapper[4689]: E1210 12:38:17.517724 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ef390315-58be-4db5-9456-eb2232ea12e4" containerName="nova-api-log" Dec 10 12:38:17 crc kubenswrapper[4689]: I1210 12:38:17.517730 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="ef390315-58be-4db5-9456-eb2232ea12e4" containerName="nova-api-log" Dec 10 12:38:17 crc kubenswrapper[4689]: E1210 12:38:17.517736 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ef390315-58be-4db5-9456-eb2232ea12e4" containerName="nova-api-api" Dec 10 12:38:17 crc kubenswrapper[4689]: I1210 12:38:17.517742 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="ef390315-58be-4db5-9456-eb2232ea12e4" containerName="nova-api-api" Dec 10 12:38:17 crc kubenswrapper[4689]: I1210 12:38:17.517958 4689 memory_manager.go:354] "RemoveStaleState removing state" podUID="d8812cbb-3130-4658-abfb-69c933628fcb" containerName="proxy-httpd" Dec 10 12:38:17 crc kubenswrapper[4689]: I1210 12:38:17.517985 4689 memory_manager.go:354] "RemoveStaleState removing state" podUID="d8812cbb-3130-4658-abfb-69c933628fcb" containerName="sg-core" Dec 10 12:38:17 crc kubenswrapper[4689]: I1210 12:38:17.517993 4689 memory_manager.go:354] "RemoveStaleState removing state" podUID="ef390315-58be-4db5-9456-eb2232ea12e4" containerName="nova-api-api" Dec 10 12:38:17 crc kubenswrapper[4689]: I1210 12:38:17.518005 4689 memory_manager.go:354] "RemoveStaleState removing state" podUID="d8812cbb-3130-4658-abfb-69c933628fcb" containerName="ceilometer-central-agent" Dec 10 12:38:17 crc kubenswrapper[4689]: I1210 12:38:17.518017 4689 memory_manager.go:354] "RemoveStaleState removing state" podUID="ef390315-58be-4db5-9456-eb2232ea12e4" containerName="nova-api-log" Dec 10 12:38:17 crc kubenswrapper[4689]: I1210 12:38:17.518028 4689 memory_manager.go:354] "RemoveStaleState removing state" podUID="d8812cbb-3130-4658-abfb-69c933628fcb" containerName="ceilometer-notification-agent" Dec 10 12:38:17 crc kubenswrapper[4689]: I1210 12:38:17.520146 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 10 12:38:17 crc kubenswrapper[4689]: I1210 12:38:17.527277 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Dec 10 12:38:17 crc kubenswrapper[4689]: I1210 12:38:17.527519 4689 scope.go:117] "RemoveContainer" containerID="67ada391e320f77e0349bd3941ede24f59de0bae6d91c721a62f74c74979ee09" Dec 10 12:38:17 crc kubenswrapper[4689]: I1210 12:38:17.527628 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Dec 10 12:38:17 crc kubenswrapper[4689]: I1210 12:38:17.529990 4689 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Dec 10 12:38:17 crc kubenswrapper[4689]: I1210 12:38:17.547669 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 10 12:38:17 crc kubenswrapper[4689]: I1210 12:38:17.573393 4689 scope.go:117] "RemoveContainer" containerID="5802c8c89ae5de342645beff196a48f8f345075b7eca9db4cfbd8925132aac52" Dec 10 12:38:17 crc kubenswrapper[4689]: I1210 12:38:17.575652 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Dec 10 12:38:17 crc kubenswrapper[4689]: I1210 12:38:17.577257 4689 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Dec 10 12:38:17 crc kubenswrapper[4689]: I1210 12:38:17.596115 4689 scope.go:117] "RemoveContainer" containerID="a254cdd06bba57100734dd081e9abce1acfe304c2f19bda7820c7f9aaa194232" Dec 10 12:38:17 crc kubenswrapper[4689]: E1210 12:38:17.597192 4689 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a254cdd06bba57100734dd081e9abce1acfe304c2f19bda7820c7f9aaa194232\": container with ID starting with a254cdd06bba57100734dd081e9abce1acfe304c2f19bda7820c7f9aaa194232 not found: ID does not exist" containerID="a254cdd06bba57100734dd081e9abce1acfe304c2f19bda7820c7f9aaa194232" Dec 10 12:38:17 crc kubenswrapper[4689]: I1210 12:38:17.597340 4689 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a254cdd06bba57100734dd081e9abce1acfe304c2f19bda7820c7f9aaa194232"} err="failed to get container status \"a254cdd06bba57100734dd081e9abce1acfe304c2f19bda7820c7f9aaa194232\": rpc error: code = NotFound desc = could not find container \"a254cdd06bba57100734dd081e9abce1acfe304c2f19bda7820c7f9aaa194232\": container with ID starting with a254cdd06bba57100734dd081e9abce1acfe304c2f19bda7820c7f9aaa194232 not found: ID does not exist" Dec 10 12:38:17 crc kubenswrapper[4689]: I1210 12:38:17.597365 4689 scope.go:117] "RemoveContainer" containerID="e695add69db303f7430b2b76ab9ac30c5c9f71348ccca74921fe5eea92bfc1d8" Dec 10 12:38:17 crc kubenswrapper[4689]: E1210 12:38:17.597817 4689 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e695add69db303f7430b2b76ab9ac30c5c9f71348ccca74921fe5eea92bfc1d8\": container with ID starting with e695add69db303f7430b2b76ab9ac30c5c9f71348ccca74921fe5eea92bfc1d8 not found: ID does not exist" containerID="e695add69db303f7430b2b76ab9ac30c5c9f71348ccca74921fe5eea92bfc1d8" Dec 10 12:38:17 crc kubenswrapper[4689]: I1210 12:38:17.597845 4689 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e695add69db303f7430b2b76ab9ac30c5c9f71348ccca74921fe5eea92bfc1d8"} err="failed to get container status \"e695add69db303f7430b2b76ab9ac30c5c9f71348ccca74921fe5eea92bfc1d8\": rpc error: code = NotFound desc = could not find container \"e695add69db303f7430b2b76ab9ac30c5c9f71348ccca74921fe5eea92bfc1d8\": container with ID starting with e695add69db303f7430b2b76ab9ac30c5c9f71348ccca74921fe5eea92bfc1d8 not found: ID does not exist" Dec 10 12:38:17 crc kubenswrapper[4689]: I1210 12:38:17.597862 4689 scope.go:117] "RemoveContainer" containerID="67ada391e320f77e0349bd3941ede24f59de0bae6d91c721a62f74c74979ee09" Dec 10 12:38:17 crc kubenswrapper[4689]: I1210 12:38:17.597895 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Dec 10 12:38:17 crc kubenswrapper[4689]: E1210 12:38:17.598238 4689 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"67ada391e320f77e0349bd3941ede24f59de0bae6d91c721a62f74c74979ee09\": container with ID starting with 67ada391e320f77e0349bd3941ede24f59de0bae6d91c721a62f74c74979ee09 not found: ID does not exist" containerID="67ada391e320f77e0349bd3941ede24f59de0bae6d91c721a62f74c74979ee09" Dec 10 12:38:17 crc kubenswrapper[4689]: I1210 12:38:17.598282 4689 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"67ada391e320f77e0349bd3941ede24f59de0bae6d91c721a62f74c74979ee09"} 
err="failed to get container status \"67ada391e320f77e0349bd3941ede24f59de0bae6d91c721a62f74c74979ee09\": rpc error: code = NotFound desc = could not find container \"67ada391e320f77e0349bd3941ede24f59de0bae6d91c721a62f74c74979ee09\": container with ID starting with 67ada391e320f77e0349bd3941ede24f59de0bae6d91c721a62f74c74979ee09 not found: ID does not exist" Dec 10 12:38:17 crc kubenswrapper[4689]: I1210 12:38:17.598316 4689 scope.go:117] "RemoveContainer" containerID="5802c8c89ae5de342645beff196a48f8f345075b7eca9db4cfbd8925132aac52" Dec 10 12:38:17 crc kubenswrapper[4689]: E1210 12:38:17.599964 4689 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5802c8c89ae5de342645beff196a48f8f345075b7eca9db4cfbd8925132aac52\": container with ID starting with 5802c8c89ae5de342645beff196a48f8f345075b7eca9db4cfbd8925132aac52 not found: ID does not exist" containerID="5802c8c89ae5de342645beff196a48f8f345075b7eca9db4cfbd8925132aac52" Dec 10 12:38:17 crc kubenswrapper[4689]: I1210 12:38:17.600015 4689 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5802c8c89ae5de342645beff196a48f8f345075b7eca9db4cfbd8925132aac52"} err="failed to get container status \"5802c8c89ae5de342645beff196a48f8f345075b7eca9db4cfbd8925132aac52\": rpc error: code = NotFound desc = could not find container \"5802c8c89ae5de342645beff196a48f8f345075b7eca9db4cfbd8925132aac52\": container with ID starting with 5802c8c89ae5de342645beff196a48f8f345075b7eca9db4cfbd8925132aac52 not found: ID does not exist" Dec 10 12:38:17 crc kubenswrapper[4689]: I1210 12:38:17.600033 4689 scope.go:117] "RemoveContainer" containerID="230b36c100e96fb5b46bd18433c8cb6330ed4fe19e753b08c14dac2862f493cb" Dec 10 12:38:17 crc kubenswrapper[4689]: I1210 12:38:17.614035 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Dec 10 12:38:17 crc kubenswrapper[4689]: I1210 12:38:17.614383 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Dec 10 12:38:17 crc kubenswrapper[4689]: I1210 12:38:17.614836 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Dec 10 12:38:17 crc kubenswrapper[4689]: I1210 12:38:17.628353 4689 scope.go:117] "RemoveContainer" containerID="154e9b9e5afe0a3f004c4d0a3a9993661d60dfe1395cddaf672b92044f9cfd02" Dec 10 12:38:17 crc kubenswrapper[4689]: I1210 12:38:17.635521 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fnz59\" (UniqueName: \"kubernetes.io/projected/76352913-e683-46d9-962a-74510c4f8dd6-kube-api-access-fnz59\") pod \"nova-api-0\" (UID: \"76352913-e683-46d9-962a-74510c4f8dd6\") " pod="openstack/nova-api-0" Dec 10 12:38:17 crc kubenswrapper[4689]: I1210 12:38:17.635581 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4afb5506-86b9-45fc-9a95-a0336b141fdf-config-data\") pod \"ceilometer-0\" (UID: \"4afb5506-86b9-45fc-9a95-a0336b141fdf\") " pod="openstack/ceilometer-0" Dec 10 12:38:17 crc kubenswrapper[4689]: I1210 12:38:17.635598 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4afb5506-86b9-45fc-9a95-a0336b141fdf-run-httpd\") pod \"ceilometer-0\" (UID: \"4afb5506-86b9-45fc-9a95-a0336b141fdf\") " 
pod="openstack/ceilometer-0" Dec 10 12:38:17 crc kubenswrapper[4689]: I1210 12:38:17.635674 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/76352913-e683-46d9-962a-74510c4f8dd6-public-tls-certs\") pod \"nova-api-0\" (UID: \"76352913-e683-46d9-962a-74510c4f8dd6\") " pod="openstack/nova-api-0" Dec 10 12:38:17 crc kubenswrapper[4689]: I1210 12:38:17.635707 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-df75z\" (UniqueName: \"kubernetes.io/projected/4afb5506-86b9-45fc-9a95-a0336b141fdf-kube-api-access-df75z\") pod \"ceilometer-0\" (UID: \"4afb5506-86b9-45fc-9a95-a0336b141fdf\") " pod="openstack/ceilometer-0" Dec 10 12:38:17 crc kubenswrapper[4689]: I1210 12:38:17.635729 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/76352913-e683-46d9-962a-74510c4f8dd6-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"76352913-e683-46d9-962a-74510c4f8dd6\") " pod="openstack/nova-api-0" Dec 10 12:38:17 crc kubenswrapper[4689]: I1210 12:38:17.635747 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/76352913-e683-46d9-962a-74510c4f8dd6-config-data\") pod \"nova-api-0\" (UID: \"76352913-e683-46d9-962a-74510c4f8dd6\") " pod="openstack/nova-api-0" Dec 10 12:38:17 crc kubenswrapper[4689]: I1210 12:38:17.635772 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4afb5506-86b9-45fc-9a95-a0336b141fdf-scripts\") pod \"ceilometer-0\" (UID: \"4afb5506-86b9-45fc-9a95-a0336b141fdf\") " pod="openstack/ceilometer-0" Dec 10 12:38:17 crc kubenswrapper[4689]: I1210 12:38:17.635808 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4afb5506-86b9-45fc-9a95-a0336b141fdf-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"4afb5506-86b9-45fc-9a95-a0336b141fdf\") " pod="openstack/ceilometer-0" Dec 10 12:38:17 crc kubenswrapper[4689]: I1210 12:38:17.635867 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/4afb5506-86b9-45fc-9a95-a0336b141fdf-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"4afb5506-86b9-45fc-9a95-a0336b141fdf\") " pod="openstack/ceilometer-0" Dec 10 12:38:17 crc kubenswrapper[4689]: I1210 12:38:17.635892 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/76352913-e683-46d9-962a-74510c4f8dd6-internal-tls-certs\") pod \"nova-api-0\" (UID: \"76352913-e683-46d9-962a-74510c4f8dd6\") " pod="openstack/nova-api-0" Dec 10 12:38:17 crc kubenswrapper[4689]: I1210 12:38:17.635935 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4afb5506-86b9-45fc-9a95-a0336b141fdf-log-httpd\") pod \"ceilometer-0\" (UID: \"4afb5506-86b9-45fc-9a95-a0336b141fdf\") " pod="openstack/ceilometer-0" Dec 10 12:38:17 crc kubenswrapper[4689]: I1210 12:38:17.635964 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/76352913-e683-46d9-962a-74510c4f8dd6-logs\") pod \"nova-api-0\" (UID: \"76352913-e683-46d9-962a-74510c4f8dd6\") " pod="openstack/nova-api-0" Dec 10 12:38:17 crc kubenswrapper[4689]: I1210 12:38:17.736894 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4afb5506-86b9-45fc-9a95-a0336b141fdf-scripts\") pod \"ceilometer-0\" (UID: \"4afb5506-86b9-45fc-9a95-a0336b141fdf\") " pod="openstack/ceilometer-0" Dec 10 12:38:17 crc kubenswrapper[4689]: I1210 12:38:17.737241 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4afb5506-86b9-45fc-9a95-a0336b141fdf-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"4afb5506-86b9-45fc-9a95-a0336b141fdf\") " pod="openstack/ceilometer-0" Dec 10 12:38:17 crc kubenswrapper[4689]: I1210 12:38:17.737294 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/4afb5506-86b9-45fc-9a95-a0336b141fdf-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"4afb5506-86b9-45fc-9a95-a0336b141fdf\") " pod="openstack/ceilometer-0" Dec 10 12:38:17 crc kubenswrapper[4689]: I1210 12:38:17.737324 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/76352913-e683-46d9-962a-74510c4f8dd6-internal-tls-certs\") pod \"nova-api-0\" (UID: \"76352913-e683-46d9-962a-74510c4f8dd6\") " pod="openstack/nova-api-0" Dec 10 12:38:17 crc kubenswrapper[4689]: I1210 12:38:17.737360 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4afb5506-86b9-45fc-9a95-a0336b141fdf-log-httpd\") pod \"ceilometer-0\" (UID: \"4afb5506-86b9-45fc-9a95-a0336b141fdf\") " pod="openstack/ceilometer-0" Dec 10 12:38:17 crc kubenswrapper[4689]: I1210 12:38:17.737397 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/76352913-e683-46d9-962a-74510c4f8dd6-logs\") pod \"nova-api-0\" (UID: \"76352913-e683-46d9-962a-74510c4f8dd6\") " pod="openstack/nova-api-0" Dec 10 12:38:17 crc kubenswrapper[4689]: I1210 12:38:17.737468 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fnz59\" (UniqueName: \"kubernetes.io/projected/76352913-e683-46d9-962a-74510c4f8dd6-kube-api-access-fnz59\") pod \"nova-api-0\" (UID: \"76352913-e683-46d9-962a-74510c4f8dd6\") " pod="openstack/nova-api-0" Dec 10 12:38:17 crc kubenswrapper[4689]: I1210 12:38:17.737499 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4afb5506-86b9-45fc-9a95-a0336b141fdf-config-data\") pod \"ceilometer-0\" (UID: \"4afb5506-86b9-45fc-9a95-a0336b141fdf\") " pod="openstack/ceilometer-0" Dec 10 12:38:17 crc kubenswrapper[4689]: I1210 12:38:17.737517 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4afb5506-86b9-45fc-9a95-a0336b141fdf-run-httpd\") pod \"ceilometer-0\" (UID: \"4afb5506-86b9-45fc-9a95-a0336b141fdf\") " pod="openstack/ceilometer-0" Dec 10 12:38:17 crc kubenswrapper[4689]: I1210 12:38:17.737552 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/76352913-e683-46d9-962a-74510c4f8dd6-public-tls-certs\") pod \"nova-api-0\" (UID: \"76352913-e683-46d9-962a-74510c4f8dd6\") " pod="openstack/nova-api-0" Dec 10 12:38:17 crc kubenswrapper[4689]: I1210 12:38:17.737579 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-df75z\" (UniqueName: \"kubernetes.io/projected/4afb5506-86b9-45fc-9a95-a0336b141fdf-kube-api-access-df75z\") pod \"ceilometer-0\" (UID: \"4afb5506-86b9-45fc-9a95-a0336b141fdf\") " pod="openstack/ceilometer-0" Dec 10 12:38:17 crc kubenswrapper[4689]: I1210 12:38:17.737605 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/76352913-e683-46d9-962a-74510c4f8dd6-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"76352913-e683-46d9-962a-74510c4f8dd6\") " pod="openstack/nova-api-0" Dec 10 12:38:17 crc kubenswrapper[4689]: I1210 12:38:17.737627 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/76352913-e683-46d9-962a-74510c4f8dd6-config-data\") pod \"nova-api-0\" (UID: \"76352913-e683-46d9-962a-74510c4f8dd6\") " pod="openstack/nova-api-0" Dec 10 12:38:17 crc kubenswrapper[4689]: I1210 12:38:17.737896 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4afb5506-86b9-45fc-9a95-a0336b141fdf-log-httpd\") pod \"ceilometer-0\" (UID: \"4afb5506-86b9-45fc-9a95-a0336b141fdf\") " pod="openstack/ceilometer-0" Dec 10 12:38:17 crc kubenswrapper[4689]: I1210 12:38:17.738248 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4afb5506-86b9-45fc-9a95-a0336b141fdf-run-httpd\") pod \"ceilometer-0\" (UID: \"4afb5506-86b9-45fc-9a95-a0336b141fdf\") " pod="openstack/ceilometer-0" Dec 10 12:38:17 crc kubenswrapper[4689]: I1210 12:38:17.738648 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/76352913-e683-46d9-962a-74510c4f8dd6-logs\") pod \"nova-api-0\" (UID: \"76352913-e683-46d9-962a-74510c4f8dd6\") " pod="openstack/nova-api-0" Dec 10 12:38:17 crc kubenswrapper[4689]: I1210 12:38:17.742261 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/76352913-e683-46d9-962a-74510c4f8dd6-config-data\") pod \"nova-api-0\" (UID: \"76352913-e683-46d9-962a-74510c4f8dd6\") " pod="openstack/nova-api-0" Dec 10 12:38:17 crc kubenswrapper[4689]: I1210 12:38:17.743566 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4afb5506-86b9-45fc-9a95-a0336b141fdf-scripts\") pod \"ceilometer-0\" (UID: \"4afb5506-86b9-45fc-9a95-a0336b141fdf\") " pod="openstack/ceilometer-0" Dec 10 12:38:17 crc kubenswrapper[4689]: I1210 12:38:17.743746 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4afb5506-86b9-45fc-9a95-a0336b141fdf-config-data\") pod \"ceilometer-0\" (UID: \"4afb5506-86b9-45fc-9a95-a0336b141fdf\") " pod="openstack/ceilometer-0" Dec 10 12:38:17 crc kubenswrapper[4689]: I1210 12:38:17.744601 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/4afb5506-86b9-45fc-9a95-a0336b141fdf-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: 
\"4afb5506-86b9-45fc-9a95-a0336b141fdf\") " pod="openstack/ceilometer-0" Dec 10 12:38:17 crc kubenswrapper[4689]: I1210 12:38:17.747107 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/76352913-e683-46d9-962a-74510c4f8dd6-internal-tls-certs\") pod \"nova-api-0\" (UID: \"76352913-e683-46d9-962a-74510c4f8dd6\") " pod="openstack/nova-api-0" Dec 10 12:38:17 crc kubenswrapper[4689]: I1210 12:38:17.747345 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/76352913-e683-46d9-962a-74510c4f8dd6-public-tls-certs\") pod \"nova-api-0\" (UID: \"76352913-e683-46d9-962a-74510c4f8dd6\") " pod="openstack/nova-api-0" Dec 10 12:38:17 crc kubenswrapper[4689]: I1210 12:38:17.747923 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4afb5506-86b9-45fc-9a95-a0336b141fdf-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"4afb5506-86b9-45fc-9a95-a0336b141fdf\") " pod="openstack/ceilometer-0" Dec 10 12:38:17 crc kubenswrapper[4689]: I1210 12:38:17.748667 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/76352913-e683-46d9-962a-74510c4f8dd6-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"76352913-e683-46d9-962a-74510c4f8dd6\") " pod="openstack/nova-api-0" Dec 10 12:38:17 crc kubenswrapper[4689]: I1210 12:38:17.758535 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fnz59\" (UniqueName: \"kubernetes.io/projected/76352913-e683-46d9-962a-74510c4f8dd6-kube-api-access-fnz59\") pod \"nova-api-0\" (UID: \"76352913-e683-46d9-962a-74510c4f8dd6\") " pod="openstack/nova-api-0" Dec 10 12:38:17 crc kubenswrapper[4689]: I1210 12:38:17.758908 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-df75z\" (UniqueName: \"kubernetes.io/projected/4afb5506-86b9-45fc-9a95-a0336b141fdf-kube-api-access-df75z\") pod \"ceilometer-0\" (UID: \"4afb5506-86b9-45fc-9a95-a0336b141fdf\") " pod="openstack/ceilometer-0" Dec 10 12:38:17 crc kubenswrapper[4689]: I1210 12:38:17.844939 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 10 12:38:17 crc kubenswrapper[4689]: I1210 12:38:17.919534 4689 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Dec 10 12:38:18 crc kubenswrapper[4689]: I1210 12:38:18.374692 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 10 12:38:18 crc kubenswrapper[4689]: I1210 12:38:18.434181 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4afb5506-86b9-45fc-9a95-a0336b141fdf","Type":"ContainerStarted","Data":"27bf0af067b51feb36723c5fec783ad75f5d66402ed478cf667d46c4bf142520"} Dec 10 12:38:18 crc kubenswrapper[4689]: I1210 12:38:18.490406 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Dec 10 12:38:18 crc kubenswrapper[4689]: I1210 12:38:18.517932 4689 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d8812cbb-3130-4658-abfb-69c933628fcb" path="/var/lib/kubelet/pods/d8812cbb-3130-4658-abfb-69c933628fcb/volumes" Dec 10 12:38:18 crc kubenswrapper[4689]: I1210 12:38:18.519053 4689 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ef390315-58be-4db5-9456-eb2232ea12e4" path="/var/lib/kubelet/pods/ef390315-58be-4db5-9456-eb2232ea12e4/volumes" Dec 10 12:38:19 crc kubenswrapper[4689]: I1210 12:38:19.451248 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"76352913-e683-46d9-962a-74510c4f8dd6","Type":"ContainerStarted","Data":"848e8b3aa2449babc8c8a802032c6241cf1615669288ea9bbeaabd06e3c0549a"} Dec 10 12:38:19 crc kubenswrapper[4689]: I1210 12:38:19.451492 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"76352913-e683-46d9-962a-74510c4f8dd6","Type":"ContainerStarted","Data":"4a7a2f342504cb0765e8464a4285fa32852c1ed9dd6cb73bef59ed8a94cb65a0"} Dec 10 12:38:19 crc kubenswrapper[4689]: I1210 12:38:19.451506 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"76352913-e683-46d9-962a-74510c4f8dd6","Type":"ContainerStarted","Data":"40761ef51cd2410c859ed4e764c76d41c8f5c4bf07e4013757e52839701c0f2e"} Dec 10 12:38:19 crc kubenswrapper[4689]: I1210 12:38:19.470920 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.470902545 podStartE2EDuration="2.470902545s" podCreationTimestamp="2025-12-10 12:38:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 12:38:19.4678361 +0000 UTC m=+1367.255917238" watchObservedRunningTime="2025-12-10 12:38:19.470902545 +0000 UTC m=+1367.258983683" Dec 10 12:38:20 crc kubenswrapper[4689]: I1210 12:38:20.209161 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-59cf4bdb65-m75sl" Dec 10 12:38:20 crc kubenswrapper[4689]: I1210 12:38:20.265488 4689 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-845d6d6f59-sfjml"] Dec 10 12:38:20 crc kubenswrapper[4689]: I1210 12:38:20.265709 4689 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-845d6d6f59-sfjml" podUID="012adfdf-2b09-4e23-8124-064ad6c6f712" containerName="dnsmasq-dns" containerID="cri-o://c7efc46bb352d890990352c44e0e73948ca465b242dea0b07686f1ad6edee81a" gracePeriod=10 Dec 10 12:38:20 crc kubenswrapper[4689]: I1210 12:38:20.477801 4689 generic.go:334] "Generic (PLEG): container finished" podID="012adfdf-2b09-4e23-8124-064ad6c6f712" containerID="c7efc46bb352d890990352c44e0e73948ca465b242dea0b07686f1ad6edee81a" exitCode=0 Dec 10 
12:38:20 crc kubenswrapper[4689]: I1210 12:38:20.477873 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-845d6d6f59-sfjml" event={"ID":"012adfdf-2b09-4e23-8124-064ad6c6f712","Type":"ContainerDied","Data":"c7efc46bb352d890990352c44e0e73948ca465b242dea0b07686f1ad6edee81a"} Dec 10 12:38:20 crc kubenswrapper[4689]: I1210 12:38:20.479920 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4afb5506-86b9-45fc-9a95-a0336b141fdf","Type":"ContainerStarted","Data":"71366f32f1e3594b2d26845c8b2c3ad5847b3af34092cbdf610384ffcc7994d9"} Dec 10 12:38:20 crc kubenswrapper[4689]: I1210 12:38:20.831436 4689 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-845d6d6f59-sfjml" Dec 10 12:38:20 crc kubenswrapper[4689]: I1210 12:38:20.927316 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/012adfdf-2b09-4e23-8124-064ad6c6f712-dns-svc\") pod \"012adfdf-2b09-4e23-8124-064ad6c6f712\" (UID: \"012adfdf-2b09-4e23-8124-064ad6c6f712\") " Dec 10 12:38:20 crc kubenswrapper[4689]: I1210 12:38:20.927571 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gqpsg\" (UniqueName: \"kubernetes.io/projected/012adfdf-2b09-4e23-8124-064ad6c6f712-kube-api-access-gqpsg\") pod \"012adfdf-2b09-4e23-8124-064ad6c6f712\" (UID: \"012adfdf-2b09-4e23-8124-064ad6c6f712\") " Dec 10 12:38:20 crc kubenswrapper[4689]: I1210 12:38:20.927694 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/012adfdf-2b09-4e23-8124-064ad6c6f712-dns-swift-storage-0\") pod \"012adfdf-2b09-4e23-8124-064ad6c6f712\" (UID: \"012adfdf-2b09-4e23-8124-064ad6c6f712\") " Dec 10 12:38:20 crc kubenswrapper[4689]: I1210 12:38:20.927773 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/012adfdf-2b09-4e23-8124-064ad6c6f712-ovsdbserver-nb\") pod \"012adfdf-2b09-4e23-8124-064ad6c6f712\" (UID: \"012adfdf-2b09-4e23-8124-064ad6c6f712\") " Dec 10 12:38:20 crc kubenswrapper[4689]: I1210 12:38:20.927824 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/012adfdf-2b09-4e23-8124-064ad6c6f712-ovsdbserver-sb\") pod \"012adfdf-2b09-4e23-8124-064ad6c6f712\" (UID: \"012adfdf-2b09-4e23-8124-064ad6c6f712\") " Dec 10 12:38:20 crc kubenswrapper[4689]: I1210 12:38:20.928007 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/012adfdf-2b09-4e23-8124-064ad6c6f712-config\") pod \"012adfdf-2b09-4e23-8124-064ad6c6f712\" (UID: \"012adfdf-2b09-4e23-8124-064ad6c6f712\") " Dec 10 12:38:20 crc kubenswrapper[4689]: I1210 12:38:20.933219 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/012adfdf-2b09-4e23-8124-064ad6c6f712-kube-api-access-gqpsg" (OuterVolumeSpecName: "kube-api-access-gqpsg") pod "012adfdf-2b09-4e23-8124-064ad6c6f712" (UID: "012adfdf-2b09-4e23-8124-064ad6c6f712"). InnerVolumeSpecName "kube-api-access-gqpsg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 12:38:20 crc kubenswrapper[4689]: I1210 12:38:20.986456 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/012adfdf-2b09-4e23-8124-064ad6c6f712-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "012adfdf-2b09-4e23-8124-064ad6c6f712" (UID: "012adfdf-2b09-4e23-8124-064ad6c6f712"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 12:38:20 crc kubenswrapper[4689]: I1210 12:38:20.987598 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/012adfdf-2b09-4e23-8124-064ad6c6f712-config" (OuterVolumeSpecName: "config") pod "012adfdf-2b09-4e23-8124-064ad6c6f712" (UID: "012adfdf-2b09-4e23-8124-064ad6c6f712"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 12:38:20 crc kubenswrapper[4689]: I1210 12:38:20.990572 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/012adfdf-2b09-4e23-8124-064ad6c6f712-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "012adfdf-2b09-4e23-8124-064ad6c6f712" (UID: "012adfdf-2b09-4e23-8124-064ad6c6f712"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 12:38:20 crc kubenswrapper[4689]: I1210 12:38:20.994641 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/012adfdf-2b09-4e23-8124-064ad6c6f712-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "012adfdf-2b09-4e23-8124-064ad6c6f712" (UID: "012adfdf-2b09-4e23-8124-064ad6c6f712"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 12:38:20 crc kubenswrapper[4689]: I1210 12:38:20.997492 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/012adfdf-2b09-4e23-8124-064ad6c6f712-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "012adfdf-2b09-4e23-8124-064ad6c6f712" (UID: "012adfdf-2b09-4e23-8124-064ad6c6f712"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 12:38:21 crc kubenswrapper[4689]: I1210 12:38:21.029935 4689 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gqpsg\" (UniqueName: \"kubernetes.io/projected/012adfdf-2b09-4e23-8124-064ad6c6f712-kube-api-access-gqpsg\") on node \"crc\" DevicePath \"\"" Dec 10 12:38:21 crc kubenswrapper[4689]: I1210 12:38:21.029982 4689 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/012adfdf-2b09-4e23-8124-064ad6c6f712-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Dec 10 12:38:21 crc kubenswrapper[4689]: I1210 12:38:21.029992 4689 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/012adfdf-2b09-4e23-8124-064ad6c6f712-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 10 12:38:21 crc kubenswrapper[4689]: I1210 12:38:21.030000 4689 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/012adfdf-2b09-4e23-8124-064ad6c6f712-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 10 12:38:21 crc kubenswrapper[4689]: I1210 12:38:21.030011 4689 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/012adfdf-2b09-4e23-8124-064ad6c6f712-config\") on node \"crc\" DevicePath \"\"" Dec 10 12:38:21 crc kubenswrapper[4689]: I1210 12:38:21.030019 4689 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/012adfdf-2b09-4e23-8124-064ad6c6f712-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 10 12:38:21 crc kubenswrapper[4689]: I1210 12:38:21.491954 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4afb5506-86b9-45fc-9a95-a0336b141fdf","Type":"ContainerStarted","Data":"c6d4e0648bb894bd1439e384a23e5c1d30fc6435f3b1b7f847fbfc73e888ecf7"} Dec 10 12:38:21 crc kubenswrapper[4689]: I1210 12:38:21.493659 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-845d6d6f59-sfjml" event={"ID":"012adfdf-2b09-4e23-8124-064ad6c6f712","Type":"ContainerDied","Data":"57cbc83a5735c4e25f99f8c3844a652fd43b4179c83e9c8176f3d67cf644498c"} Dec 10 12:38:21 crc kubenswrapper[4689]: I1210 12:38:21.493694 4689 scope.go:117] "RemoveContainer" containerID="c7efc46bb352d890990352c44e0e73948ca465b242dea0b07686f1ad6edee81a" Dec 10 12:38:21 crc kubenswrapper[4689]: I1210 12:38:21.493853 4689 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-845d6d6f59-sfjml" Dec 10 12:38:21 crc kubenswrapper[4689]: I1210 12:38:21.526477 4689 scope.go:117] "RemoveContainer" containerID="af1a8366266b178983ec8c98213bf6f85098adbe1a2a38e3fd52dc517b526724" Dec 10 12:38:21 crc kubenswrapper[4689]: I1210 12:38:21.545377 4689 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-845d6d6f59-sfjml"] Dec 10 12:38:21 crc kubenswrapper[4689]: I1210 12:38:21.562631 4689 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-845d6d6f59-sfjml"] Dec 10 12:38:21 crc kubenswrapper[4689]: I1210 12:38:21.878363 4689 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-cell1-novncproxy-0" Dec 10 12:38:21 crc kubenswrapper[4689]: I1210 12:38:21.901503 4689 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-cell1-novncproxy-0" Dec 10 12:38:22 crc kubenswrapper[4689]: I1210 12:38:22.521364 4689 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="012adfdf-2b09-4e23-8124-064ad6c6f712" path="/var/lib/kubelet/pods/012adfdf-2b09-4e23-8124-064ad6c6f712/volumes" Dec 10 12:38:22 crc kubenswrapper[4689]: I1210 12:38:22.522346 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4afb5506-86b9-45fc-9a95-a0336b141fdf","Type":"ContainerStarted","Data":"fe76e003eb9df787fc8f5c7305367b5f2944cb351553c59f3e08cb941a45de8d"} Dec 10 12:38:22 crc kubenswrapper[4689]: I1210 12:38:22.542546 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-novncproxy-0" Dec 10 12:38:22 crc kubenswrapper[4689]: I1210 12:38:22.879023 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-cell-mapping-stmjh"] Dec 10 12:38:22 crc kubenswrapper[4689]: E1210 12:38:22.879649 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="012adfdf-2b09-4e23-8124-064ad6c6f712" containerName="init" Dec 10 12:38:22 crc kubenswrapper[4689]: I1210 12:38:22.879670 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="012adfdf-2b09-4e23-8124-064ad6c6f712" containerName="init" Dec 10 12:38:22 crc kubenswrapper[4689]: E1210 12:38:22.879687 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="012adfdf-2b09-4e23-8124-064ad6c6f712" containerName="dnsmasq-dns" Dec 10 12:38:22 crc kubenswrapper[4689]: I1210 12:38:22.879694 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="012adfdf-2b09-4e23-8124-064ad6c6f712" containerName="dnsmasq-dns" Dec 10 12:38:22 crc kubenswrapper[4689]: I1210 12:38:22.879887 4689 memory_manager.go:354] "RemoveStaleState removing state" podUID="012adfdf-2b09-4e23-8124-064ad6c6f712" containerName="dnsmasq-dns" Dec 10 12:38:22 crc kubenswrapper[4689]: I1210 12:38:22.880526 4689 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-stmjh" Dec 10 12:38:22 crc kubenswrapper[4689]: I1210 12:38:22.884221 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-scripts" Dec 10 12:38:22 crc kubenswrapper[4689]: I1210 12:38:22.884587 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-config-data" Dec 10 12:38:22 crc kubenswrapper[4689]: I1210 12:38:22.896370 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-stmjh"] Dec 10 12:38:22 crc kubenswrapper[4689]: I1210 12:38:22.971696 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bc9m2\" (UniqueName: \"kubernetes.io/projected/7c9e105c-a2da-493c-ae4a-ad81b18dea23-kube-api-access-bc9m2\") pod \"nova-cell1-cell-mapping-stmjh\" (UID: \"7c9e105c-a2da-493c-ae4a-ad81b18dea23\") " pod="openstack/nova-cell1-cell-mapping-stmjh" Dec 10 12:38:22 crc kubenswrapper[4689]: I1210 12:38:22.971752 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7c9e105c-a2da-493c-ae4a-ad81b18dea23-scripts\") pod \"nova-cell1-cell-mapping-stmjh\" (UID: \"7c9e105c-a2da-493c-ae4a-ad81b18dea23\") " pod="openstack/nova-cell1-cell-mapping-stmjh" Dec 10 12:38:22 crc kubenswrapper[4689]: I1210 12:38:22.971807 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7c9e105c-a2da-493c-ae4a-ad81b18dea23-config-data\") pod \"nova-cell1-cell-mapping-stmjh\" (UID: \"7c9e105c-a2da-493c-ae4a-ad81b18dea23\") " pod="openstack/nova-cell1-cell-mapping-stmjh" Dec 10 12:38:22 crc kubenswrapper[4689]: I1210 12:38:22.971863 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7c9e105c-a2da-493c-ae4a-ad81b18dea23-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-stmjh\" (UID: \"7c9e105c-a2da-493c-ae4a-ad81b18dea23\") " pod="openstack/nova-cell1-cell-mapping-stmjh" Dec 10 12:38:23 crc kubenswrapper[4689]: I1210 12:38:23.073838 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7c9e105c-a2da-493c-ae4a-ad81b18dea23-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-stmjh\" (UID: \"7c9e105c-a2da-493c-ae4a-ad81b18dea23\") " pod="openstack/nova-cell1-cell-mapping-stmjh" Dec 10 12:38:23 crc kubenswrapper[4689]: I1210 12:38:23.073989 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bc9m2\" (UniqueName: \"kubernetes.io/projected/7c9e105c-a2da-493c-ae4a-ad81b18dea23-kube-api-access-bc9m2\") pod \"nova-cell1-cell-mapping-stmjh\" (UID: \"7c9e105c-a2da-493c-ae4a-ad81b18dea23\") " pod="openstack/nova-cell1-cell-mapping-stmjh" Dec 10 12:38:23 crc kubenswrapper[4689]: I1210 12:38:23.074023 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7c9e105c-a2da-493c-ae4a-ad81b18dea23-scripts\") pod \"nova-cell1-cell-mapping-stmjh\" (UID: \"7c9e105c-a2da-493c-ae4a-ad81b18dea23\") " pod="openstack/nova-cell1-cell-mapping-stmjh" Dec 10 12:38:23 crc kubenswrapper[4689]: I1210 12:38:23.074056 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" 
(UniqueName: \"kubernetes.io/secret/7c9e105c-a2da-493c-ae4a-ad81b18dea23-config-data\") pod \"nova-cell1-cell-mapping-stmjh\" (UID: \"7c9e105c-a2da-493c-ae4a-ad81b18dea23\") " pod="openstack/nova-cell1-cell-mapping-stmjh" Dec 10 12:38:23 crc kubenswrapper[4689]: I1210 12:38:23.078544 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7c9e105c-a2da-493c-ae4a-ad81b18dea23-config-data\") pod \"nova-cell1-cell-mapping-stmjh\" (UID: \"7c9e105c-a2da-493c-ae4a-ad81b18dea23\") " pod="openstack/nova-cell1-cell-mapping-stmjh" Dec 10 12:38:23 crc kubenswrapper[4689]: I1210 12:38:23.082441 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7c9e105c-a2da-493c-ae4a-ad81b18dea23-scripts\") pod \"nova-cell1-cell-mapping-stmjh\" (UID: \"7c9e105c-a2da-493c-ae4a-ad81b18dea23\") " pod="openstack/nova-cell1-cell-mapping-stmjh" Dec 10 12:38:23 crc kubenswrapper[4689]: I1210 12:38:23.082554 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7c9e105c-a2da-493c-ae4a-ad81b18dea23-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-stmjh\" (UID: \"7c9e105c-a2da-493c-ae4a-ad81b18dea23\") " pod="openstack/nova-cell1-cell-mapping-stmjh" Dec 10 12:38:23 crc kubenswrapper[4689]: I1210 12:38:23.091563 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bc9m2\" (UniqueName: \"kubernetes.io/projected/7c9e105c-a2da-493c-ae4a-ad81b18dea23-kube-api-access-bc9m2\") pod \"nova-cell1-cell-mapping-stmjh\" (UID: \"7c9e105c-a2da-493c-ae4a-ad81b18dea23\") " pod="openstack/nova-cell1-cell-mapping-stmjh" Dec 10 12:38:23 crc kubenswrapper[4689]: I1210 12:38:23.207303 4689 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-stmjh" Dec 10 12:38:23 crc kubenswrapper[4689]: I1210 12:38:23.520983 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4afb5506-86b9-45fc-9a95-a0336b141fdf","Type":"ContainerStarted","Data":"606c1f90985c57574aa73e9820a13f30555040f6c2438c9450cbad7c37324ea3"} Dec 10 12:38:23 crc kubenswrapper[4689]: I1210 12:38:23.521534 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Dec 10 12:38:23 crc kubenswrapper[4689]: I1210 12:38:23.549118 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.199468728 podStartE2EDuration="6.549099778s" podCreationTimestamp="2025-12-10 12:38:17 +0000 UTC" firstStartedPulling="2025-12-10 12:38:18.370904006 +0000 UTC m=+1366.158985134" lastFinishedPulling="2025-12-10 12:38:22.720535036 +0000 UTC m=+1370.508616184" observedRunningTime="2025-12-10 12:38:23.546220997 +0000 UTC m=+1371.334302155" watchObservedRunningTime="2025-12-10 12:38:23.549099778 +0000 UTC m=+1371.337180916" Dec 10 12:38:23 crc kubenswrapper[4689]: W1210 12:38:23.738323 4689 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7c9e105c_a2da_493c_ae4a_ad81b18dea23.slice/crio-1a6919325f3c3734655df37b8cdf9ead83c0ce8f3cb81b7e8900f78d5775d24e WatchSource:0}: Error finding container 1a6919325f3c3734655df37b8cdf9ead83c0ce8f3cb81b7e8900f78d5775d24e: Status 404 returned error can't find the container with id 1a6919325f3c3734655df37b8cdf9ead83c0ce8f3cb81b7e8900f78d5775d24e Dec 10 12:38:23 crc kubenswrapper[4689]: I1210 12:38:23.747415 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-stmjh"] Dec 10 12:38:24 crc kubenswrapper[4689]: I1210 12:38:24.540853 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-stmjh" event={"ID":"7c9e105c-a2da-493c-ae4a-ad81b18dea23","Type":"ContainerStarted","Data":"57464ded9ffcf5d191527e5861c612c4c133238d0605182b10fa483b262f6073"} Dec 10 12:38:24 crc kubenswrapper[4689]: I1210 12:38:24.541210 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-stmjh" event={"ID":"7c9e105c-a2da-493c-ae4a-ad81b18dea23","Type":"ContainerStarted","Data":"1a6919325f3c3734655df37b8cdf9ead83c0ce8f3cb81b7e8900f78d5775d24e"} Dec 10 12:38:24 crc kubenswrapper[4689]: I1210 12:38:24.569270 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-cell-mapping-stmjh" podStartSLOduration=2.5692480509999998 podStartE2EDuration="2.569248051s" podCreationTimestamp="2025-12-10 12:38:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 12:38:24.56192856 +0000 UTC m=+1372.350009698" watchObservedRunningTime="2025-12-10 12:38:24.569248051 +0000 UTC m=+1372.357329199" Dec 10 12:38:27 crc kubenswrapper[4689]: I1210 12:38:27.919711 4689 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Dec 10 12:38:27 crc kubenswrapper[4689]: I1210 12:38:27.920295 4689 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Dec 10 12:38:28 crc kubenswrapper[4689]: I1210 12:38:28.935378 4689 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" 
podUID="76352913-e683-46d9-962a-74510c4f8dd6" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.205:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Dec 10 12:38:28 crc kubenswrapper[4689]: I1210 12:38:28.935398 4689 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="76352913-e683-46d9-962a-74510c4f8dd6" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.205:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Dec 10 12:38:29 crc kubenswrapper[4689]: I1210 12:38:29.597949 4689 generic.go:334] "Generic (PLEG): container finished" podID="7c9e105c-a2da-493c-ae4a-ad81b18dea23" containerID="57464ded9ffcf5d191527e5861c612c4c133238d0605182b10fa483b262f6073" exitCode=0 Dec 10 12:38:29 crc kubenswrapper[4689]: I1210 12:38:29.598284 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-stmjh" event={"ID":"7c9e105c-a2da-493c-ae4a-ad81b18dea23","Type":"ContainerDied","Data":"57464ded9ffcf5d191527e5861c612c4c133238d0605182b10fa483b262f6073"} Dec 10 12:38:30 crc kubenswrapper[4689]: I1210 12:38:30.970932 4689 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-stmjh" Dec 10 12:38:31 crc kubenswrapper[4689]: I1210 12:38:31.044142 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bc9m2\" (UniqueName: \"kubernetes.io/projected/7c9e105c-a2da-493c-ae4a-ad81b18dea23-kube-api-access-bc9m2\") pod \"7c9e105c-a2da-493c-ae4a-ad81b18dea23\" (UID: \"7c9e105c-a2da-493c-ae4a-ad81b18dea23\") " Dec 10 12:38:31 crc kubenswrapper[4689]: I1210 12:38:31.044210 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7c9e105c-a2da-493c-ae4a-ad81b18dea23-config-data\") pod \"7c9e105c-a2da-493c-ae4a-ad81b18dea23\" (UID: \"7c9e105c-a2da-493c-ae4a-ad81b18dea23\") " Dec 10 12:38:31 crc kubenswrapper[4689]: I1210 12:38:31.044282 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7c9e105c-a2da-493c-ae4a-ad81b18dea23-scripts\") pod \"7c9e105c-a2da-493c-ae4a-ad81b18dea23\" (UID: \"7c9e105c-a2da-493c-ae4a-ad81b18dea23\") " Dec 10 12:38:31 crc kubenswrapper[4689]: I1210 12:38:31.045202 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7c9e105c-a2da-493c-ae4a-ad81b18dea23-combined-ca-bundle\") pod \"7c9e105c-a2da-493c-ae4a-ad81b18dea23\" (UID: \"7c9e105c-a2da-493c-ae4a-ad81b18dea23\") " Dec 10 12:38:31 crc kubenswrapper[4689]: I1210 12:38:31.058745 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7c9e105c-a2da-493c-ae4a-ad81b18dea23-scripts" (OuterVolumeSpecName: "scripts") pod "7c9e105c-a2da-493c-ae4a-ad81b18dea23" (UID: "7c9e105c-a2da-493c-ae4a-ad81b18dea23"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 12:38:31 crc kubenswrapper[4689]: I1210 12:38:31.059257 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7c9e105c-a2da-493c-ae4a-ad81b18dea23-kube-api-access-bc9m2" (OuterVolumeSpecName: "kube-api-access-bc9m2") pod "7c9e105c-a2da-493c-ae4a-ad81b18dea23" (UID: "7c9e105c-a2da-493c-ae4a-ad81b18dea23"). 
InnerVolumeSpecName "kube-api-access-bc9m2". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 12:38:31 crc kubenswrapper[4689]: I1210 12:38:31.079241 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7c9e105c-a2da-493c-ae4a-ad81b18dea23-config-data" (OuterVolumeSpecName: "config-data") pod "7c9e105c-a2da-493c-ae4a-ad81b18dea23" (UID: "7c9e105c-a2da-493c-ae4a-ad81b18dea23"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 12:38:31 crc kubenswrapper[4689]: I1210 12:38:31.082466 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7c9e105c-a2da-493c-ae4a-ad81b18dea23-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7c9e105c-a2da-493c-ae4a-ad81b18dea23" (UID: "7c9e105c-a2da-493c-ae4a-ad81b18dea23"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 12:38:31 crc kubenswrapper[4689]: I1210 12:38:31.147426 4689 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bc9m2\" (UniqueName: \"kubernetes.io/projected/7c9e105c-a2da-493c-ae4a-ad81b18dea23-kube-api-access-bc9m2\") on node \"crc\" DevicePath \"\"" Dec 10 12:38:31 crc kubenswrapper[4689]: I1210 12:38:31.147459 4689 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7c9e105c-a2da-493c-ae4a-ad81b18dea23-config-data\") on node \"crc\" DevicePath \"\"" Dec 10 12:38:31 crc kubenswrapper[4689]: I1210 12:38:31.147470 4689 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7c9e105c-a2da-493c-ae4a-ad81b18dea23-scripts\") on node \"crc\" DevicePath \"\"" Dec 10 12:38:31 crc kubenswrapper[4689]: I1210 12:38:31.147478 4689 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7c9e105c-a2da-493c-ae4a-ad81b18dea23-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 10 12:38:31 crc kubenswrapper[4689]: I1210 12:38:31.643076 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-stmjh" event={"ID":"7c9e105c-a2da-493c-ae4a-ad81b18dea23","Type":"ContainerDied","Data":"1a6919325f3c3734655df37b8cdf9ead83c0ce8f3cb81b7e8900f78d5775d24e"} Dec 10 12:38:31 crc kubenswrapper[4689]: I1210 12:38:31.643139 4689 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1a6919325f3c3734655df37b8cdf9ead83c0ce8f3cb81b7e8900f78d5775d24e" Dec 10 12:38:31 crc kubenswrapper[4689]: I1210 12:38:31.643216 4689 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-stmjh" Dec 10 12:38:31 crc kubenswrapper[4689]: I1210 12:38:31.809780 4689 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Dec 10 12:38:31 crc kubenswrapper[4689]: I1210 12:38:31.810069 4689 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="76352913-e683-46d9-962a-74510c4f8dd6" containerName="nova-api-log" containerID="cri-o://4a7a2f342504cb0765e8464a4285fa32852c1ed9dd6cb73bef59ed8a94cb65a0" gracePeriod=30 Dec 10 12:38:31 crc kubenswrapper[4689]: I1210 12:38:31.810184 4689 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="76352913-e683-46d9-962a-74510c4f8dd6" containerName="nova-api-api" containerID="cri-o://848e8b3aa2449babc8c8a802032c6241cf1615669288ea9bbeaabd06e3c0549a" gracePeriod=30 Dec 10 12:38:31 crc kubenswrapper[4689]: I1210 12:38:31.831999 4689 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Dec 10 12:38:31 crc kubenswrapper[4689]: I1210 12:38:31.832521 4689 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="434ae256-23eb-408f-8b81-1956600d3c2f" containerName="nova-scheduler-scheduler" containerID="cri-o://c3accbd3e51c14c887c513588c356294ea81bde9df9cc45cfaec617749468b90" gracePeriod=30 Dec 10 12:38:31 crc kubenswrapper[4689]: I1210 12:38:31.875407 4689 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Dec 10 12:38:31 crc kubenswrapper[4689]: I1210 12:38:31.875681 4689 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="ff93cb1f-aee9-49d9-b6fd-ca96103ff881" containerName="nova-metadata-log" containerID="cri-o://40d3c5864a75d6a3716585681c27175b5751c35e23d6c1814072363e3f88f5a9" gracePeriod=30 Dec 10 12:38:31 crc kubenswrapper[4689]: I1210 12:38:31.876207 4689 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="ff93cb1f-aee9-49d9-b6fd-ca96103ff881" containerName="nova-metadata-metadata" containerID="cri-o://5967dad926ef9db71eff1c3a709a4feccf47bbc2bd7f90e880c11319d24d245c" gracePeriod=30 Dec 10 12:38:32 crc kubenswrapper[4689]: E1210 12:38:32.424435 4689 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="c3accbd3e51c14c887c513588c356294ea81bde9df9cc45cfaec617749468b90" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Dec 10 12:38:32 crc kubenswrapper[4689]: E1210 12:38:32.429263 4689 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="c3accbd3e51c14c887c513588c356294ea81bde9df9cc45cfaec617749468b90" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Dec 10 12:38:32 crc kubenswrapper[4689]: E1210 12:38:32.431054 4689 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="c3accbd3e51c14c887c513588c356294ea81bde9df9cc45cfaec617749468b90" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Dec 10 12:38:32 crc kubenswrapper[4689]: E1210 12:38:32.431115 4689 prober.go:104] "Probe errored" 
err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="434ae256-23eb-408f-8b81-1956600d3c2f" containerName="nova-scheduler-scheduler" Dec 10 12:38:32 crc kubenswrapper[4689]: I1210 12:38:32.654163 4689 generic.go:334] "Generic (PLEG): container finished" podID="76352913-e683-46d9-962a-74510c4f8dd6" containerID="4a7a2f342504cb0765e8464a4285fa32852c1ed9dd6cb73bef59ed8a94cb65a0" exitCode=143 Dec 10 12:38:32 crc kubenswrapper[4689]: I1210 12:38:32.654481 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"76352913-e683-46d9-962a-74510c4f8dd6","Type":"ContainerDied","Data":"4a7a2f342504cb0765e8464a4285fa32852c1ed9dd6cb73bef59ed8a94cb65a0"} Dec 10 12:38:32 crc kubenswrapper[4689]: I1210 12:38:32.656649 4689 generic.go:334] "Generic (PLEG): container finished" podID="ff93cb1f-aee9-49d9-b6fd-ca96103ff881" containerID="40d3c5864a75d6a3716585681c27175b5751c35e23d6c1814072363e3f88f5a9" exitCode=143 Dec 10 12:38:32 crc kubenswrapper[4689]: I1210 12:38:32.656746 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"ff93cb1f-aee9-49d9-b6fd-ca96103ff881","Type":"ContainerDied","Data":"40d3c5864a75d6a3716585681c27175b5751c35e23d6c1814072363e3f88f5a9"} Dec 10 12:38:35 crc kubenswrapper[4689]: I1210 12:38:35.328838 4689 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="ff93cb1f-aee9-49d9-b6fd-ca96103ff881" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.197:8775/\": read tcp 10.217.0.2:35902->10.217.0.197:8775: read: connection reset by peer" Dec 10 12:38:35 crc kubenswrapper[4689]: I1210 12:38:35.328839 4689 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="ff93cb1f-aee9-49d9-b6fd-ca96103ff881" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.197:8775/\": read tcp 10.217.0.2:35912->10.217.0.197:8775: read: connection reset by peer" Dec 10 12:38:35 crc kubenswrapper[4689]: I1210 12:38:35.698780 4689 generic.go:334] "Generic (PLEG): container finished" podID="ff93cb1f-aee9-49d9-b6fd-ca96103ff881" containerID="5967dad926ef9db71eff1c3a709a4feccf47bbc2bd7f90e880c11319d24d245c" exitCode=0 Dec 10 12:38:35 crc kubenswrapper[4689]: I1210 12:38:35.698840 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"ff93cb1f-aee9-49d9-b6fd-ca96103ff881","Type":"ContainerDied","Data":"5967dad926ef9db71eff1c3a709a4feccf47bbc2bd7f90e880c11319d24d245c"} Dec 10 12:38:35 crc kubenswrapper[4689]: I1210 12:38:35.701659 4689 generic.go:334] "Generic (PLEG): container finished" podID="76352913-e683-46d9-962a-74510c4f8dd6" containerID="848e8b3aa2449babc8c8a802032c6241cf1615669288ea9bbeaabd06e3c0549a" exitCode=0 Dec 10 12:38:35 crc kubenswrapper[4689]: I1210 12:38:35.701685 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"76352913-e683-46d9-962a-74510c4f8dd6","Type":"ContainerDied","Data":"848e8b3aa2449babc8c8a802032c6241cf1615669288ea9bbeaabd06e3c0549a"} Dec 10 12:38:35 crc kubenswrapper[4689]: I1210 12:38:35.925229 4689 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Dec 10 12:38:36 crc kubenswrapper[4689]: I1210 12:38:36.039880 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fnz59\" (UniqueName: \"kubernetes.io/projected/76352913-e683-46d9-962a-74510c4f8dd6-kube-api-access-fnz59\") pod \"76352913-e683-46d9-962a-74510c4f8dd6\" (UID: \"76352913-e683-46d9-962a-74510c4f8dd6\") " Dec 10 12:38:36 crc kubenswrapper[4689]: I1210 12:38:36.040018 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/76352913-e683-46d9-962a-74510c4f8dd6-internal-tls-certs\") pod \"76352913-e683-46d9-962a-74510c4f8dd6\" (UID: \"76352913-e683-46d9-962a-74510c4f8dd6\") " Dec 10 12:38:36 crc kubenswrapper[4689]: I1210 12:38:36.040980 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/76352913-e683-46d9-962a-74510c4f8dd6-combined-ca-bundle\") pod \"76352913-e683-46d9-962a-74510c4f8dd6\" (UID: \"76352913-e683-46d9-962a-74510c4f8dd6\") " Dec 10 12:38:36 crc kubenswrapper[4689]: I1210 12:38:36.041028 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/76352913-e683-46d9-962a-74510c4f8dd6-logs\") pod \"76352913-e683-46d9-962a-74510c4f8dd6\" (UID: \"76352913-e683-46d9-962a-74510c4f8dd6\") " Dec 10 12:38:36 crc kubenswrapper[4689]: I1210 12:38:36.041133 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/76352913-e683-46d9-962a-74510c4f8dd6-config-data\") pod \"76352913-e683-46d9-962a-74510c4f8dd6\" (UID: \"76352913-e683-46d9-962a-74510c4f8dd6\") " Dec 10 12:38:36 crc kubenswrapper[4689]: I1210 12:38:36.041165 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/76352913-e683-46d9-962a-74510c4f8dd6-public-tls-certs\") pod \"76352913-e683-46d9-962a-74510c4f8dd6\" (UID: \"76352913-e683-46d9-962a-74510c4f8dd6\") " Dec 10 12:38:36 crc kubenswrapper[4689]: I1210 12:38:36.041542 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/76352913-e683-46d9-962a-74510c4f8dd6-logs" (OuterVolumeSpecName: "logs") pod "76352913-e683-46d9-962a-74510c4f8dd6" (UID: "76352913-e683-46d9-962a-74510c4f8dd6"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 10 12:38:36 crc kubenswrapper[4689]: I1210 12:38:36.042033 4689 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/76352913-e683-46d9-962a-74510c4f8dd6-logs\") on node \"crc\" DevicePath \"\"" Dec 10 12:38:36 crc kubenswrapper[4689]: I1210 12:38:36.046269 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/76352913-e683-46d9-962a-74510c4f8dd6-kube-api-access-fnz59" (OuterVolumeSpecName: "kube-api-access-fnz59") pod "76352913-e683-46d9-962a-74510c4f8dd6" (UID: "76352913-e683-46d9-962a-74510c4f8dd6"). InnerVolumeSpecName "kube-api-access-fnz59". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 12:38:36 crc kubenswrapper[4689]: I1210 12:38:36.072089 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/76352913-e683-46d9-962a-74510c4f8dd6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "76352913-e683-46d9-962a-74510c4f8dd6" (UID: "76352913-e683-46d9-962a-74510c4f8dd6"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 12:38:36 crc kubenswrapper[4689]: I1210 12:38:36.101625 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/76352913-e683-46d9-962a-74510c4f8dd6-config-data" (OuterVolumeSpecName: "config-data") pod "76352913-e683-46d9-962a-74510c4f8dd6" (UID: "76352913-e683-46d9-962a-74510c4f8dd6"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 12:38:36 crc kubenswrapper[4689]: I1210 12:38:36.109677 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/76352913-e683-46d9-962a-74510c4f8dd6-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "76352913-e683-46d9-962a-74510c4f8dd6" (UID: "76352913-e683-46d9-962a-74510c4f8dd6"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 12:38:36 crc kubenswrapper[4689]: I1210 12:38:36.119358 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/76352913-e683-46d9-962a-74510c4f8dd6-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "76352913-e683-46d9-962a-74510c4f8dd6" (UID: "76352913-e683-46d9-962a-74510c4f8dd6"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 12:38:36 crc kubenswrapper[4689]: I1210 12:38:36.143537 4689 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/76352913-e683-46d9-962a-74510c4f8dd6-public-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 10 12:38:36 crc kubenswrapper[4689]: I1210 12:38:36.143572 4689 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fnz59\" (UniqueName: \"kubernetes.io/projected/76352913-e683-46d9-962a-74510c4f8dd6-kube-api-access-fnz59\") on node \"crc\" DevicePath \"\"" Dec 10 12:38:36 crc kubenswrapper[4689]: I1210 12:38:36.143583 4689 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/76352913-e683-46d9-962a-74510c4f8dd6-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 10 12:38:36 crc kubenswrapper[4689]: I1210 12:38:36.143592 4689 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/76352913-e683-46d9-962a-74510c4f8dd6-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 10 12:38:36 crc kubenswrapper[4689]: I1210 12:38:36.143601 4689 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/76352913-e683-46d9-962a-74510c4f8dd6-config-data\") on node \"crc\" DevicePath \"\"" Dec 10 12:38:36 crc kubenswrapper[4689]: I1210 12:38:36.282368 4689 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Dec 10 12:38:36 crc kubenswrapper[4689]: I1210 12:38:36.347306 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjjmg\" (UniqueName: \"kubernetes.io/projected/ff93cb1f-aee9-49d9-b6fd-ca96103ff881-kube-api-access-pjjmg\") pod \"ff93cb1f-aee9-49d9-b6fd-ca96103ff881\" (UID: \"ff93cb1f-aee9-49d9-b6fd-ca96103ff881\") " Dec 10 12:38:36 crc kubenswrapper[4689]: I1210 12:38:36.347417 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ff93cb1f-aee9-49d9-b6fd-ca96103ff881-config-data\") pod \"ff93cb1f-aee9-49d9-b6fd-ca96103ff881\" (UID: \"ff93cb1f-aee9-49d9-b6fd-ca96103ff881\") " Dec 10 12:38:36 crc kubenswrapper[4689]: I1210 12:38:36.347462 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ff93cb1f-aee9-49d9-b6fd-ca96103ff881-combined-ca-bundle\") pod \"ff93cb1f-aee9-49d9-b6fd-ca96103ff881\" (UID: \"ff93cb1f-aee9-49d9-b6fd-ca96103ff881\") " Dec 10 12:38:36 crc kubenswrapper[4689]: I1210 12:38:36.347578 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ff93cb1f-aee9-49d9-b6fd-ca96103ff881-logs\") pod \"ff93cb1f-aee9-49d9-b6fd-ca96103ff881\" (UID: \"ff93cb1f-aee9-49d9-b6fd-ca96103ff881\") " Dec 10 12:38:36 crc kubenswrapper[4689]: I1210 12:38:36.347628 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/ff93cb1f-aee9-49d9-b6fd-ca96103ff881-nova-metadata-tls-certs\") pod \"ff93cb1f-aee9-49d9-b6fd-ca96103ff881\" (UID: \"ff93cb1f-aee9-49d9-b6fd-ca96103ff881\") " Dec 10 12:38:36 crc kubenswrapper[4689]: I1210 12:38:36.348520 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ff93cb1f-aee9-49d9-b6fd-ca96103ff881-logs" (OuterVolumeSpecName: "logs") pod "ff93cb1f-aee9-49d9-b6fd-ca96103ff881" (UID: "ff93cb1f-aee9-49d9-b6fd-ca96103ff881"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 10 12:38:36 crc kubenswrapper[4689]: I1210 12:38:36.348820 4689 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ff93cb1f-aee9-49d9-b6fd-ca96103ff881-logs\") on node \"crc\" DevicePath \"\"" Dec 10 12:38:36 crc kubenswrapper[4689]: I1210 12:38:36.353213 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ff93cb1f-aee9-49d9-b6fd-ca96103ff881-kube-api-access-pjjmg" (OuterVolumeSpecName: "kube-api-access-pjjmg") pod "ff93cb1f-aee9-49d9-b6fd-ca96103ff881" (UID: "ff93cb1f-aee9-49d9-b6fd-ca96103ff881"). InnerVolumeSpecName "kube-api-access-pjjmg". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 12:38:36 crc kubenswrapper[4689]: I1210 12:38:36.381372 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ff93cb1f-aee9-49d9-b6fd-ca96103ff881-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ff93cb1f-aee9-49d9-b6fd-ca96103ff881" (UID: "ff93cb1f-aee9-49d9-b6fd-ca96103ff881"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 12:38:36 crc kubenswrapper[4689]: I1210 12:38:36.409706 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ff93cb1f-aee9-49d9-b6fd-ca96103ff881-config-data" (OuterVolumeSpecName: "config-data") pod "ff93cb1f-aee9-49d9-b6fd-ca96103ff881" (UID: "ff93cb1f-aee9-49d9-b6fd-ca96103ff881"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 12:38:36 crc kubenswrapper[4689]: I1210 12:38:36.419184 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ff93cb1f-aee9-49d9-b6fd-ca96103ff881-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "ff93cb1f-aee9-49d9-b6fd-ca96103ff881" (UID: "ff93cb1f-aee9-49d9-b6fd-ca96103ff881"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 12:38:36 crc kubenswrapper[4689]: I1210 12:38:36.450226 4689 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjjmg\" (UniqueName: \"kubernetes.io/projected/ff93cb1f-aee9-49d9-b6fd-ca96103ff881-kube-api-access-pjjmg\") on node \"crc\" DevicePath \"\"" Dec 10 12:38:36 crc kubenswrapper[4689]: I1210 12:38:36.450271 4689 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ff93cb1f-aee9-49d9-b6fd-ca96103ff881-config-data\") on node \"crc\" DevicePath \"\"" Dec 10 12:38:36 crc kubenswrapper[4689]: I1210 12:38:36.450282 4689 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ff93cb1f-aee9-49d9-b6fd-ca96103ff881-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 10 12:38:36 crc kubenswrapper[4689]: I1210 12:38:36.450322 4689 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/ff93cb1f-aee9-49d9-b6fd-ca96103ff881-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 10 12:38:36 crc kubenswrapper[4689]: I1210 12:38:36.721667 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"ff93cb1f-aee9-49d9-b6fd-ca96103ff881","Type":"ContainerDied","Data":"90ded19233c2c3489fbcd5bc1ff0ee36a2812cadd73b51215b2fe451b80f2ab0"} Dec 10 12:38:36 crc kubenswrapper[4689]: I1210 12:38:36.721728 4689 scope.go:117] "RemoveContainer" containerID="5967dad926ef9db71eff1c3a709a4feccf47bbc2bd7f90e880c11319d24d245c" Dec 10 12:38:36 crc kubenswrapper[4689]: I1210 12:38:36.721866 4689 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Dec 10 12:38:36 crc kubenswrapper[4689]: I1210 12:38:36.731571 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"76352913-e683-46d9-962a-74510c4f8dd6","Type":"ContainerDied","Data":"40761ef51cd2410c859ed4e764c76d41c8f5c4bf07e4013757e52839701c0f2e"} Dec 10 12:38:36 crc kubenswrapper[4689]: I1210 12:38:36.731633 4689 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Dec 10 12:38:36 crc kubenswrapper[4689]: I1210 12:38:36.733787 4689 generic.go:334] "Generic (PLEG): container finished" podID="434ae256-23eb-408f-8b81-1956600d3c2f" containerID="c3accbd3e51c14c887c513588c356294ea81bde9df9cc45cfaec617749468b90" exitCode=0 Dec 10 12:38:36 crc kubenswrapper[4689]: I1210 12:38:36.733813 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"434ae256-23eb-408f-8b81-1956600d3c2f","Type":"ContainerDied","Data":"c3accbd3e51c14c887c513588c356294ea81bde9df9cc45cfaec617749468b90"} Dec 10 12:38:36 crc kubenswrapper[4689]: I1210 12:38:36.753230 4689 scope.go:117] "RemoveContainer" containerID="40d3c5864a75d6a3716585681c27175b5751c35e23d6c1814072363e3f88f5a9" Dec 10 12:38:36 crc kubenswrapper[4689]: I1210 12:38:36.768793 4689 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Dec 10 12:38:36 crc kubenswrapper[4689]: I1210 12:38:36.788453 4689 scope.go:117] "RemoveContainer" containerID="848e8b3aa2449babc8c8a802032c6241cf1615669288ea9bbeaabd06e3c0549a" Dec 10 12:38:36 crc kubenswrapper[4689]: I1210 12:38:36.788931 4689 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Dec 10 12:38:36 crc kubenswrapper[4689]: I1210 12:38:36.804136 4689 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Dec 10 12:38:36 crc kubenswrapper[4689]: I1210 12:38:36.819120 4689 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Dec 10 12:38:36 crc kubenswrapper[4689]: I1210 12:38:36.828033 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Dec 10 12:38:36 crc kubenswrapper[4689]: E1210 12:38:36.828557 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7c9e105c-a2da-493c-ae4a-ad81b18dea23" containerName="nova-manage" Dec 10 12:38:36 crc kubenswrapper[4689]: I1210 12:38:36.828577 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="7c9e105c-a2da-493c-ae4a-ad81b18dea23" containerName="nova-manage" Dec 10 12:38:36 crc kubenswrapper[4689]: E1210 12:38:36.828602 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ff93cb1f-aee9-49d9-b6fd-ca96103ff881" containerName="nova-metadata-metadata" Dec 10 12:38:36 crc kubenswrapper[4689]: I1210 12:38:36.828609 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="ff93cb1f-aee9-49d9-b6fd-ca96103ff881" containerName="nova-metadata-metadata" Dec 10 12:38:36 crc kubenswrapper[4689]: E1210 12:38:36.828627 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="76352913-e683-46d9-962a-74510c4f8dd6" containerName="nova-api-api" Dec 10 12:38:36 crc kubenswrapper[4689]: I1210 12:38:36.828635 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="76352913-e683-46d9-962a-74510c4f8dd6" containerName="nova-api-api" Dec 10 12:38:36 crc kubenswrapper[4689]: E1210 12:38:36.828648 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ff93cb1f-aee9-49d9-b6fd-ca96103ff881" containerName="nova-metadata-log" Dec 10 12:38:36 crc kubenswrapper[4689]: I1210 12:38:36.828655 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="ff93cb1f-aee9-49d9-b6fd-ca96103ff881" containerName="nova-metadata-log" Dec 10 12:38:36 crc kubenswrapper[4689]: E1210 12:38:36.828668 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="76352913-e683-46d9-962a-74510c4f8dd6" containerName="nova-api-log" Dec 10 12:38:36 crc kubenswrapper[4689]: I1210 
12:38:36.828677 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="76352913-e683-46d9-962a-74510c4f8dd6" containerName="nova-api-log" Dec 10 12:38:36 crc kubenswrapper[4689]: I1210 12:38:36.828849 4689 memory_manager.go:354] "RemoveStaleState removing state" podUID="ff93cb1f-aee9-49d9-b6fd-ca96103ff881" containerName="nova-metadata-log" Dec 10 12:38:36 crc kubenswrapper[4689]: I1210 12:38:36.828861 4689 memory_manager.go:354] "RemoveStaleState removing state" podUID="76352913-e683-46d9-962a-74510c4f8dd6" containerName="nova-api-log" Dec 10 12:38:36 crc kubenswrapper[4689]: I1210 12:38:36.828877 4689 memory_manager.go:354] "RemoveStaleState removing state" podUID="ff93cb1f-aee9-49d9-b6fd-ca96103ff881" containerName="nova-metadata-metadata" Dec 10 12:38:36 crc kubenswrapper[4689]: I1210 12:38:36.828890 4689 memory_manager.go:354] "RemoveStaleState removing state" podUID="7c9e105c-a2da-493c-ae4a-ad81b18dea23" containerName="nova-manage" Dec 10 12:38:36 crc kubenswrapper[4689]: I1210 12:38:36.828896 4689 memory_manager.go:354] "RemoveStaleState removing state" podUID="76352913-e683-46d9-962a-74510c4f8dd6" containerName="nova-api-api" Dec 10 12:38:36 crc kubenswrapper[4689]: I1210 12:38:36.830181 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Dec 10 12:38:36 crc kubenswrapper[4689]: I1210 12:38:36.831726 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Dec 10 12:38:36 crc kubenswrapper[4689]: I1210 12:38:36.831929 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Dec 10 12:38:36 crc kubenswrapper[4689]: I1210 12:38:36.836814 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Dec 10 12:38:36 crc kubenswrapper[4689]: I1210 12:38:36.838297 4689 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Dec 10 12:38:36 crc kubenswrapper[4689]: I1210 12:38:36.840879 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Dec 10 12:38:36 crc kubenswrapper[4689]: I1210 12:38:36.841164 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Dec 10 12:38:36 crc kubenswrapper[4689]: I1210 12:38:36.841304 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Dec 10 12:38:36 crc kubenswrapper[4689]: I1210 12:38:36.852040 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Dec 10 12:38:36 crc kubenswrapper[4689]: I1210 12:38:36.856365 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f9ec0888-b017-49bb-b11e-feb543a1db7e-config-data\") pod \"nova-api-0\" (UID: \"f9ec0888-b017-49bb-b11e-feb543a1db7e\") " pod="openstack/nova-api-0" Dec 10 12:38:36 crc kubenswrapper[4689]: I1210 12:38:36.856406 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/11d4f9aa-dff6-4df0-9f6e-ead4097006a0-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"11d4f9aa-dff6-4df0-9f6e-ead4097006a0\") " pod="openstack/nova-metadata-0" Dec 10 12:38:36 crc kubenswrapper[4689]: I1210 12:38:36.856428 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f9ec0888-b017-49bb-b11e-feb543a1db7e-logs\") pod \"nova-api-0\" (UID: \"f9ec0888-b017-49bb-b11e-feb543a1db7e\") " pod="openstack/nova-api-0" Dec 10 12:38:36 crc kubenswrapper[4689]: I1210 12:38:36.856451 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/11d4f9aa-dff6-4df0-9f6e-ead4097006a0-logs\") pod \"nova-metadata-0\" (UID: \"11d4f9aa-dff6-4df0-9f6e-ead4097006a0\") " pod="openstack/nova-metadata-0" Dec 10 12:38:36 crc kubenswrapper[4689]: I1210 12:38:36.856485 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/11d4f9aa-dff6-4df0-9f6e-ead4097006a0-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"11d4f9aa-dff6-4df0-9f6e-ead4097006a0\") " pod="openstack/nova-metadata-0" Dec 10 12:38:36 crc kubenswrapper[4689]: I1210 12:38:36.856529 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rvdjr\" (UniqueName: \"kubernetes.io/projected/f9ec0888-b017-49bb-b11e-feb543a1db7e-kube-api-access-rvdjr\") pod \"nova-api-0\" (UID: \"f9ec0888-b017-49bb-b11e-feb543a1db7e\") " pod="openstack/nova-api-0" Dec 10 12:38:36 crc kubenswrapper[4689]: I1210 12:38:36.856545 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s84dg\" (UniqueName: \"kubernetes.io/projected/11d4f9aa-dff6-4df0-9f6e-ead4097006a0-kube-api-access-s84dg\") pod \"nova-metadata-0\" (UID: \"11d4f9aa-dff6-4df0-9f6e-ead4097006a0\") " pod="openstack/nova-metadata-0" Dec 10 12:38:36 crc kubenswrapper[4689]: I1210 12:38:36.856585 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/f9ec0888-b017-49bb-b11e-feb543a1db7e-public-tls-certs\") pod \"nova-api-0\" (UID: \"f9ec0888-b017-49bb-b11e-feb543a1db7e\") " pod="openstack/nova-api-0" Dec 10 12:38:36 crc kubenswrapper[4689]: I1210 12:38:36.856621 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/11d4f9aa-dff6-4df0-9f6e-ead4097006a0-config-data\") pod \"nova-metadata-0\" (UID: \"11d4f9aa-dff6-4df0-9f6e-ead4097006a0\") " pod="openstack/nova-metadata-0" Dec 10 12:38:36 crc kubenswrapper[4689]: I1210 12:38:36.856647 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f9ec0888-b017-49bb-b11e-feb543a1db7e-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"f9ec0888-b017-49bb-b11e-feb543a1db7e\") " pod="openstack/nova-api-0" Dec 10 12:38:36 crc kubenswrapper[4689]: I1210 12:38:36.856665 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f9ec0888-b017-49bb-b11e-feb543a1db7e-internal-tls-certs\") pod \"nova-api-0\" (UID: \"f9ec0888-b017-49bb-b11e-feb543a1db7e\") " pod="openstack/nova-api-0" Dec 10 12:38:36 crc kubenswrapper[4689]: I1210 12:38:36.859105 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Dec 10 12:38:36 crc kubenswrapper[4689]: I1210 12:38:36.895652 4689 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Dec 10 12:38:36 crc kubenswrapper[4689]: I1210 12:38:36.900111 4689 scope.go:117] "RemoveContainer" containerID="4a7a2f342504cb0765e8464a4285fa32852c1ed9dd6cb73bef59ed8a94cb65a0" Dec 10 12:38:36 crc kubenswrapper[4689]: I1210 12:38:36.958544 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/434ae256-23eb-408f-8b81-1956600d3c2f-combined-ca-bundle\") pod \"434ae256-23eb-408f-8b81-1956600d3c2f\" (UID: \"434ae256-23eb-408f-8b81-1956600d3c2f\") " Dec 10 12:38:36 crc kubenswrapper[4689]: I1210 12:38:36.958591 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-drbw9\" (UniqueName: \"kubernetes.io/projected/434ae256-23eb-408f-8b81-1956600d3c2f-kube-api-access-drbw9\") pod \"434ae256-23eb-408f-8b81-1956600d3c2f\" (UID: \"434ae256-23eb-408f-8b81-1956600d3c2f\") " Dec 10 12:38:36 crc kubenswrapper[4689]: I1210 12:38:36.958617 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/434ae256-23eb-408f-8b81-1956600d3c2f-config-data\") pod \"434ae256-23eb-408f-8b81-1956600d3c2f\" (UID: \"434ae256-23eb-408f-8b81-1956600d3c2f\") " Dec 10 12:38:36 crc kubenswrapper[4689]: I1210 12:38:36.959220 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/11d4f9aa-dff6-4df0-9f6e-ead4097006a0-logs\") pod \"nova-metadata-0\" (UID: \"11d4f9aa-dff6-4df0-9f6e-ead4097006a0\") " pod="openstack/nova-metadata-0" Dec 10 12:38:36 crc kubenswrapper[4689]: I1210 12:38:36.959270 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/11d4f9aa-dff6-4df0-9f6e-ead4097006a0-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: 
\"11d4f9aa-dff6-4df0-9f6e-ead4097006a0\") " pod="openstack/nova-metadata-0" Dec 10 12:38:36 crc kubenswrapper[4689]: I1210 12:38:36.959321 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rvdjr\" (UniqueName: \"kubernetes.io/projected/f9ec0888-b017-49bb-b11e-feb543a1db7e-kube-api-access-rvdjr\") pod \"nova-api-0\" (UID: \"f9ec0888-b017-49bb-b11e-feb543a1db7e\") " pod="openstack/nova-api-0" Dec 10 12:38:36 crc kubenswrapper[4689]: I1210 12:38:36.959394 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s84dg\" (UniqueName: \"kubernetes.io/projected/11d4f9aa-dff6-4df0-9f6e-ead4097006a0-kube-api-access-s84dg\") pod \"nova-metadata-0\" (UID: \"11d4f9aa-dff6-4df0-9f6e-ead4097006a0\") " pod="openstack/nova-metadata-0" Dec 10 12:38:36 crc kubenswrapper[4689]: I1210 12:38:36.959469 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f9ec0888-b017-49bb-b11e-feb543a1db7e-public-tls-certs\") pod \"nova-api-0\" (UID: \"f9ec0888-b017-49bb-b11e-feb543a1db7e\") " pod="openstack/nova-api-0" Dec 10 12:38:36 crc kubenswrapper[4689]: I1210 12:38:36.959518 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/11d4f9aa-dff6-4df0-9f6e-ead4097006a0-config-data\") pod \"nova-metadata-0\" (UID: \"11d4f9aa-dff6-4df0-9f6e-ead4097006a0\") " pod="openstack/nova-metadata-0" Dec 10 12:38:36 crc kubenswrapper[4689]: I1210 12:38:36.959555 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f9ec0888-b017-49bb-b11e-feb543a1db7e-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"f9ec0888-b017-49bb-b11e-feb543a1db7e\") " pod="openstack/nova-api-0" Dec 10 12:38:36 crc kubenswrapper[4689]: I1210 12:38:36.959573 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f9ec0888-b017-49bb-b11e-feb543a1db7e-internal-tls-certs\") pod \"nova-api-0\" (UID: \"f9ec0888-b017-49bb-b11e-feb543a1db7e\") " pod="openstack/nova-api-0" Dec 10 12:38:36 crc kubenswrapper[4689]: I1210 12:38:36.959614 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f9ec0888-b017-49bb-b11e-feb543a1db7e-config-data\") pod \"nova-api-0\" (UID: \"f9ec0888-b017-49bb-b11e-feb543a1db7e\") " pod="openstack/nova-api-0" Dec 10 12:38:36 crc kubenswrapper[4689]: I1210 12:38:36.959637 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/11d4f9aa-dff6-4df0-9f6e-ead4097006a0-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"11d4f9aa-dff6-4df0-9f6e-ead4097006a0\") " pod="openstack/nova-metadata-0" Dec 10 12:38:36 crc kubenswrapper[4689]: I1210 12:38:36.959658 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f9ec0888-b017-49bb-b11e-feb543a1db7e-logs\") pod \"nova-api-0\" (UID: \"f9ec0888-b017-49bb-b11e-feb543a1db7e\") " pod="openstack/nova-api-0" Dec 10 12:38:36 crc kubenswrapper[4689]: I1210 12:38:36.960211 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f9ec0888-b017-49bb-b11e-feb543a1db7e-logs\") pod \"nova-api-0\" (UID: 
\"f9ec0888-b017-49bb-b11e-feb543a1db7e\") " pod="openstack/nova-api-0" Dec 10 12:38:36 crc kubenswrapper[4689]: I1210 12:38:36.964266 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/11d4f9aa-dff6-4df0-9f6e-ead4097006a0-logs\") pod \"nova-metadata-0\" (UID: \"11d4f9aa-dff6-4df0-9f6e-ead4097006a0\") " pod="openstack/nova-metadata-0" Dec 10 12:38:36 crc kubenswrapper[4689]: I1210 12:38:36.967550 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/11d4f9aa-dff6-4df0-9f6e-ead4097006a0-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"11d4f9aa-dff6-4df0-9f6e-ead4097006a0\") " pod="openstack/nova-metadata-0" Dec 10 12:38:36 crc kubenswrapper[4689]: I1210 12:38:36.968831 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f9ec0888-b017-49bb-b11e-feb543a1db7e-internal-tls-certs\") pod \"nova-api-0\" (UID: \"f9ec0888-b017-49bb-b11e-feb543a1db7e\") " pod="openstack/nova-api-0" Dec 10 12:38:36 crc kubenswrapper[4689]: I1210 12:38:36.968866 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f9ec0888-b017-49bb-b11e-feb543a1db7e-public-tls-certs\") pod \"nova-api-0\" (UID: \"f9ec0888-b017-49bb-b11e-feb543a1db7e\") " pod="openstack/nova-api-0" Dec 10 12:38:36 crc kubenswrapper[4689]: I1210 12:38:36.969245 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/11d4f9aa-dff6-4df0-9f6e-ead4097006a0-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"11d4f9aa-dff6-4df0-9f6e-ead4097006a0\") " pod="openstack/nova-metadata-0" Dec 10 12:38:36 crc kubenswrapper[4689]: I1210 12:38:36.969435 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/434ae256-23eb-408f-8b81-1956600d3c2f-kube-api-access-drbw9" (OuterVolumeSpecName: "kube-api-access-drbw9") pod "434ae256-23eb-408f-8b81-1956600d3c2f" (UID: "434ae256-23eb-408f-8b81-1956600d3c2f"). InnerVolumeSpecName "kube-api-access-drbw9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 12:38:36 crc kubenswrapper[4689]: I1210 12:38:36.969585 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f9ec0888-b017-49bb-b11e-feb543a1db7e-config-data\") pod \"nova-api-0\" (UID: \"f9ec0888-b017-49bb-b11e-feb543a1db7e\") " pod="openstack/nova-api-0" Dec 10 12:38:36 crc kubenswrapper[4689]: I1210 12:38:36.974521 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f9ec0888-b017-49bb-b11e-feb543a1db7e-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"f9ec0888-b017-49bb-b11e-feb543a1db7e\") " pod="openstack/nova-api-0" Dec 10 12:38:36 crc kubenswrapper[4689]: I1210 12:38:36.978263 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/11d4f9aa-dff6-4df0-9f6e-ead4097006a0-config-data\") pod \"nova-metadata-0\" (UID: \"11d4f9aa-dff6-4df0-9f6e-ead4097006a0\") " pod="openstack/nova-metadata-0" Dec 10 12:38:36 crc kubenswrapper[4689]: I1210 12:38:36.982239 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rvdjr\" (UniqueName: \"kubernetes.io/projected/f9ec0888-b017-49bb-b11e-feb543a1db7e-kube-api-access-rvdjr\") pod \"nova-api-0\" (UID: \"f9ec0888-b017-49bb-b11e-feb543a1db7e\") " pod="openstack/nova-api-0" Dec 10 12:38:36 crc kubenswrapper[4689]: I1210 12:38:36.985158 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s84dg\" (UniqueName: \"kubernetes.io/projected/11d4f9aa-dff6-4df0-9f6e-ead4097006a0-kube-api-access-s84dg\") pod \"nova-metadata-0\" (UID: \"11d4f9aa-dff6-4df0-9f6e-ead4097006a0\") " pod="openstack/nova-metadata-0" Dec 10 12:38:36 crc kubenswrapper[4689]: I1210 12:38:36.998814 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/434ae256-23eb-408f-8b81-1956600d3c2f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "434ae256-23eb-408f-8b81-1956600d3c2f" (UID: "434ae256-23eb-408f-8b81-1956600d3c2f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 12:38:37 crc kubenswrapper[4689]: I1210 12:38:37.015622 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/434ae256-23eb-408f-8b81-1956600d3c2f-config-data" (OuterVolumeSpecName: "config-data") pod "434ae256-23eb-408f-8b81-1956600d3c2f" (UID: "434ae256-23eb-408f-8b81-1956600d3c2f"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 12:38:37 crc kubenswrapper[4689]: I1210 12:38:37.061628 4689 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/434ae256-23eb-408f-8b81-1956600d3c2f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 10 12:38:37 crc kubenswrapper[4689]: I1210 12:38:37.061910 4689 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-drbw9\" (UniqueName: \"kubernetes.io/projected/434ae256-23eb-408f-8b81-1956600d3c2f-kube-api-access-drbw9\") on node \"crc\" DevicePath \"\"" Dec 10 12:38:37 crc kubenswrapper[4689]: I1210 12:38:37.062036 4689 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/434ae256-23eb-408f-8b81-1956600d3c2f-config-data\") on node \"crc\" DevicePath \"\"" Dec 10 12:38:37 crc kubenswrapper[4689]: I1210 12:38:37.212114 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Dec 10 12:38:37 crc kubenswrapper[4689]: I1210 12:38:37.228895 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Dec 10 12:38:37 crc kubenswrapper[4689]: I1210 12:38:37.723305 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Dec 10 12:38:37 crc kubenswrapper[4689]: I1210 12:38:37.744570 4689 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Dec 10 12:38:37 crc kubenswrapper[4689]: I1210 12:38:37.744574 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"434ae256-23eb-408f-8b81-1956600d3c2f","Type":"ContainerDied","Data":"a39e8c2e4650c114c1a2c1e50a52b07e36707f907cd82d823f5995d50bc8c733"} Dec 10 12:38:37 crc kubenswrapper[4689]: I1210 12:38:37.744622 4689 scope.go:117] "RemoveContainer" containerID="c3accbd3e51c14c887c513588c356294ea81bde9df9cc45cfaec617749468b90" Dec 10 12:38:37 crc kubenswrapper[4689]: I1210 12:38:37.747960 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"11d4f9aa-dff6-4df0-9f6e-ead4097006a0","Type":"ContainerStarted","Data":"290c769f4d76bc18b98cb3eea138b7ac2658633841ccda7d08fd268e5580dd51"} Dec 10 12:38:37 crc kubenswrapper[4689]: I1210 12:38:37.793448 4689 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Dec 10 12:38:37 crc kubenswrapper[4689]: I1210 12:38:37.812676 4689 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Dec 10 12:38:37 crc kubenswrapper[4689]: I1210 12:38:37.846402 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Dec 10 12:38:37 crc kubenswrapper[4689]: E1210 12:38:37.846961 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="434ae256-23eb-408f-8b81-1956600d3c2f" containerName="nova-scheduler-scheduler" Dec 10 12:38:37 crc kubenswrapper[4689]: I1210 12:38:37.847003 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="434ae256-23eb-408f-8b81-1956600d3c2f" containerName="nova-scheduler-scheduler" Dec 10 12:38:37 crc kubenswrapper[4689]: I1210 12:38:37.847243 4689 memory_manager.go:354] "RemoveStaleState removing state" podUID="434ae256-23eb-408f-8b81-1956600d3c2f" containerName="nova-scheduler-scheduler" Dec 10 12:38:37 crc kubenswrapper[4689]: I1210 12:38:37.848044 4689 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Dec 10 12:38:37 crc kubenswrapper[4689]: I1210 12:38:37.852589 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Dec 10 12:38:37 crc kubenswrapper[4689]: I1210 12:38:37.873201 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Dec 10 12:38:37 crc kubenswrapper[4689]: I1210 12:38:37.876858 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/11566d27-371a-412a-a9f0-b147b642f173-config-data\") pod \"nova-scheduler-0\" (UID: \"11566d27-371a-412a-a9f0-b147b642f173\") " pod="openstack/nova-scheduler-0" Dec 10 12:38:37 crc kubenswrapper[4689]: I1210 12:38:37.876990 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/11566d27-371a-412a-a9f0-b147b642f173-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"11566d27-371a-412a-a9f0-b147b642f173\") " pod="openstack/nova-scheduler-0" Dec 10 12:38:37 crc kubenswrapper[4689]: I1210 12:38:37.877061 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pnt88\" (UniqueName: \"kubernetes.io/projected/11566d27-371a-412a-a9f0-b147b642f173-kube-api-access-pnt88\") pod \"nova-scheduler-0\" (UID: \"11566d27-371a-412a-a9f0-b147b642f173\") " pod="openstack/nova-scheduler-0" Dec 10 12:38:37 crc kubenswrapper[4689]: I1210 12:38:37.882397 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Dec 10 12:38:37 crc kubenswrapper[4689]: I1210 12:38:37.978908 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/11566d27-371a-412a-a9f0-b147b642f173-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"11566d27-371a-412a-a9f0-b147b642f173\") " pod="openstack/nova-scheduler-0" Dec 10 12:38:37 crc kubenswrapper[4689]: I1210 12:38:37.979063 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pnt88\" (UniqueName: \"kubernetes.io/projected/11566d27-371a-412a-a9f0-b147b642f173-kube-api-access-pnt88\") pod \"nova-scheduler-0\" (UID: \"11566d27-371a-412a-a9f0-b147b642f173\") " pod="openstack/nova-scheduler-0" Dec 10 12:38:37 crc kubenswrapper[4689]: I1210 12:38:37.979122 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/11566d27-371a-412a-a9f0-b147b642f173-config-data\") pod \"nova-scheduler-0\" (UID: \"11566d27-371a-412a-a9f0-b147b642f173\") " pod="openstack/nova-scheduler-0" Dec 10 12:38:37 crc kubenswrapper[4689]: I1210 12:38:37.985642 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/11566d27-371a-412a-a9f0-b147b642f173-config-data\") pod \"nova-scheduler-0\" (UID: \"11566d27-371a-412a-a9f0-b147b642f173\") " pod="openstack/nova-scheduler-0" Dec 10 12:38:37 crc kubenswrapper[4689]: I1210 12:38:37.985721 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/11566d27-371a-412a-a9f0-b147b642f173-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"11566d27-371a-412a-a9f0-b147b642f173\") " pod="openstack/nova-scheduler-0" Dec 10 12:38:37 crc kubenswrapper[4689]: I1210 
12:38:37.994757 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pnt88\" (UniqueName: \"kubernetes.io/projected/11566d27-371a-412a-a9f0-b147b642f173-kube-api-access-pnt88\") pod \"nova-scheduler-0\" (UID: \"11566d27-371a-412a-a9f0-b147b642f173\") " pod="openstack/nova-scheduler-0" Dec 10 12:38:38 crc kubenswrapper[4689]: I1210 12:38:38.051083 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Dec 10 12:38:38 crc kubenswrapper[4689]: I1210 12:38:38.510486 4689 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="434ae256-23eb-408f-8b81-1956600d3c2f" path="/var/lib/kubelet/pods/434ae256-23eb-408f-8b81-1956600d3c2f/volumes" Dec 10 12:38:38 crc kubenswrapper[4689]: I1210 12:38:38.511275 4689 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="76352913-e683-46d9-962a-74510c4f8dd6" path="/var/lib/kubelet/pods/76352913-e683-46d9-962a-74510c4f8dd6/volumes" Dec 10 12:38:38 crc kubenswrapper[4689]: I1210 12:38:38.511899 4689 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ff93cb1f-aee9-49d9-b6fd-ca96103ff881" path="/var/lib/kubelet/pods/ff93cb1f-aee9-49d9-b6fd-ca96103ff881/volumes" Dec 10 12:38:38 crc kubenswrapper[4689]: W1210 12:38:38.512396 4689 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod11566d27_371a_412a_a9f0_b147b642f173.slice/crio-b4c8628309904e892bf69855e90e42ce21f3cd1dca2ddbd5296f11e12ff2bda4 WatchSource:0}: Error finding container b4c8628309904e892bf69855e90e42ce21f3cd1dca2ddbd5296f11e12ff2bda4: Status 404 returned error can't find the container with id b4c8628309904e892bf69855e90e42ce21f3cd1dca2ddbd5296f11e12ff2bda4 Dec 10 12:38:38 crc kubenswrapper[4689]: I1210 12:38:38.512990 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Dec 10 12:38:38 crc kubenswrapper[4689]: I1210 12:38:38.778564 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"11d4f9aa-dff6-4df0-9f6e-ead4097006a0","Type":"ContainerStarted","Data":"4f1e4b980475599905ae5683fb7129901d7e39a4aa5e91267a3fca91d289c62c"} Dec 10 12:38:38 crc kubenswrapper[4689]: I1210 12:38:38.778904 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"11d4f9aa-dff6-4df0-9f6e-ead4097006a0","Type":"ContainerStarted","Data":"f7ff17c85620a1a16da61ae927b96dc8cf7d9f3b5ba29334d617d9948d24a752"} Dec 10 12:38:38 crc kubenswrapper[4689]: I1210 12:38:38.786579 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"11566d27-371a-412a-a9f0-b147b642f173","Type":"ContainerStarted","Data":"b4c8628309904e892bf69855e90e42ce21f3cd1dca2ddbd5296f11e12ff2bda4"} Dec 10 12:38:38 crc kubenswrapper[4689]: I1210 12:38:38.796578 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"f9ec0888-b017-49bb-b11e-feb543a1db7e","Type":"ContainerStarted","Data":"f84fb558154a5fe0960a197981f32aed6bb83ede6aa0dc051d6b35d2f725c505"} Dec 10 12:38:38 crc kubenswrapper[4689]: I1210 12:38:38.796621 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"f9ec0888-b017-49bb-b11e-feb543a1db7e","Type":"ContainerStarted","Data":"d47b240d0ae27039cd2cf939580200c93de8a70aa0ecf819ed99d04eea1a6e1f"} Dec 10 12:38:38 crc kubenswrapper[4689]: I1210 12:38:38.796631 4689 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openstack/nova-api-0" event={"ID":"f9ec0888-b017-49bb-b11e-feb543a1db7e","Type":"ContainerStarted","Data":"49dba3e02e779ef13d81187e89d6364e5cc25a97c65de1fb54403ce229cf3c71"} Dec 10 12:38:38 crc kubenswrapper[4689]: I1210 12:38:38.813457 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.813431449 podStartE2EDuration="2.813431449s" podCreationTimestamp="2025-12-10 12:38:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 12:38:38.80780985 +0000 UTC m=+1386.595890988" watchObservedRunningTime="2025-12-10 12:38:38.813431449 +0000 UTC m=+1386.601512587" Dec 10 12:38:38 crc kubenswrapper[4689]: I1210 12:38:38.883002 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.882960679 podStartE2EDuration="2.882960679s" podCreationTimestamp="2025-12-10 12:38:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 12:38:38.844988239 +0000 UTC m=+1386.633069377" watchObservedRunningTime="2025-12-10 12:38:38.882960679 +0000 UTC m=+1386.671041817" Dec 10 12:38:39 crc kubenswrapper[4689]: I1210 12:38:39.806480 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"11566d27-371a-412a-a9f0-b147b642f173","Type":"ContainerStarted","Data":"a1c59ded30675ba76a25892d038725b8e6af5209098ee719d9a8a7dab147408f"} Dec 10 12:38:39 crc kubenswrapper[4689]: I1210 12:38:39.823962 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.8239467830000002 podStartE2EDuration="2.823946783s" podCreationTimestamp="2025-12-10 12:38:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 12:38:39.823154494 +0000 UTC m=+1387.611235632" watchObservedRunningTime="2025-12-10 12:38:39.823946783 +0000 UTC m=+1387.612027931" Dec 10 12:38:42 crc kubenswrapper[4689]: I1210 12:38:42.212567 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Dec 10 12:38:42 crc kubenswrapper[4689]: I1210 12:38:42.213196 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Dec 10 12:38:43 crc kubenswrapper[4689]: I1210 12:38:43.051626 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Dec 10 12:38:47 crc kubenswrapper[4689]: I1210 12:38:47.212413 4689 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Dec 10 12:38:47 crc kubenswrapper[4689]: I1210 12:38:47.213451 4689 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Dec 10 12:38:47 crc kubenswrapper[4689]: I1210 12:38:47.229946 4689 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Dec 10 12:38:47 crc kubenswrapper[4689]: I1210 12:38:47.230060 4689 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Dec 10 12:38:47 crc kubenswrapper[4689]: I1210 12:38:47.857337 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Dec 10 12:38:48 crc kubenswrapper[4689]: I1210 
12:38:48.051598 4689 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Dec 10 12:38:48 crc kubenswrapper[4689]: I1210 12:38:48.091902 4689 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Dec 10 12:38:48 crc kubenswrapper[4689]: I1210 12:38:48.230319 4689 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="11d4f9aa-dff6-4df0-9f6e-ead4097006a0" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.207:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Dec 10 12:38:48 crc kubenswrapper[4689]: I1210 12:38:48.230335 4689 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="11d4f9aa-dff6-4df0-9f6e-ead4097006a0" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.207:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Dec 10 12:38:48 crc kubenswrapper[4689]: I1210 12:38:48.278419 4689 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="f9ec0888-b017-49bb-b11e-feb543a1db7e" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.208:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Dec 10 12:38:48 crc kubenswrapper[4689]: I1210 12:38:48.278732 4689 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="f9ec0888-b017-49bb-b11e-feb543a1db7e" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.208:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Dec 10 12:38:48 crc kubenswrapper[4689]: I1210 12:38:48.931919 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Dec 10 12:38:51 crc kubenswrapper[4689]: I1210 12:38:51.204090 4689 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Dec 10 12:38:51 crc kubenswrapper[4689]: I1210 12:38:51.204560 4689 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/kube-state-metrics-0" podUID="b45272bc-38b8-4fa2-8710-57da25792b73" containerName="kube-state-metrics" containerID="cri-o://dafae15abcc69d9d05c30417a2fc9025c74b56325238c67067b9e388e2efa6a9" gracePeriod=30 Dec 10 12:38:51 crc kubenswrapper[4689]: I1210 12:38:51.771079 4689 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Dec 10 12:38:51 crc kubenswrapper[4689]: I1210 12:38:51.800267 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lnnnq\" (UniqueName: \"kubernetes.io/projected/b45272bc-38b8-4fa2-8710-57da25792b73-kube-api-access-lnnnq\") pod \"b45272bc-38b8-4fa2-8710-57da25792b73\" (UID: \"b45272bc-38b8-4fa2-8710-57da25792b73\") " Dec 10 12:38:51 crc kubenswrapper[4689]: I1210 12:38:51.808166 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b45272bc-38b8-4fa2-8710-57da25792b73-kube-api-access-lnnnq" (OuterVolumeSpecName: "kube-api-access-lnnnq") pod "b45272bc-38b8-4fa2-8710-57da25792b73" (UID: "b45272bc-38b8-4fa2-8710-57da25792b73"). InnerVolumeSpecName "kube-api-access-lnnnq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 12:38:51 crc kubenswrapper[4689]: I1210 12:38:51.902438 4689 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lnnnq\" (UniqueName: \"kubernetes.io/projected/b45272bc-38b8-4fa2-8710-57da25792b73-kube-api-access-lnnnq\") on node \"crc\" DevicePath \"\"" Dec 10 12:38:51 crc kubenswrapper[4689]: I1210 12:38:51.930717 4689 generic.go:334] "Generic (PLEG): container finished" podID="b45272bc-38b8-4fa2-8710-57da25792b73" containerID="dafae15abcc69d9d05c30417a2fc9025c74b56325238c67067b9e388e2efa6a9" exitCode=2 Dec 10 12:38:51 crc kubenswrapper[4689]: I1210 12:38:51.930767 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"b45272bc-38b8-4fa2-8710-57da25792b73","Type":"ContainerDied","Data":"dafae15abcc69d9d05c30417a2fc9025c74b56325238c67067b9e388e2efa6a9"} Dec 10 12:38:51 crc kubenswrapper[4689]: I1210 12:38:51.930801 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"b45272bc-38b8-4fa2-8710-57da25792b73","Type":"ContainerDied","Data":"c6f2db6e83ef659d996806762a65b30c10c7db315915a4f75245eb7a40d3af49"} Dec 10 12:38:51 crc kubenswrapper[4689]: I1210 12:38:51.930815 4689 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Dec 10 12:38:51 crc kubenswrapper[4689]: I1210 12:38:51.930824 4689 scope.go:117] "RemoveContainer" containerID="dafae15abcc69d9d05c30417a2fc9025c74b56325238c67067b9e388e2efa6a9" Dec 10 12:38:51 crc kubenswrapper[4689]: I1210 12:38:51.960719 4689 scope.go:117] "RemoveContainer" containerID="dafae15abcc69d9d05c30417a2fc9025c74b56325238c67067b9e388e2efa6a9" Dec 10 12:38:51 crc kubenswrapper[4689]: E1210 12:38:51.961238 4689 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dafae15abcc69d9d05c30417a2fc9025c74b56325238c67067b9e388e2efa6a9\": container with ID starting with dafae15abcc69d9d05c30417a2fc9025c74b56325238c67067b9e388e2efa6a9 not found: ID does not exist" containerID="dafae15abcc69d9d05c30417a2fc9025c74b56325238c67067b9e388e2efa6a9" Dec 10 12:38:51 crc kubenswrapper[4689]: I1210 12:38:51.961284 4689 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dafae15abcc69d9d05c30417a2fc9025c74b56325238c67067b9e388e2efa6a9"} err="failed to get container status \"dafae15abcc69d9d05c30417a2fc9025c74b56325238c67067b9e388e2efa6a9\": rpc error: code = NotFound desc = could not find container \"dafae15abcc69d9d05c30417a2fc9025c74b56325238c67067b9e388e2efa6a9\": container with ID starting with dafae15abcc69d9d05c30417a2fc9025c74b56325238c67067b9e388e2efa6a9 not found: ID does not exist" Dec 10 12:38:51 crc kubenswrapper[4689]: I1210 12:38:51.972183 4689 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Dec 10 12:38:51 crc kubenswrapper[4689]: I1210 12:38:51.977422 4689 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/kube-state-metrics-0"] Dec 10 12:38:52 crc kubenswrapper[4689]: I1210 12:38:52.001295 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Dec 10 12:38:52 crc kubenswrapper[4689]: E1210 12:38:52.001807 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b45272bc-38b8-4fa2-8710-57da25792b73" containerName="kube-state-metrics" Dec 10 12:38:52 crc kubenswrapper[4689]: I1210 12:38:52.001831 4689 
state_mem.go:107] "Deleted CPUSet assignment" podUID="b45272bc-38b8-4fa2-8710-57da25792b73" containerName="kube-state-metrics" Dec 10 12:38:52 crc kubenswrapper[4689]: I1210 12:38:52.002062 4689 memory_manager.go:354] "RemoveStaleState removing state" podUID="b45272bc-38b8-4fa2-8710-57da25792b73" containerName="kube-state-metrics" Dec 10 12:38:52 crc kubenswrapper[4689]: I1210 12:38:52.002741 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Dec 10 12:38:52 crc kubenswrapper[4689]: I1210 12:38:52.019539 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-kube-state-metrics-svc" Dec 10 12:38:52 crc kubenswrapper[4689]: I1210 12:38:52.019779 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"kube-state-metrics-tls-config" Dec 10 12:38:52 crc kubenswrapper[4689]: I1210 12:38:52.025790 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Dec 10 12:38:52 crc kubenswrapper[4689]: I1210 12:38:52.105652 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/9e13ec23-1267-498e-9d74-fcfc8aee69b6-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"9e13ec23-1267-498e-9d74-fcfc8aee69b6\") " pod="openstack/kube-state-metrics-0" Dec 10 12:38:52 crc kubenswrapper[4689]: I1210 12:38:52.105739 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9e13ec23-1267-498e-9d74-fcfc8aee69b6-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"9e13ec23-1267-498e-9d74-fcfc8aee69b6\") " pod="openstack/kube-state-metrics-0" Dec 10 12:38:52 crc kubenswrapper[4689]: I1210 12:38:52.105811 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q4ldp\" (UniqueName: \"kubernetes.io/projected/9e13ec23-1267-498e-9d74-fcfc8aee69b6-kube-api-access-q4ldp\") pod \"kube-state-metrics-0\" (UID: \"9e13ec23-1267-498e-9d74-fcfc8aee69b6\") " pod="openstack/kube-state-metrics-0" Dec 10 12:38:52 crc kubenswrapper[4689]: I1210 12:38:52.105844 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/9e13ec23-1267-498e-9d74-fcfc8aee69b6-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"9e13ec23-1267-498e-9d74-fcfc8aee69b6\") " pod="openstack/kube-state-metrics-0" Dec 10 12:38:52 crc kubenswrapper[4689]: I1210 12:38:52.206990 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/9e13ec23-1267-498e-9d74-fcfc8aee69b6-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"9e13ec23-1267-498e-9d74-fcfc8aee69b6\") " pod="openstack/kube-state-metrics-0" Dec 10 12:38:52 crc kubenswrapper[4689]: I1210 12:38:52.207054 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9e13ec23-1267-498e-9d74-fcfc8aee69b6-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"9e13ec23-1267-498e-9d74-fcfc8aee69b6\") " pod="openstack/kube-state-metrics-0" Dec 10 12:38:52 crc kubenswrapper[4689]: I1210 12:38:52.207121 4689 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"kube-api-access-q4ldp\" (UniqueName: \"kubernetes.io/projected/9e13ec23-1267-498e-9d74-fcfc8aee69b6-kube-api-access-q4ldp\") pod \"kube-state-metrics-0\" (UID: \"9e13ec23-1267-498e-9d74-fcfc8aee69b6\") " pod="openstack/kube-state-metrics-0" Dec 10 12:38:52 crc kubenswrapper[4689]: I1210 12:38:52.207160 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/9e13ec23-1267-498e-9d74-fcfc8aee69b6-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"9e13ec23-1267-498e-9d74-fcfc8aee69b6\") " pod="openstack/kube-state-metrics-0" Dec 10 12:38:52 crc kubenswrapper[4689]: I1210 12:38:52.211368 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9e13ec23-1267-498e-9d74-fcfc8aee69b6-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"9e13ec23-1267-498e-9d74-fcfc8aee69b6\") " pod="openstack/kube-state-metrics-0" Dec 10 12:38:52 crc kubenswrapper[4689]: I1210 12:38:52.211600 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/9e13ec23-1267-498e-9d74-fcfc8aee69b6-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"9e13ec23-1267-498e-9d74-fcfc8aee69b6\") " pod="openstack/kube-state-metrics-0" Dec 10 12:38:52 crc kubenswrapper[4689]: I1210 12:38:52.220435 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/9e13ec23-1267-498e-9d74-fcfc8aee69b6-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"9e13ec23-1267-498e-9d74-fcfc8aee69b6\") " pod="openstack/kube-state-metrics-0" Dec 10 12:38:52 crc kubenswrapper[4689]: I1210 12:38:52.226433 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q4ldp\" (UniqueName: \"kubernetes.io/projected/9e13ec23-1267-498e-9d74-fcfc8aee69b6-kube-api-access-q4ldp\") pod \"kube-state-metrics-0\" (UID: \"9e13ec23-1267-498e-9d74-fcfc8aee69b6\") " pod="openstack/kube-state-metrics-0" Dec 10 12:38:52 crc kubenswrapper[4689]: I1210 12:38:52.336706 4689 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Dec 10 12:38:52 crc kubenswrapper[4689]: I1210 12:38:52.525616 4689 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b45272bc-38b8-4fa2-8710-57da25792b73" path="/var/lib/kubelet/pods/b45272bc-38b8-4fa2-8710-57da25792b73/volumes" Dec 10 12:38:52 crc kubenswrapper[4689]: W1210 12:38:52.630751 4689 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9e13ec23_1267_498e_9d74_fcfc8aee69b6.slice/crio-97a2842c044fc7333c99e14f2f73e1d683ce937fb8427182c92e31f2c4c7ca5d WatchSource:0}: Error finding container 97a2842c044fc7333c99e14f2f73e1d683ce937fb8427182c92e31f2c4c7ca5d: Status 404 returned error can't find the container with id 97a2842c044fc7333c99e14f2f73e1d683ce937fb8427182c92e31f2c4c7ca5d Dec 10 12:38:52 crc kubenswrapper[4689]: I1210 12:38:52.631687 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Dec 10 12:38:52 crc kubenswrapper[4689]: I1210 12:38:52.942110 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"9e13ec23-1267-498e-9d74-fcfc8aee69b6","Type":"ContainerStarted","Data":"97a2842c044fc7333c99e14f2f73e1d683ce937fb8427182c92e31f2c4c7ca5d"} Dec 10 12:38:53 crc kubenswrapper[4689]: I1210 12:38:53.012259 4689 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 10 12:38:53 crc kubenswrapper[4689]: I1210 12:38:53.012665 4689 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="4afb5506-86b9-45fc-9a95-a0336b141fdf" containerName="sg-core" containerID="cri-o://fe76e003eb9df787fc8f5c7305367b5f2944cb351553c59f3e08cb941a45de8d" gracePeriod=30 Dec 10 12:38:53 crc kubenswrapper[4689]: I1210 12:38:53.012677 4689 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="4afb5506-86b9-45fc-9a95-a0336b141fdf" containerName="ceilometer-notification-agent" containerID="cri-o://c6d4e0648bb894bd1439e384a23e5c1d30fc6435f3b1b7f847fbfc73e888ecf7" gracePeriod=30 Dec 10 12:38:53 crc kubenswrapper[4689]: I1210 12:38:53.012676 4689 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="4afb5506-86b9-45fc-9a95-a0336b141fdf" containerName="proxy-httpd" containerID="cri-o://606c1f90985c57574aa73e9820a13f30555040f6c2438c9450cbad7c37324ea3" gracePeriod=30 Dec 10 12:38:53 crc kubenswrapper[4689]: I1210 12:38:53.012885 4689 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="4afb5506-86b9-45fc-9a95-a0336b141fdf" containerName="ceilometer-central-agent" containerID="cri-o://71366f32f1e3594b2d26845c8b2c3ad5847b3af34092cbdf610384ffcc7994d9" gracePeriod=30 Dec 10 12:38:53 crc kubenswrapper[4689]: I1210 12:38:53.956528 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"9e13ec23-1267-498e-9d74-fcfc8aee69b6","Type":"ContainerStarted","Data":"d145102d33a9a40ac5e9353e06e1f38f0f86af3cf082ff5f940af4aa81e9b805"} Dec 10 12:38:53 crc kubenswrapper[4689]: I1210 12:38:53.956910 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Dec 10 12:38:53 crc kubenswrapper[4689]: I1210 12:38:53.963404 4689 generic.go:334] "Generic (PLEG): container finished" podID="4afb5506-86b9-45fc-9a95-a0336b141fdf" 
containerID="606c1f90985c57574aa73e9820a13f30555040f6c2438c9450cbad7c37324ea3" exitCode=0 Dec 10 12:38:53 crc kubenswrapper[4689]: I1210 12:38:53.964405 4689 generic.go:334] "Generic (PLEG): container finished" podID="4afb5506-86b9-45fc-9a95-a0336b141fdf" containerID="fe76e003eb9df787fc8f5c7305367b5f2944cb351553c59f3e08cb941a45de8d" exitCode=2 Dec 10 12:38:53 crc kubenswrapper[4689]: I1210 12:38:53.964458 4689 generic.go:334] "Generic (PLEG): container finished" podID="4afb5506-86b9-45fc-9a95-a0336b141fdf" containerID="71366f32f1e3594b2d26845c8b2c3ad5847b3af34092cbdf610384ffcc7994d9" exitCode=0 Dec 10 12:38:53 crc kubenswrapper[4689]: I1210 12:38:53.964499 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4afb5506-86b9-45fc-9a95-a0336b141fdf","Type":"ContainerDied","Data":"606c1f90985c57574aa73e9820a13f30555040f6c2438c9450cbad7c37324ea3"} Dec 10 12:38:53 crc kubenswrapper[4689]: I1210 12:38:53.964538 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4afb5506-86b9-45fc-9a95-a0336b141fdf","Type":"ContainerDied","Data":"fe76e003eb9df787fc8f5c7305367b5f2944cb351553c59f3e08cb941a45de8d"} Dec 10 12:38:53 crc kubenswrapper[4689]: I1210 12:38:53.964559 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4afb5506-86b9-45fc-9a95-a0336b141fdf","Type":"ContainerDied","Data":"71366f32f1e3594b2d26845c8b2c3ad5847b3af34092cbdf610384ffcc7994d9"} Dec 10 12:38:53 crc kubenswrapper[4689]: I1210 12:38:53.976835 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=2.50230205 podStartE2EDuration="2.976818622s" podCreationTimestamp="2025-12-10 12:38:51 +0000 UTC" firstStartedPulling="2025-12-10 12:38:52.633503941 +0000 UTC m=+1400.421585089" lastFinishedPulling="2025-12-10 12:38:53.108020523 +0000 UTC m=+1400.896101661" observedRunningTime="2025-12-10 12:38:53.972902954 +0000 UTC m=+1401.760984092" watchObservedRunningTime="2025-12-10 12:38:53.976818622 +0000 UTC m=+1401.764899750" Dec 10 12:38:56 crc kubenswrapper[4689]: I1210 12:38:56.559639 4689 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/kube-state-metrics-0" podUID="b45272bc-38b8-4fa2-8710-57da25792b73" containerName="kube-state-metrics" probeResult="failure" output="Get \"http://10.217.0.104:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 10 12:38:56 crc kubenswrapper[4689]: I1210 12:38:56.999104 4689 generic.go:334] "Generic (PLEG): container finished" podID="4afb5506-86b9-45fc-9a95-a0336b141fdf" containerID="c6d4e0648bb894bd1439e384a23e5c1d30fc6435f3b1b7f847fbfc73e888ecf7" exitCode=0 Dec 10 12:38:56 crc kubenswrapper[4689]: I1210 12:38:56.999170 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4afb5506-86b9-45fc-9a95-a0336b141fdf","Type":"ContainerDied","Data":"c6d4e0648bb894bd1439e384a23e5c1d30fc6435f3b1b7f847fbfc73e888ecf7"} Dec 10 12:38:57 crc kubenswrapper[4689]: I1210 12:38:57.223960 4689 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Dec 10 12:38:57 crc kubenswrapper[4689]: I1210 12:38:57.224393 4689 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Dec 10 12:38:57 crc kubenswrapper[4689]: I1210 12:38:57.231401 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Dec 
10 12:38:57 crc kubenswrapper[4689]: I1210 12:38:57.234132 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Dec 10 12:38:57 crc kubenswrapper[4689]: I1210 12:38:57.239071 4689 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Dec 10 12:38:57 crc kubenswrapper[4689]: I1210 12:38:57.239822 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Dec 10 12:38:57 crc kubenswrapper[4689]: I1210 12:38:57.244456 4689 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Dec 10 12:38:57 crc kubenswrapper[4689]: I1210 12:38:57.249400 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Dec 10 12:38:57 crc kubenswrapper[4689]: I1210 12:38:57.433829 4689 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 10 12:38:57 crc kubenswrapper[4689]: I1210 12:38:57.612790 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4afb5506-86b9-45fc-9a95-a0336b141fdf-run-httpd\") pod \"4afb5506-86b9-45fc-9a95-a0336b141fdf\" (UID: \"4afb5506-86b9-45fc-9a95-a0336b141fdf\") " Dec 10 12:38:57 crc kubenswrapper[4689]: I1210 12:38:57.612886 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/4afb5506-86b9-45fc-9a95-a0336b141fdf-sg-core-conf-yaml\") pod \"4afb5506-86b9-45fc-9a95-a0336b141fdf\" (UID: \"4afb5506-86b9-45fc-9a95-a0336b141fdf\") " Dec 10 12:38:57 crc kubenswrapper[4689]: I1210 12:38:57.613003 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4afb5506-86b9-45fc-9a95-a0336b141fdf-config-data\") pod \"4afb5506-86b9-45fc-9a95-a0336b141fdf\" (UID: \"4afb5506-86b9-45fc-9a95-a0336b141fdf\") " Dec 10 12:38:57 crc kubenswrapper[4689]: I1210 12:38:57.613044 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4afb5506-86b9-45fc-9a95-a0336b141fdf-combined-ca-bundle\") pod \"4afb5506-86b9-45fc-9a95-a0336b141fdf\" (UID: \"4afb5506-86b9-45fc-9a95-a0336b141fdf\") " Dec 10 12:38:57 crc kubenswrapper[4689]: I1210 12:38:57.613105 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4afb5506-86b9-45fc-9a95-a0336b141fdf-log-httpd\") pod \"4afb5506-86b9-45fc-9a95-a0336b141fdf\" (UID: \"4afb5506-86b9-45fc-9a95-a0336b141fdf\") " Dec 10 12:38:57 crc kubenswrapper[4689]: I1210 12:38:57.613149 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-df75z\" (UniqueName: \"kubernetes.io/projected/4afb5506-86b9-45fc-9a95-a0336b141fdf-kube-api-access-df75z\") pod \"4afb5506-86b9-45fc-9a95-a0336b141fdf\" (UID: \"4afb5506-86b9-45fc-9a95-a0336b141fdf\") " Dec 10 12:38:57 crc kubenswrapper[4689]: I1210 12:38:57.613191 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4afb5506-86b9-45fc-9a95-a0336b141fdf-scripts\") pod \"4afb5506-86b9-45fc-9a95-a0336b141fdf\" (UID: \"4afb5506-86b9-45fc-9a95-a0336b141fdf\") " Dec 10 12:38:57 crc kubenswrapper[4689]: I1210 12:38:57.613961 4689 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4afb5506-86b9-45fc-9a95-a0336b141fdf-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "4afb5506-86b9-45fc-9a95-a0336b141fdf" (UID: "4afb5506-86b9-45fc-9a95-a0336b141fdf"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 10 12:38:57 crc kubenswrapper[4689]: I1210 12:38:57.614183 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4afb5506-86b9-45fc-9a95-a0336b141fdf-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "4afb5506-86b9-45fc-9a95-a0336b141fdf" (UID: "4afb5506-86b9-45fc-9a95-a0336b141fdf"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 10 12:38:57 crc kubenswrapper[4689]: I1210 12:38:57.618483 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4afb5506-86b9-45fc-9a95-a0336b141fdf-scripts" (OuterVolumeSpecName: "scripts") pod "4afb5506-86b9-45fc-9a95-a0336b141fdf" (UID: "4afb5506-86b9-45fc-9a95-a0336b141fdf"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 12:38:57 crc kubenswrapper[4689]: I1210 12:38:57.625660 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4afb5506-86b9-45fc-9a95-a0336b141fdf-kube-api-access-df75z" (OuterVolumeSpecName: "kube-api-access-df75z") pod "4afb5506-86b9-45fc-9a95-a0336b141fdf" (UID: "4afb5506-86b9-45fc-9a95-a0336b141fdf"). InnerVolumeSpecName "kube-api-access-df75z". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 12:38:57 crc kubenswrapper[4689]: I1210 12:38:57.653149 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4afb5506-86b9-45fc-9a95-a0336b141fdf-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "4afb5506-86b9-45fc-9a95-a0336b141fdf" (UID: "4afb5506-86b9-45fc-9a95-a0336b141fdf"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 12:38:57 crc kubenswrapper[4689]: I1210 12:38:57.697155 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4afb5506-86b9-45fc-9a95-a0336b141fdf-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4afb5506-86b9-45fc-9a95-a0336b141fdf" (UID: "4afb5506-86b9-45fc-9a95-a0336b141fdf"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 12:38:57 crc kubenswrapper[4689]: I1210 12:38:57.715676 4689 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4afb5506-86b9-45fc-9a95-a0336b141fdf-run-httpd\") on node \"crc\" DevicePath \"\"" Dec 10 12:38:57 crc kubenswrapper[4689]: I1210 12:38:57.715715 4689 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/4afb5506-86b9-45fc-9a95-a0336b141fdf-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Dec 10 12:38:57 crc kubenswrapper[4689]: I1210 12:38:57.715729 4689 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4afb5506-86b9-45fc-9a95-a0336b141fdf-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 10 12:38:57 crc kubenswrapper[4689]: I1210 12:38:57.715741 4689 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4afb5506-86b9-45fc-9a95-a0336b141fdf-log-httpd\") on node \"crc\" DevicePath \"\"" Dec 10 12:38:57 crc kubenswrapper[4689]: I1210 12:38:57.715752 4689 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-df75z\" (UniqueName: \"kubernetes.io/projected/4afb5506-86b9-45fc-9a95-a0336b141fdf-kube-api-access-df75z\") on node \"crc\" DevicePath \"\"" Dec 10 12:38:57 crc kubenswrapper[4689]: I1210 12:38:57.715762 4689 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4afb5506-86b9-45fc-9a95-a0336b141fdf-scripts\") on node \"crc\" DevicePath \"\"" Dec 10 12:38:57 crc kubenswrapper[4689]: I1210 12:38:57.739627 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4afb5506-86b9-45fc-9a95-a0336b141fdf-config-data" (OuterVolumeSpecName: "config-data") pod "4afb5506-86b9-45fc-9a95-a0336b141fdf" (UID: "4afb5506-86b9-45fc-9a95-a0336b141fdf"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 12:38:57 crc kubenswrapper[4689]: I1210 12:38:57.816855 4689 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4afb5506-86b9-45fc-9a95-a0336b141fdf-config-data\") on node \"crc\" DevicePath \"\"" Dec 10 12:38:58 crc kubenswrapper[4689]: I1210 12:38:58.012931 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4afb5506-86b9-45fc-9a95-a0336b141fdf","Type":"ContainerDied","Data":"27bf0af067b51feb36723c5fec783ad75f5d66402ed478cf667d46c4bf142520"} Dec 10 12:38:58 crc kubenswrapper[4689]: I1210 12:38:58.012994 4689 scope.go:117] "RemoveContainer" containerID="606c1f90985c57574aa73e9820a13f30555040f6c2438c9450cbad7c37324ea3" Dec 10 12:38:58 crc kubenswrapper[4689]: I1210 12:38:58.013049 4689 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 10 12:38:58 crc kubenswrapper[4689]: I1210 12:38:58.013643 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Dec 10 12:38:58 crc kubenswrapper[4689]: I1210 12:38:58.023099 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Dec 10 12:38:58 crc kubenswrapper[4689]: I1210 12:38:58.102161 4689 scope.go:117] "RemoveContainer" containerID="fe76e003eb9df787fc8f5c7305367b5f2944cb351553c59f3e08cb941a45de8d" Dec 10 12:38:58 crc kubenswrapper[4689]: I1210 12:38:58.111253 4689 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 10 12:38:58 crc kubenswrapper[4689]: I1210 12:38:58.118219 4689 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Dec 10 12:38:58 crc kubenswrapper[4689]: I1210 12:38:58.128664 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Dec 10 12:38:58 crc kubenswrapper[4689]: E1210 12:38:58.129185 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4afb5506-86b9-45fc-9a95-a0336b141fdf" containerName="proxy-httpd" Dec 10 12:38:58 crc kubenswrapper[4689]: I1210 12:38:58.129202 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="4afb5506-86b9-45fc-9a95-a0336b141fdf" containerName="proxy-httpd" Dec 10 12:38:58 crc kubenswrapper[4689]: E1210 12:38:58.129225 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4afb5506-86b9-45fc-9a95-a0336b141fdf" containerName="ceilometer-central-agent" Dec 10 12:38:58 crc kubenswrapper[4689]: I1210 12:38:58.129233 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="4afb5506-86b9-45fc-9a95-a0336b141fdf" containerName="ceilometer-central-agent" Dec 10 12:38:58 crc kubenswrapper[4689]: E1210 12:38:58.129242 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4afb5506-86b9-45fc-9a95-a0336b141fdf" containerName="sg-core" Dec 10 12:38:58 crc kubenswrapper[4689]: I1210 12:38:58.129249 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="4afb5506-86b9-45fc-9a95-a0336b141fdf" containerName="sg-core" Dec 10 12:38:58 crc kubenswrapper[4689]: E1210 12:38:58.129274 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4afb5506-86b9-45fc-9a95-a0336b141fdf" containerName="ceilometer-notification-agent" Dec 10 12:38:58 crc kubenswrapper[4689]: I1210 12:38:58.129282 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="4afb5506-86b9-45fc-9a95-a0336b141fdf" containerName="ceilometer-notification-agent" Dec 10 12:38:58 crc kubenswrapper[4689]: I1210 12:38:58.129507 4689 memory_manager.go:354] "RemoveStaleState removing state" podUID="4afb5506-86b9-45fc-9a95-a0336b141fdf" containerName="sg-core" Dec 10 12:38:58 crc kubenswrapper[4689]: I1210 12:38:58.129537 4689 memory_manager.go:354] "RemoveStaleState removing state" podUID="4afb5506-86b9-45fc-9a95-a0336b141fdf" containerName="ceilometer-notification-agent" Dec 10 12:38:58 crc kubenswrapper[4689]: I1210 12:38:58.129555 4689 memory_manager.go:354] "RemoveStaleState removing state" podUID="4afb5506-86b9-45fc-9a95-a0336b141fdf" containerName="proxy-httpd" Dec 10 12:38:58 crc kubenswrapper[4689]: I1210 12:38:58.129569 4689 memory_manager.go:354] "RemoveStaleState removing state" podUID="4afb5506-86b9-45fc-9a95-a0336b141fdf" containerName="ceilometer-central-agent" Dec 10 12:38:58 crc kubenswrapper[4689]: I1210 12:38:58.132573 4689 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 10 12:38:58 crc kubenswrapper[4689]: I1210 12:38:58.133747 4689 scope.go:117] "RemoveContainer" containerID="c6d4e0648bb894bd1439e384a23e5c1d30fc6435f3b1b7f847fbfc73e888ecf7" Dec 10 12:38:58 crc kubenswrapper[4689]: I1210 12:38:58.134546 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Dec 10 12:38:58 crc kubenswrapper[4689]: I1210 12:38:58.136395 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 10 12:38:58 crc kubenswrapper[4689]: I1210 12:38:58.137232 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Dec 10 12:38:58 crc kubenswrapper[4689]: I1210 12:38:58.141731 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Dec 10 12:38:58 crc kubenswrapper[4689]: I1210 12:38:58.184615 4689 scope.go:117] "RemoveContainer" containerID="71366f32f1e3594b2d26845c8b2c3ad5847b3af34092cbdf610384ffcc7994d9" Dec 10 12:38:58 crc kubenswrapper[4689]: I1210 12:38:58.237621 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b9c7f257-7dbf-4a84-8b3f-060db6f93454-run-httpd\") pod \"ceilometer-0\" (UID: \"b9c7f257-7dbf-4a84-8b3f-060db6f93454\") " pod="openstack/ceilometer-0" Dec 10 12:38:58 crc kubenswrapper[4689]: I1210 12:38:58.237666 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b9c7f257-7dbf-4a84-8b3f-060db6f93454-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"b9c7f257-7dbf-4a84-8b3f-060db6f93454\") " pod="openstack/ceilometer-0" Dec 10 12:38:58 crc kubenswrapper[4689]: I1210 12:38:58.237685 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b9c7f257-7dbf-4a84-8b3f-060db6f93454-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"b9c7f257-7dbf-4a84-8b3f-060db6f93454\") " pod="openstack/ceilometer-0" Dec 10 12:38:58 crc kubenswrapper[4689]: I1210 12:38:58.237706 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b9c7f257-7dbf-4a84-8b3f-060db6f93454-config-data\") pod \"ceilometer-0\" (UID: \"b9c7f257-7dbf-4a84-8b3f-060db6f93454\") " pod="openstack/ceilometer-0" Dec 10 12:38:58 crc kubenswrapper[4689]: I1210 12:38:58.237729 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b9c7f257-7dbf-4a84-8b3f-060db6f93454-scripts\") pod \"ceilometer-0\" (UID: \"b9c7f257-7dbf-4a84-8b3f-060db6f93454\") " pod="openstack/ceilometer-0" Dec 10 12:38:58 crc kubenswrapper[4689]: I1210 12:38:58.237745 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b9c7f257-7dbf-4a84-8b3f-060db6f93454-log-httpd\") pod \"ceilometer-0\" (UID: \"b9c7f257-7dbf-4a84-8b3f-060db6f93454\") " pod="openstack/ceilometer-0" Dec 10 12:38:58 crc kubenswrapper[4689]: I1210 12:38:58.237831 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b66l5\" (UniqueName: 
\"kubernetes.io/projected/b9c7f257-7dbf-4a84-8b3f-060db6f93454-kube-api-access-b66l5\") pod \"ceilometer-0\" (UID: \"b9c7f257-7dbf-4a84-8b3f-060db6f93454\") " pod="openstack/ceilometer-0" Dec 10 12:38:58 crc kubenswrapper[4689]: I1210 12:38:58.237889 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/b9c7f257-7dbf-4a84-8b3f-060db6f93454-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"b9c7f257-7dbf-4a84-8b3f-060db6f93454\") " pod="openstack/ceilometer-0" Dec 10 12:38:58 crc kubenswrapper[4689]: I1210 12:38:58.339315 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b66l5\" (UniqueName: \"kubernetes.io/projected/b9c7f257-7dbf-4a84-8b3f-060db6f93454-kube-api-access-b66l5\") pod \"ceilometer-0\" (UID: \"b9c7f257-7dbf-4a84-8b3f-060db6f93454\") " pod="openstack/ceilometer-0" Dec 10 12:38:58 crc kubenswrapper[4689]: I1210 12:38:58.339377 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/b9c7f257-7dbf-4a84-8b3f-060db6f93454-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"b9c7f257-7dbf-4a84-8b3f-060db6f93454\") " pod="openstack/ceilometer-0" Dec 10 12:38:58 crc kubenswrapper[4689]: I1210 12:38:58.339427 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b9c7f257-7dbf-4a84-8b3f-060db6f93454-run-httpd\") pod \"ceilometer-0\" (UID: \"b9c7f257-7dbf-4a84-8b3f-060db6f93454\") " pod="openstack/ceilometer-0" Dec 10 12:38:58 crc kubenswrapper[4689]: I1210 12:38:58.339450 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b9c7f257-7dbf-4a84-8b3f-060db6f93454-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"b9c7f257-7dbf-4a84-8b3f-060db6f93454\") " pod="openstack/ceilometer-0" Dec 10 12:38:58 crc kubenswrapper[4689]: I1210 12:38:58.339466 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b9c7f257-7dbf-4a84-8b3f-060db6f93454-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"b9c7f257-7dbf-4a84-8b3f-060db6f93454\") " pod="openstack/ceilometer-0" Dec 10 12:38:58 crc kubenswrapper[4689]: I1210 12:38:58.339487 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b9c7f257-7dbf-4a84-8b3f-060db6f93454-config-data\") pod \"ceilometer-0\" (UID: \"b9c7f257-7dbf-4a84-8b3f-060db6f93454\") " pod="openstack/ceilometer-0" Dec 10 12:38:58 crc kubenswrapper[4689]: I1210 12:38:58.339505 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b9c7f257-7dbf-4a84-8b3f-060db6f93454-scripts\") pod \"ceilometer-0\" (UID: \"b9c7f257-7dbf-4a84-8b3f-060db6f93454\") " pod="openstack/ceilometer-0" Dec 10 12:38:58 crc kubenswrapper[4689]: I1210 12:38:58.339520 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b9c7f257-7dbf-4a84-8b3f-060db6f93454-log-httpd\") pod \"ceilometer-0\" (UID: \"b9c7f257-7dbf-4a84-8b3f-060db6f93454\") " pod="openstack/ceilometer-0" Dec 10 12:38:58 crc kubenswrapper[4689]: I1210 12:38:58.339960 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b9c7f257-7dbf-4a84-8b3f-060db6f93454-log-httpd\") pod \"ceilometer-0\" (UID: \"b9c7f257-7dbf-4a84-8b3f-060db6f93454\") " pod="openstack/ceilometer-0" Dec 10 12:38:58 crc kubenswrapper[4689]: I1210 12:38:58.340247 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b9c7f257-7dbf-4a84-8b3f-060db6f93454-run-httpd\") pod \"ceilometer-0\" (UID: \"b9c7f257-7dbf-4a84-8b3f-060db6f93454\") " pod="openstack/ceilometer-0" Dec 10 12:38:58 crc kubenswrapper[4689]: I1210 12:38:58.343310 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/b9c7f257-7dbf-4a84-8b3f-060db6f93454-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"b9c7f257-7dbf-4a84-8b3f-060db6f93454\") " pod="openstack/ceilometer-0" Dec 10 12:38:58 crc kubenswrapper[4689]: I1210 12:38:58.343331 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b9c7f257-7dbf-4a84-8b3f-060db6f93454-scripts\") pod \"ceilometer-0\" (UID: \"b9c7f257-7dbf-4a84-8b3f-060db6f93454\") " pod="openstack/ceilometer-0" Dec 10 12:38:58 crc kubenswrapper[4689]: I1210 12:38:58.344249 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b9c7f257-7dbf-4a84-8b3f-060db6f93454-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"b9c7f257-7dbf-4a84-8b3f-060db6f93454\") " pod="openstack/ceilometer-0" Dec 10 12:38:58 crc kubenswrapper[4689]: I1210 12:38:58.344449 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b9c7f257-7dbf-4a84-8b3f-060db6f93454-config-data\") pod \"ceilometer-0\" (UID: \"b9c7f257-7dbf-4a84-8b3f-060db6f93454\") " pod="openstack/ceilometer-0" Dec 10 12:38:58 crc kubenswrapper[4689]: I1210 12:38:58.349339 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b9c7f257-7dbf-4a84-8b3f-060db6f93454-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"b9c7f257-7dbf-4a84-8b3f-060db6f93454\") " pod="openstack/ceilometer-0" Dec 10 12:38:58 crc kubenswrapper[4689]: I1210 12:38:58.354485 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b66l5\" (UniqueName: \"kubernetes.io/projected/b9c7f257-7dbf-4a84-8b3f-060db6f93454-kube-api-access-b66l5\") pod \"ceilometer-0\" (UID: \"b9c7f257-7dbf-4a84-8b3f-060db6f93454\") " pod="openstack/ceilometer-0" Dec 10 12:38:58 crc kubenswrapper[4689]: I1210 12:38:58.465879 4689 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 10 12:38:58 crc kubenswrapper[4689]: I1210 12:38:58.512118 4689 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4afb5506-86b9-45fc-9a95-a0336b141fdf" path="/var/lib/kubelet/pods/4afb5506-86b9-45fc-9a95-a0336b141fdf/volumes" Dec 10 12:38:58 crc kubenswrapper[4689]: I1210 12:38:58.930354 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 10 12:38:59 crc kubenswrapper[4689]: I1210 12:38:59.027654 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b9c7f257-7dbf-4a84-8b3f-060db6f93454","Type":"ContainerStarted","Data":"9d9a3d55150111873c174226945ab6218487a58432c523b62fedd9bc3405a040"} Dec 10 12:39:00 crc kubenswrapper[4689]: I1210 12:39:00.040504 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b9c7f257-7dbf-4a84-8b3f-060db6f93454","Type":"ContainerStarted","Data":"31cf8a3856b75e5fd12bd3924466648346e1b0fdde1cde1867cff9c969235d21"} Dec 10 12:39:02 crc kubenswrapper[4689]: I1210 12:39:02.060756 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b9c7f257-7dbf-4a84-8b3f-060db6f93454","Type":"ContainerStarted","Data":"81694c1b39af4e9c0e264f1508064921f2efca84862273abe51949182335a633"} Dec 10 12:39:02 crc kubenswrapper[4689]: I1210 12:39:02.346254 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Dec 10 12:39:03 crc kubenswrapper[4689]: I1210 12:39:03.078250 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b9c7f257-7dbf-4a84-8b3f-060db6f93454","Type":"ContainerStarted","Data":"02ee9ebb553111c48659582df55ca02e974c81631838e58a1d2020957cd8bce5"} Dec 10 12:39:04 crc kubenswrapper[4689]: I1210 12:39:04.096528 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b9c7f257-7dbf-4a84-8b3f-060db6f93454","Type":"ContainerStarted","Data":"1dadfba545371c6ce17b2312ea659742c8072f92bb08cc3cd9227b0036e15bf1"} Dec 10 12:39:04 crc kubenswrapper[4689]: I1210 12:39:04.098201 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Dec 10 12:39:04 crc kubenswrapper[4689]: I1210 12:39:04.120031 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.9856001559999998 podStartE2EDuration="6.120012721s" podCreationTimestamp="2025-12-10 12:38:58 +0000 UTC" firstStartedPulling="2025-12-10 12:38:58.933898602 +0000 UTC m=+1406.721979740" lastFinishedPulling="2025-12-10 12:39:03.068311167 +0000 UTC m=+1410.856392305" observedRunningTime="2025-12-10 12:39:04.116676399 +0000 UTC m=+1411.904757537" watchObservedRunningTime="2025-12-10 12:39:04.120012721 +0000 UTC m=+1411.908093859" Dec 10 12:39:09 crc kubenswrapper[4689]: I1210 12:39:09.313071 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-4kzt4"] Dec 10 12:39:09 crc kubenswrapper[4689]: I1210 12:39:09.316145 4689 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-4kzt4" Dec 10 12:39:09 crc kubenswrapper[4689]: I1210 12:39:09.335341 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-4kzt4"] Dec 10 12:39:09 crc kubenswrapper[4689]: I1210 12:39:09.365449 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3c8e26eb-edee-4bf3-9576-bbe83762d20e-utilities\") pod \"redhat-operators-4kzt4\" (UID: \"3c8e26eb-edee-4bf3-9576-bbe83762d20e\") " pod="openshift-marketplace/redhat-operators-4kzt4" Dec 10 12:39:09 crc kubenswrapper[4689]: I1210 12:39:09.365710 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3c8e26eb-edee-4bf3-9576-bbe83762d20e-catalog-content\") pod \"redhat-operators-4kzt4\" (UID: \"3c8e26eb-edee-4bf3-9576-bbe83762d20e\") " pod="openshift-marketplace/redhat-operators-4kzt4" Dec 10 12:39:09 crc kubenswrapper[4689]: I1210 12:39:09.365777 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fqgjj\" (UniqueName: \"kubernetes.io/projected/3c8e26eb-edee-4bf3-9576-bbe83762d20e-kube-api-access-fqgjj\") pod \"redhat-operators-4kzt4\" (UID: \"3c8e26eb-edee-4bf3-9576-bbe83762d20e\") " pod="openshift-marketplace/redhat-operators-4kzt4" Dec 10 12:39:09 crc kubenswrapper[4689]: I1210 12:39:09.468834 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3c8e26eb-edee-4bf3-9576-bbe83762d20e-utilities\") pod \"redhat-operators-4kzt4\" (UID: \"3c8e26eb-edee-4bf3-9576-bbe83762d20e\") " pod="openshift-marketplace/redhat-operators-4kzt4" Dec 10 12:39:09 crc kubenswrapper[4689]: I1210 12:39:09.469461 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3c8e26eb-edee-4bf3-9576-bbe83762d20e-utilities\") pod \"redhat-operators-4kzt4\" (UID: \"3c8e26eb-edee-4bf3-9576-bbe83762d20e\") " pod="openshift-marketplace/redhat-operators-4kzt4" Dec 10 12:39:09 crc kubenswrapper[4689]: I1210 12:39:09.469675 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3c8e26eb-edee-4bf3-9576-bbe83762d20e-catalog-content\") pod \"redhat-operators-4kzt4\" (UID: \"3c8e26eb-edee-4bf3-9576-bbe83762d20e\") " pod="openshift-marketplace/redhat-operators-4kzt4" Dec 10 12:39:09 crc kubenswrapper[4689]: I1210 12:39:09.469906 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fqgjj\" (UniqueName: \"kubernetes.io/projected/3c8e26eb-edee-4bf3-9576-bbe83762d20e-kube-api-access-fqgjj\") pod \"redhat-operators-4kzt4\" (UID: \"3c8e26eb-edee-4bf3-9576-bbe83762d20e\") " pod="openshift-marketplace/redhat-operators-4kzt4" Dec 10 12:39:09 crc kubenswrapper[4689]: I1210 12:39:09.470075 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3c8e26eb-edee-4bf3-9576-bbe83762d20e-catalog-content\") pod \"redhat-operators-4kzt4\" (UID: \"3c8e26eb-edee-4bf3-9576-bbe83762d20e\") " pod="openshift-marketplace/redhat-operators-4kzt4" Dec 10 12:39:09 crc kubenswrapper[4689]: I1210 12:39:09.493343 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-fqgjj\" (UniqueName: \"kubernetes.io/projected/3c8e26eb-edee-4bf3-9576-bbe83762d20e-kube-api-access-fqgjj\") pod \"redhat-operators-4kzt4\" (UID: \"3c8e26eb-edee-4bf3-9576-bbe83762d20e\") " pod="openshift-marketplace/redhat-operators-4kzt4" Dec 10 12:39:09 crc kubenswrapper[4689]: I1210 12:39:09.690134 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-4kzt4" Dec 10 12:39:10 crc kubenswrapper[4689]: I1210 12:39:10.163346 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-4kzt4"] Dec 10 12:39:11 crc kubenswrapper[4689]: I1210 12:39:11.165861 4689 generic.go:334] "Generic (PLEG): container finished" podID="3c8e26eb-edee-4bf3-9576-bbe83762d20e" containerID="72b26b05c5840c36b9b54430300c5799056374af6913ed6bc8048ef9f1c008e6" exitCode=0 Dec 10 12:39:11 crc kubenswrapper[4689]: I1210 12:39:11.165951 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4kzt4" event={"ID":"3c8e26eb-edee-4bf3-9576-bbe83762d20e","Type":"ContainerDied","Data":"72b26b05c5840c36b9b54430300c5799056374af6913ed6bc8048ef9f1c008e6"} Dec 10 12:39:11 crc kubenswrapper[4689]: I1210 12:39:11.166184 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4kzt4" event={"ID":"3c8e26eb-edee-4bf3-9576-bbe83762d20e","Type":"ContainerStarted","Data":"be1a2033ab2f955a1786e1c364f24686720471194de6eb6d3dc05d07a624df37"} Dec 10 12:39:12 crc kubenswrapper[4689]: I1210 12:39:12.176308 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4kzt4" event={"ID":"3c8e26eb-edee-4bf3-9576-bbe83762d20e","Type":"ContainerStarted","Data":"34d93ec0e093838d4c25790fe5ee45cea842ee21e578f5aeadfaf784bfa2da0f"} Dec 10 12:39:13 crc kubenswrapper[4689]: I1210 12:39:13.189999 4689 generic.go:334] "Generic (PLEG): container finished" podID="3c8e26eb-edee-4bf3-9576-bbe83762d20e" containerID="34d93ec0e093838d4c25790fe5ee45cea842ee21e578f5aeadfaf784bfa2da0f" exitCode=0 Dec 10 12:39:13 crc kubenswrapper[4689]: I1210 12:39:13.190102 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4kzt4" event={"ID":"3c8e26eb-edee-4bf3-9576-bbe83762d20e","Type":"ContainerDied","Data":"34d93ec0e093838d4c25790fe5ee45cea842ee21e578f5aeadfaf784bfa2da0f"} Dec 10 12:39:14 crc kubenswrapper[4689]: I1210 12:39:14.203808 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4kzt4" event={"ID":"3c8e26eb-edee-4bf3-9576-bbe83762d20e","Type":"ContainerStarted","Data":"113e994ca0b58bc83f111fcaa0551a50206fbd21e5e1f7600b2e71d3b4f37b82"} Dec 10 12:39:14 crc kubenswrapper[4689]: I1210 12:39:14.231583 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-4kzt4" podStartSLOduration=2.743062791 podStartE2EDuration="5.231568658s" podCreationTimestamp="2025-12-10 12:39:09 +0000 UTC" firstStartedPulling="2025-12-10 12:39:11.168015611 +0000 UTC m=+1418.956096759" lastFinishedPulling="2025-12-10 12:39:13.656521478 +0000 UTC m=+1421.444602626" observedRunningTime="2025-12-10 12:39:14.224988635 +0000 UTC m=+1422.013069773" watchObservedRunningTime="2025-12-10 12:39:14.231568658 +0000 UTC m=+1422.019649796" Dec 10 12:39:19 crc kubenswrapper[4689]: I1210 12:39:19.690298 4689 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-4kzt4" 
Dec 10 12:39:19 crc kubenswrapper[4689]: I1210 12:39:19.690898 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-4kzt4" Dec 10 12:39:20 crc kubenswrapper[4689]: I1210 12:39:20.743999 4689 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-4kzt4" podUID="3c8e26eb-edee-4bf3-9576-bbe83762d20e" containerName="registry-server" probeResult="failure" output=< Dec 10 12:39:20 crc kubenswrapper[4689]: timeout: failed to connect service ":50051" within 1s Dec 10 12:39:20 crc kubenswrapper[4689]: > Dec 10 12:39:28 crc kubenswrapper[4689]: I1210 12:39:28.482372 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Dec 10 12:39:29 crc kubenswrapper[4689]: I1210 12:39:29.766234 4689 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-4kzt4" Dec 10 12:39:29 crc kubenswrapper[4689]: I1210 12:39:29.844382 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-4kzt4" Dec 10 12:39:30 crc kubenswrapper[4689]: I1210 12:39:30.018897 4689 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-4kzt4"] Dec 10 12:39:31 crc kubenswrapper[4689]: I1210 12:39:31.433578 4689 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-4kzt4" podUID="3c8e26eb-edee-4bf3-9576-bbe83762d20e" containerName="registry-server" containerID="cri-o://113e994ca0b58bc83f111fcaa0551a50206fbd21e5e1f7600b2e71d3b4f37b82" gracePeriod=2 Dec 10 12:39:31 crc kubenswrapper[4689]: I1210 12:39:31.936263 4689 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-4kzt4" Dec 10 12:39:32 crc kubenswrapper[4689]: I1210 12:39:32.035690 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3c8e26eb-edee-4bf3-9576-bbe83762d20e-utilities\") pod \"3c8e26eb-edee-4bf3-9576-bbe83762d20e\" (UID: \"3c8e26eb-edee-4bf3-9576-bbe83762d20e\") " Dec 10 12:39:32 crc kubenswrapper[4689]: I1210 12:39:32.035739 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3c8e26eb-edee-4bf3-9576-bbe83762d20e-catalog-content\") pod \"3c8e26eb-edee-4bf3-9576-bbe83762d20e\" (UID: \"3c8e26eb-edee-4bf3-9576-bbe83762d20e\") " Dec 10 12:39:32 crc kubenswrapper[4689]: I1210 12:39:32.035813 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqgjj\" (UniqueName: \"kubernetes.io/projected/3c8e26eb-edee-4bf3-9576-bbe83762d20e-kube-api-access-fqgjj\") pod \"3c8e26eb-edee-4bf3-9576-bbe83762d20e\" (UID: \"3c8e26eb-edee-4bf3-9576-bbe83762d20e\") " Dec 10 12:39:32 crc kubenswrapper[4689]: I1210 12:39:32.036702 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3c8e26eb-edee-4bf3-9576-bbe83762d20e-utilities" (OuterVolumeSpecName: "utilities") pod "3c8e26eb-edee-4bf3-9576-bbe83762d20e" (UID: "3c8e26eb-edee-4bf3-9576-bbe83762d20e"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 10 12:39:32 crc kubenswrapper[4689]: I1210 12:39:32.043168 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3c8e26eb-edee-4bf3-9576-bbe83762d20e-kube-api-access-fqgjj" (OuterVolumeSpecName: "kube-api-access-fqgjj") pod "3c8e26eb-edee-4bf3-9576-bbe83762d20e" (UID: "3c8e26eb-edee-4bf3-9576-bbe83762d20e"). InnerVolumeSpecName "kube-api-access-fqgjj". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 12:39:32 crc kubenswrapper[4689]: I1210 12:39:32.138458 4689 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3c8e26eb-edee-4bf3-9576-bbe83762d20e-utilities\") on node \"crc\" DevicePath \"\"" Dec 10 12:39:32 crc kubenswrapper[4689]: I1210 12:39:32.138775 4689 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqgjj\" (UniqueName: \"kubernetes.io/projected/3c8e26eb-edee-4bf3-9576-bbe83762d20e-kube-api-access-fqgjj\") on node \"crc\" DevicePath \"\"" Dec 10 12:39:32 crc kubenswrapper[4689]: I1210 12:39:32.146869 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3c8e26eb-edee-4bf3-9576-bbe83762d20e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3c8e26eb-edee-4bf3-9576-bbe83762d20e" (UID: "3c8e26eb-edee-4bf3-9576-bbe83762d20e"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 10 12:39:32 crc kubenswrapper[4689]: I1210 12:39:32.240617 4689 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3c8e26eb-edee-4bf3-9576-bbe83762d20e-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 10 12:39:32 crc kubenswrapper[4689]: I1210 12:39:32.463525 4689 generic.go:334] "Generic (PLEG): container finished" podID="3c8e26eb-edee-4bf3-9576-bbe83762d20e" containerID="113e994ca0b58bc83f111fcaa0551a50206fbd21e5e1f7600b2e71d3b4f37b82" exitCode=0 Dec 10 12:39:32 crc kubenswrapper[4689]: I1210 12:39:32.463598 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4kzt4" event={"ID":"3c8e26eb-edee-4bf3-9576-bbe83762d20e","Type":"ContainerDied","Data":"113e994ca0b58bc83f111fcaa0551a50206fbd21e5e1f7600b2e71d3b4f37b82"} Dec 10 12:39:32 crc kubenswrapper[4689]: I1210 12:39:32.463675 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4kzt4" event={"ID":"3c8e26eb-edee-4bf3-9576-bbe83762d20e","Type":"ContainerDied","Data":"be1a2033ab2f955a1786e1c364f24686720471194de6eb6d3dc05d07a624df37"} Dec 10 12:39:32 crc kubenswrapper[4689]: I1210 12:39:32.463705 4689 scope.go:117] "RemoveContainer" containerID="113e994ca0b58bc83f111fcaa0551a50206fbd21e5e1f7600b2e71d3b4f37b82" Dec 10 12:39:32 crc kubenswrapper[4689]: I1210 12:39:32.465187 4689 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-4kzt4" Dec 10 12:39:32 crc kubenswrapper[4689]: I1210 12:39:32.493115 4689 scope.go:117] "RemoveContainer" containerID="34d93ec0e093838d4c25790fe5ee45cea842ee21e578f5aeadfaf784bfa2da0f" Dec 10 12:39:32 crc kubenswrapper[4689]: I1210 12:39:32.514656 4689 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-4kzt4"] Dec 10 12:39:32 crc kubenswrapper[4689]: I1210 12:39:32.524239 4689 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-4kzt4"] Dec 10 12:39:32 crc kubenswrapper[4689]: I1210 12:39:32.526075 4689 scope.go:117] "RemoveContainer" containerID="72b26b05c5840c36b9b54430300c5799056374af6913ed6bc8048ef9f1c008e6" Dec 10 12:39:32 crc kubenswrapper[4689]: I1210 12:39:32.564790 4689 scope.go:117] "RemoveContainer" containerID="113e994ca0b58bc83f111fcaa0551a50206fbd21e5e1f7600b2e71d3b4f37b82" Dec 10 12:39:32 crc kubenswrapper[4689]: E1210 12:39:32.565196 4689 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"113e994ca0b58bc83f111fcaa0551a50206fbd21e5e1f7600b2e71d3b4f37b82\": container with ID starting with 113e994ca0b58bc83f111fcaa0551a50206fbd21e5e1f7600b2e71d3b4f37b82 not found: ID does not exist" containerID="113e994ca0b58bc83f111fcaa0551a50206fbd21e5e1f7600b2e71d3b4f37b82" Dec 10 12:39:32 crc kubenswrapper[4689]: I1210 12:39:32.565245 4689 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"113e994ca0b58bc83f111fcaa0551a50206fbd21e5e1f7600b2e71d3b4f37b82"} err="failed to get container status \"113e994ca0b58bc83f111fcaa0551a50206fbd21e5e1f7600b2e71d3b4f37b82\": rpc error: code = NotFound desc = could not find container \"113e994ca0b58bc83f111fcaa0551a50206fbd21e5e1f7600b2e71d3b4f37b82\": container with ID starting with 113e994ca0b58bc83f111fcaa0551a50206fbd21e5e1f7600b2e71d3b4f37b82 not found: ID does not exist" Dec 10 12:39:32 crc kubenswrapper[4689]: I1210 12:39:32.565277 4689 scope.go:117] "RemoveContainer" containerID="34d93ec0e093838d4c25790fe5ee45cea842ee21e578f5aeadfaf784bfa2da0f" Dec 10 12:39:32 crc kubenswrapper[4689]: E1210 12:39:32.565653 4689 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"34d93ec0e093838d4c25790fe5ee45cea842ee21e578f5aeadfaf784bfa2da0f\": container with ID starting with 34d93ec0e093838d4c25790fe5ee45cea842ee21e578f5aeadfaf784bfa2da0f not found: ID does not exist" containerID="34d93ec0e093838d4c25790fe5ee45cea842ee21e578f5aeadfaf784bfa2da0f" Dec 10 12:39:32 crc kubenswrapper[4689]: I1210 12:39:32.565684 4689 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"34d93ec0e093838d4c25790fe5ee45cea842ee21e578f5aeadfaf784bfa2da0f"} err="failed to get container status \"34d93ec0e093838d4c25790fe5ee45cea842ee21e578f5aeadfaf784bfa2da0f\": rpc error: code = NotFound desc = could not find container \"34d93ec0e093838d4c25790fe5ee45cea842ee21e578f5aeadfaf784bfa2da0f\": container with ID starting with 34d93ec0e093838d4c25790fe5ee45cea842ee21e578f5aeadfaf784bfa2da0f not found: ID does not exist" Dec 10 12:39:32 crc kubenswrapper[4689]: I1210 12:39:32.565705 4689 scope.go:117] "RemoveContainer" containerID="72b26b05c5840c36b9b54430300c5799056374af6913ed6bc8048ef9f1c008e6" Dec 10 12:39:32 crc kubenswrapper[4689]: E1210 12:39:32.565917 4689 log.go:32] "ContainerStatus from runtime service failed" 
err="rpc error: code = NotFound desc = could not find container \"72b26b05c5840c36b9b54430300c5799056374af6913ed6bc8048ef9f1c008e6\": container with ID starting with 72b26b05c5840c36b9b54430300c5799056374af6913ed6bc8048ef9f1c008e6 not found: ID does not exist" containerID="72b26b05c5840c36b9b54430300c5799056374af6913ed6bc8048ef9f1c008e6" Dec 10 12:39:32 crc kubenswrapper[4689]: I1210 12:39:32.565938 4689 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"72b26b05c5840c36b9b54430300c5799056374af6913ed6bc8048ef9f1c008e6"} err="failed to get container status \"72b26b05c5840c36b9b54430300c5799056374af6913ed6bc8048ef9f1c008e6\": rpc error: code = NotFound desc = could not find container \"72b26b05c5840c36b9b54430300c5799056374af6913ed6bc8048ef9f1c008e6\": container with ID starting with 72b26b05c5840c36b9b54430300c5799056374af6913ed6bc8048ef9f1c008e6 not found: ID does not exist" Dec 10 12:39:34 crc kubenswrapper[4689]: I1210 12:39:34.508778 4689 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3c8e26eb-edee-4bf3-9576-bbe83762d20e" path="/var/lib/kubelet/pods/3c8e26eb-edee-4bf3-9576-bbe83762d20e/volumes" Dec 10 12:39:39 crc kubenswrapper[4689]: I1210 12:39:39.251815 4689 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Dec 10 12:39:40 crc kubenswrapper[4689]: I1210 12:39:40.242786 4689 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Dec 10 12:39:43 crc kubenswrapper[4689]: I1210 12:39:43.708145 4689 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-server-0" podUID="33bee83d-eb0f-4e5e-9617-f8102008436a" containerName="rabbitmq" containerID="cri-o://e6d8bcf4b421eb9c4687520f6c4d62bb997ea71c12e38954d55b5bad9c38c831" gracePeriod=604796 Dec 10 12:39:44 crc kubenswrapper[4689]: I1210 12:39:44.332121 4689 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-cell1-server-0" podUID="0b5ff1d1-6330-4077-9b2d-dbec926d9e8c" containerName="rabbitmq" containerID="cri-o://8507c9ed5d3258efd18afab788094f5fef6b1366795b00ac02cb6368d575d696" gracePeriod=604796 Dec 10 12:39:46 crc kubenswrapper[4689]: I1210 12:39:46.566251 4689 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-0" podUID="33bee83d-eb0f-4e5e-9617-f8102008436a" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.99:5671: connect: connection refused" Dec 10 12:39:46 crc kubenswrapper[4689]: I1210 12:39:46.631709 4689 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="0b5ff1d1-6330-4077-9b2d-dbec926d9e8c" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.100:5671: connect: connection refused" Dec 10 12:39:50 crc kubenswrapper[4689]: I1210 12:39:50.024132 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-tf7c9"] Dec 10 12:39:50 crc kubenswrapper[4689]: E1210 12:39:50.025325 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3c8e26eb-edee-4bf3-9576-bbe83762d20e" containerName="extract-content" Dec 10 12:39:50 crc kubenswrapper[4689]: I1210 12:39:50.025358 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="3c8e26eb-edee-4bf3-9576-bbe83762d20e" containerName="extract-content" Dec 10 12:39:50 crc kubenswrapper[4689]: E1210 12:39:50.025416 4689 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="3c8e26eb-edee-4bf3-9576-bbe83762d20e" containerName="extract-utilities" Dec 10 12:39:50 crc kubenswrapper[4689]: I1210 12:39:50.025434 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="3c8e26eb-edee-4bf3-9576-bbe83762d20e" containerName="extract-utilities" Dec 10 12:39:50 crc kubenswrapper[4689]: E1210 12:39:50.025502 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3c8e26eb-edee-4bf3-9576-bbe83762d20e" containerName="registry-server" Dec 10 12:39:50 crc kubenswrapper[4689]: I1210 12:39:50.025519 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="3c8e26eb-edee-4bf3-9576-bbe83762d20e" containerName="registry-server" Dec 10 12:39:50 crc kubenswrapper[4689]: I1210 12:39:50.026119 4689 memory_manager.go:354] "RemoveStaleState removing state" podUID="3c8e26eb-edee-4bf3-9576-bbe83762d20e" containerName="registry-server" Dec 10 12:39:50 crc kubenswrapper[4689]: I1210 12:39:50.029369 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-tf7c9" Dec 10 12:39:50 crc kubenswrapper[4689]: I1210 12:39:50.041628 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-tf7c9"] Dec 10 12:39:50 crc kubenswrapper[4689]: I1210 12:39:50.191811 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c4m8s\" (UniqueName: \"kubernetes.io/projected/0dbcff6f-67a3-4cea-ade6-ca2dee73c8a7-kube-api-access-c4m8s\") pod \"community-operators-tf7c9\" (UID: \"0dbcff6f-67a3-4cea-ade6-ca2dee73c8a7\") " pod="openshift-marketplace/community-operators-tf7c9" Dec 10 12:39:50 crc kubenswrapper[4689]: I1210 12:39:50.192140 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0dbcff6f-67a3-4cea-ade6-ca2dee73c8a7-catalog-content\") pod \"community-operators-tf7c9\" (UID: \"0dbcff6f-67a3-4cea-ade6-ca2dee73c8a7\") " pod="openshift-marketplace/community-operators-tf7c9" Dec 10 12:39:50 crc kubenswrapper[4689]: I1210 12:39:50.192692 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0dbcff6f-67a3-4cea-ade6-ca2dee73c8a7-utilities\") pod \"community-operators-tf7c9\" (UID: \"0dbcff6f-67a3-4cea-ade6-ca2dee73c8a7\") " pod="openshift-marketplace/community-operators-tf7c9" Dec 10 12:39:50 crc kubenswrapper[4689]: I1210 12:39:50.294217 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0dbcff6f-67a3-4cea-ade6-ca2dee73c8a7-utilities\") pod \"community-operators-tf7c9\" (UID: \"0dbcff6f-67a3-4cea-ade6-ca2dee73c8a7\") " pod="openshift-marketplace/community-operators-tf7c9" Dec 10 12:39:50 crc kubenswrapper[4689]: I1210 12:39:50.294315 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c4m8s\" (UniqueName: \"kubernetes.io/projected/0dbcff6f-67a3-4cea-ade6-ca2dee73c8a7-kube-api-access-c4m8s\") pod \"community-operators-tf7c9\" (UID: \"0dbcff6f-67a3-4cea-ade6-ca2dee73c8a7\") " pod="openshift-marketplace/community-operators-tf7c9" Dec 10 12:39:50 crc kubenswrapper[4689]: I1210 12:39:50.294424 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0dbcff6f-67a3-4cea-ade6-ca2dee73c8a7-catalog-content\") pod 
\"community-operators-tf7c9\" (UID: \"0dbcff6f-67a3-4cea-ade6-ca2dee73c8a7\") " pod="openshift-marketplace/community-operators-tf7c9" Dec 10 12:39:50 crc kubenswrapper[4689]: I1210 12:39:50.294787 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0dbcff6f-67a3-4cea-ade6-ca2dee73c8a7-utilities\") pod \"community-operators-tf7c9\" (UID: \"0dbcff6f-67a3-4cea-ade6-ca2dee73c8a7\") " pod="openshift-marketplace/community-operators-tf7c9" Dec 10 12:39:50 crc kubenswrapper[4689]: I1210 12:39:50.294839 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0dbcff6f-67a3-4cea-ade6-ca2dee73c8a7-catalog-content\") pod \"community-operators-tf7c9\" (UID: \"0dbcff6f-67a3-4cea-ade6-ca2dee73c8a7\") " pod="openshift-marketplace/community-operators-tf7c9" Dec 10 12:39:50 crc kubenswrapper[4689]: I1210 12:39:50.316249 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c4m8s\" (UniqueName: \"kubernetes.io/projected/0dbcff6f-67a3-4cea-ade6-ca2dee73c8a7-kube-api-access-c4m8s\") pod \"community-operators-tf7c9\" (UID: \"0dbcff6f-67a3-4cea-ade6-ca2dee73c8a7\") " pod="openshift-marketplace/community-operators-tf7c9" Dec 10 12:39:50 crc kubenswrapper[4689]: I1210 12:39:50.368375 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-tf7c9" Dec 10 12:39:50 crc kubenswrapper[4689]: I1210 12:39:50.403766 4689 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Dec 10 12:39:50 crc kubenswrapper[4689]: I1210 12:39:50.610879 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/33bee83d-eb0f-4e5e-9617-f8102008436a-erlang-cookie-secret\") pod \"33bee83d-eb0f-4e5e-9617-f8102008436a\" (UID: \"33bee83d-eb0f-4e5e-9617-f8102008436a\") " Dec 10 12:39:50 crc kubenswrapper[4689]: I1210 12:39:50.610939 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/33bee83d-eb0f-4e5e-9617-f8102008436a-server-conf\") pod \"33bee83d-eb0f-4e5e-9617-f8102008436a\" (UID: \"33bee83d-eb0f-4e5e-9617-f8102008436a\") " Dec 10 12:39:50 crc kubenswrapper[4689]: I1210 12:39:50.611031 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"33bee83d-eb0f-4e5e-9617-f8102008436a\" (UID: \"33bee83d-eb0f-4e5e-9617-f8102008436a\") " Dec 10 12:39:50 crc kubenswrapper[4689]: I1210 12:39:50.611061 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dqp6r\" (UniqueName: \"kubernetes.io/projected/33bee83d-eb0f-4e5e-9617-f8102008436a-kube-api-access-dqp6r\") pod \"33bee83d-eb0f-4e5e-9617-f8102008436a\" (UID: \"33bee83d-eb0f-4e5e-9617-f8102008436a\") " Dec 10 12:39:50 crc kubenswrapper[4689]: I1210 12:39:50.611099 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/33bee83d-eb0f-4e5e-9617-f8102008436a-rabbitmq-erlang-cookie\") pod \"33bee83d-eb0f-4e5e-9617-f8102008436a\" (UID: \"33bee83d-eb0f-4e5e-9617-f8102008436a\") " Dec 10 12:39:50 crc kubenswrapper[4689]: I1210 12:39:50.611120 4689 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/33bee83d-eb0f-4e5e-9617-f8102008436a-config-data\") pod \"33bee83d-eb0f-4e5e-9617-f8102008436a\" (UID: \"33bee83d-eb0f-4e5e-9617-f8102008436a\") " Dec 10 12:39:50 crc kubenswrapper[4689]: I1210 12:39:50.611162 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/33bee83d-eb0f-4e5e-9617-f8102008436a-pod-info\") pod \"33bee83d-eb0f-4e5e-9617-f8102008436a\" (UID: \"33bee83d-eb0f-4e5e-9617-f8102008436a\") " Dec 10 12:39:50 crc kubenswrapper[4689]: I1210 12:39:50.611253 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/33bee83d-eb0f-4e5e-9617-f8102008436a-rabbitmq-plugins\") pod \"33bee83d-eb0f-4e5e-9617-f8102008436a\" (UID: \"33bee83d-eb0f-4e5e-9617-f8102008436a\") " Dec 10 12:39:50 crc kubenswrapper[4689]: I1210 12:39:50.611294 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/33bee83d-eb0f-4e5e-9617-f8102008436a-plugins-conf\") pod \"33bee83d-eb0f-4e5e-9617-f8102008436a\" (UID: \"33bee83d-eb0f-4e5e-9617-f8102008436a\") " Dec 10 12:39:50 crc kubenswrapper[4689]: I1210 12:39:50.611323 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/33bee83d-eb0f-4e5e-9617-f8102008436a-rabbitmq-confd\") pod \"33bee83d-eb0f-4e5e-9617-f8102008436a\" (UID: \"33bee83d-eb0f-4e5e-9617-f8102008436a\") " Dec 10 12:39:50 crc kubenswrapper[4689]: I1210 12:39:50.611355 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/33bee83d-eb0f-4e5e-9617-f8102008436a-rabbitmq-tls\") pod \"33bee83d-eb0f-4e5e-9617-f8102008436a\" (UID: \"33bee83d-eb0f-4e5e-9617-f8102008436a\") " Dec 10 12:39:50 crc kubenswrapper[4689]: I1210 12:39:50.613340 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/33bee83d-eb0f-4e5e-9617-f8102008436a-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "33bee83d-eb0f-4e5e-9617-f8102008436a" (UID: "33bee83d-eb0f-4e5e-9617-f8102008436a"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 12:39:50 crc kubenswrapper[4689]: I1210 12:39:50.620153 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/33bee83d-eb0f-4e5e-9617-f8102008436a-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "33bee83d-eb0f-4e5e-9617-f8102008436a" (UID: "33bee83d-eb0f-4e5e-9617-f8102008436a"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 10 12:39:50 crc kubenswrapper[4689]: I1210 12:39:50.622397 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/33bee83d-eb0f-4e5e-9617-f8102008436a-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "33bee83d-eb0f-4e5e-9617-f8102008436a" (UID: "33bee83d-eb0f-4e5e-9617-f8102008436a"). InnerVolumeSpecName "rabbitmq-plugins". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 10 12:39:50 crc kubenswrapper[4689]: I1210 12:39:50.630069 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/33bee83d-eb0f-4e5e-9617-f8102008436a-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "33bee83d-eb0f-4e5e-9617-f8102008436a" (UID: "33bee83d-eb0f-4e5e-9617-f8102008436a"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 12:39:50 crc kubenswrapper[4689]: I1210 12:39:50.632775 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/33bee83d-eb0f-4e5e-9617-f8102008436a-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "33bee83d-eb0f-4e5e-9617-f8102008436a" (UID: "33bee83d-eb0f-4e5e-9617-f8102008436a"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 12:39:50 crc kubenswrapper[4689]: I1210 12:39:50.656816 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage11-crc" (OuterVolumeSpecName: "persistence") pod "33bee83d-eb0f-4e5e-9617-f8102008436a" (UID: "33bee83d-eb0f-4e5e-9617-f8102008436a"). InnerVolumeSpecName "local-storage11-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Dec 10 12:39:50 crc kubenswrapper[4689]: I1210 12:39:50.672896 4689 generic.go:334] "Generic (PLEG): container finished" podID="33bee83d-eb0f-4e5e-9617-f8102008436a" containerID="e6d8bcf4b421eb9c4687520f6c4d62bb997ea71c12e38954d55b5bad9c38c831" exitCode=0 Dec 10 12:39:50 crc kubenswrapper[4689]: I1210 12:39:50.672953 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"33bee83d-eb0f-4e5e-9617-f8102008436a","Type":"ContainerDied","Data":"e6d8bcf4b421eb9c4687520f6c4d62bb997ea71c12e38954d55b5bad9c38c831"} Dec 10 12:39:50 crc kubenswrapper[4689]: I1210 12:39:50.673002 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"33bee83d-eb0f-4e5e-9617-f8102008436a","Type":"ContainerDied","Data":"90c8adb886c447912708a6eb929c0ef39c9739a37adcdd927d9f9efa1b9f665b"} Dec 10 12:39:50 crc kubenswrapper[4689]: I1210 12:39:50.673022 4689 scope.go:117] "RemoveContainer" containerID="e6d8bcf4b421eb9c4687520f6c4d62bb997ea71c12e38954d55b5bad9c38c831" Dec 10 12:39:50 crc kubenswrapper[4689]: I1210 12:39:50.673157 4689 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Dec 10 12:39:50 crc kubenswrapper[4689]: I1210 12:39:50.693609 4689 generic.go:334] "Generic (PLEG): container finished" podID="0b5ff1d1-6330-4077-9b2d-dbec926d9e8c" containerID="8507c9ed5d3258efd18afab788094f5fef6b1366795b00ac02cb6368d575d696" exitCode=0 Dec 10 12:39:50 crc kubenswrapper[4689]: I1210 12:39:50.693644 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"0b5ff1d1-6330-4077-9b2d-dbec926d9e8c","Type":"ContainerDied","Data":"8507c9ed5d3258efd18afab788094f5fef6b1366795b00ac02cb6368d575d696"} Dec 10 12:39:50 crc kubenswrapper[4689]: I1210 12:39:50.696182 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/33bee83d-eb0f-4e5e-9617-f8102008436a-kube-api-access-dqp6r" (OuterVolumeSpecName: "kube-api-access-dqp6r") pod "33bee83d-eb0f-4e5e-9617-f8102008436a" (UID: "33bee83d-eb0f-4e5e-9617-f8102008436a"). InnerVolumeSpecName "kube-api-access-dqp6r". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 12:39:50 crc kubenswrapper[4689]: I1210 12:39:50.696813 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/33bee83d-eb0f-4e5e-9617-f8102008436a-config-data" (OuterVolumeSpecName: "config-data") pod "33bee83d-eb0f-4e5e-9617-f8102008436a" (UID: "33bee83d-eb0f-4e5e-9617-f8102008436a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 12:39:50 crc kubenswrapper[4689]: I1210 12:39:50.697696 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/33bee83d-eb0f-4e5e-9617-f8102008436a-pod-info" (OuterVolumeSpecName: "pod-info") pod "33bee83d-eb0f-4e5e-9617-f8102008436a" (UID: "33bee83d-eb0f-4e5e-9617-f8102008436a"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Dec 10 12:39:50 crc kubenswrapper[4689]: I1210 12:39:50.714279 4689 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/33bee83d-eb0f-4e5e-9617-f8102008436a-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Dec 10 12:39:50 crc kubenswrapper[4689]: I1210 12:39:50.714316 4689 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/33bee83d-eb0f-4e5e-9617-f8102008436a-plugins-conf\") on node \"crc\" DevicePath \"\"" Dec 10 12:39:50 crc kubenswrapper[4689]: I1210 12:39:50.714327 4689 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/33bee83d-eb0f-4e5e-9617-f8102008436a-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Dec 10 12:39:50 crc kubenswrapper[4689]: I1210 12:39:50.714338 4689 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/33bee83d-eb0f-4e5e-9617-f8102008436a-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Dec 10 12:39:50 crc kubenswrapper[4689]: I1210 12:39:50.714368 4689 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" " Dec 10 12:39:50 crc kubenswrapper[4689]: I1210 12:39:50.714380 4689 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dqp6r\" (UniqueName: \"kubernetes.io/projected/33bee83d-eb0f-4e5e-9617-f8102008436a-kube-api-access-dqp6r\") on node \"crc\" DevicePath \"\"" Dec 10 12:39:50 crc kubenswrapper[4689]: I1210 12:39:50.714396 4689 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/33bee83d-eb0f-4e5e-9617-f8102008436a-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Dec 10 12:39:50 crc kubenswrapper[4689]: I1210 12:39:50.714410 4689 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/33bee83d-eb0f-4e5e-9617-f8102008436a-config-data\") on node \"crc\" DevicePath \"\"" Dec 10 12:39:50 crc kubenswrapper[4689]: I1210 12:39:50.714423 4689 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/33bee83d-eb0f-4e5e-9617-f8102008436a-pod-info\") on node \"crc\" DevicePath \"\"" Dec 10 12:39:50 crc kubenswrapper[4689]: I1210 12:39:50.729335 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/33bee83d-eb0f-4e5e-9617-f8102008436a-server-conf" 
(OuterVolumeSpecName: "server-conf") pod "33bee83d-eb0f-4e5e-9617-f8102008436a" (UID: "33bee83d-eb0f-4e5e-9617-f8102008436a"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 12:39:50 crc kubenswrapper[4689]: I1210 12:39:50.737620 4689 scope.go:117] "RemoveContainer" containerID="40884be95475317286239419fb7c4d5aacd4a03014eda7b158d3dc2103efb907" Dec 10 12:39:50 crc kubenswrapper[4689]: I1210 12:39:50.746227 4689 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage11-crc" (UniqueName: "kubernetes.io/local-volume/local-storage11-crc") on node "crc" Dec 10 12:39:50 crc kubenswrapper[4689]: I1210 12:39:50.795905 4689 scope.go:117] "RemoveContainer" containerID="e6d8bcf4b421eb9c4687520f6c4d62bb997ea71c12e38954d55b5bad9c38c831" Dec 10 12:39:50 crc kubenswrapper[4689]: E1210 12:39:50.797434 4689 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e6d8bcf4b421eb9c4687520f6c4d62bb997ea71c12e38954d55b5bad9c38c831\": container with ID starting with e6d8bcf4b421eb9c4687520f6c4d62bb997ea71c12e38954d55b5bad9c38c831 not found: ID does not exist" containerID="e6d8bcf4b421eb9c4687520f6c4d62bb997ea71c12e38954d55b5bad9c38c831" Dec 10 12:39:50 crc kubenswrapper[4689]: I1210 12:39:50.797478 4689 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e6d8bcf4b421eb9c4687520f6c4d62bb997ea71c12e38954d55b5bad9c38c831"} err="failed to get container status \"e6d8bcf4b421eb9c4687520f6c4d62bb997ea71c12e38954d55b5bad9c38c831\": rpc error: code = NotFound desc = could not find container \"e6d8bcf4b421eb9c4687520f6c4d62bb997ea71c12e38954d55b5bad9c38c831\": container with ID starting with e6d8bcf4b421eb9c4687520f6c4d62bb997ea71c12e38954d55b5bad9c38c831 not found: ID does not exist" Dec 10 12:39:50 crc kubenswrapper[4689]: I1210 12:39:50.797507 4689 scope.go:117] "RemoveContainer" containerID="40884be95475317286239419fb7c4d5aacd4a03014eda7b158d3dc2103efb907" Dec 10 12:39:50 crc kubenswrapper[4689]: E1210 12:39:50.797759 4689 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"40884be95475317286239419fb7c4d5aacd4a03014eda7b158d3dc2103efb907\": container with ID starting with 40884be95475317286239419fb7c4d5aacd4a03014eda7b158d3dc2103efb907 not found: ID does not exist" containerID="40884be95475317286239419fb7c4d5aacd4a03014eda7b158d3dc2103efb907" Dec 10 12:39:50 crc kubenswrapper[4689]: I1210 12:39:50.797778 4689 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"40884be95475317286239419fb7c4d5aacd4a03014eda7b158d3dc2103efb907"} err="failed to get container status \"40884be95475317286239419fb7c4d5aacd4a03014eda7b158d3dc2103efb907\": rpc error: code = NotFound desc = could not find container \"40884be95475317286239419fb7c4d5aacd4a03014eda7b158d3dc2103efb907\": container with ID starting with 40884be95475317286239419fb7c4d5aacd4a03014eda7b158d3dc2103efb907 not found: ID does not exist" Dec 10 12:39:50 crc kubenswrapper[4689]: I1210 12:39:50.819992 4689 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/33bee83d-eb0f-4e5e-9617-f8102008436a-server-conf\") on node \"crc\" DevicePath \"\"" Dec 10 12:39:50 crc kubenswrapper[4689]: I1210 12:39:50.820023 4689 reconciler_common.go:293] "Volume detached for volume \"local-storage11-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" DevicePath \"\"" Dec 10 12:39:50 crc kubenswrapper[4689]: I1210 12:39:50.822672 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/33bee83d-eb0f-4e5e-9617-f8102008436a-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "33bee83d-eb0f-4e5e-9617-f8102008436a" (UID: "33bee83d-eb0f-4e5e-9617-f8102008436a"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 12:39:50 crc kubenswrapper[4689]: I1210 12:39:50.922344 4689 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/33bee83d-eb0f-4e5e-9617-f8102008436a-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Dec 10 12:39:51 crc kubenswrapper[4689]: I1210 12:39:51.006882 4689 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Dec 10 12:39:51 crc kubenswrapper[4689]: I1210 12:39:51.022265 4689 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-server-0"] Dec 10 12:39:51 crc kubenswrapper[4689]: I1210 12:39:51.040158 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Dec 10 12:39:51 crc kubenswrapper[4689]: E1210 12:39:51.040762 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="33bee83d-eb0f-4e5e-9617-f8102008436a" containerName="setup-container" Dec 10 12:39:51 crc kubenswrapper[4689]: I1210 12:39:51.040779 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="33bee83d-eb0f-4e5e-9617-f8102008436a" containerName="setup-container" Dec 10 12:39:51 crc kubenswrapper[4689]: E1210 12:39:51.040813 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="33bee83d-eb0f-4e5e-9617-f8102008436a" containerName="rabbitmq" Dec 10 12:39:51 crc kubenswrapper[4689]: I1210 12:39:51.040830 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="33bee83d-eb0f-4e5e-9617-f8102008436a" containerName="rabbitmq" Dec 10 12:39:51 crc kubenswrapper[4689]: I1210 12:39:51.041114 4689 memory_manager.go:354] "RemoveStaleState removing state" podUID="33bee83d-eb0f-4e5e-9617-f8102008436a" containerName="rabbitmq" Dec 10 12:39:51 crc kubenswrapper[4689]: I1210 12:39:51.042375 4689 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Dec 10 12:39:51 crc kubenswrapper[4689]: I1210 12:39:51.046495 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Dec 10 12:39:51 crc kubenswrapper[4689]: I1210 12:39:51.046692 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Dec 10 12:39:51 crc kubenswrapper[4689]: I1210 12:39:51.046848 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Dec 10 12:39:51 crc kubenswrapper[4689]: I1210 12:39:51.046965 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-bwmld" Dec 10 12:39:51 crc kubenswrapper[4689]: I1210 12:39:51.047137 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Dec 10 12:39:51 crc kubenswrapper[4689]: I1210 12:39:51.047301 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Dec 10 12:39:51 crc kubenswrapper[4689]: I1210 12:39:51.047442 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Dec 10 12:39:51 crc kubenswrapper[4689]: I1210 12:39:51.047622 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Dec 10 12:39:51 crc kubenswrapper[4689]: I1210 12:39:51.075378 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-tf7c9"] Dec 10 12:39:51 crc kubenswrapper[4689]: I1210 12:39:51.228085 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"rabbitmq-server-0\" (UID: \"148aec72-272a-46b6-a75f-46dc2b680101\") " pod="openstack/rabbitmq-server-0" Dec 10 12:39:51 crc kubenswrapper[4689]: I1210 12:39:51.228180 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/148aec72-272a-46b6-a75f-46dc2b680101-config-data\") pod \"rabbitmq-server-0\" (UID: \"148aec72-272a-46b6-a75f-46dc2b680101\") " pod="openstack/rabbitmq-server-0" Dec 10 12:39:51 crc kubenswrapper[4689]: I1210 12:39:51.228211 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hqv54\" (UniqueName: \"kubernetes.io/projected/148aec72-272a-46b6-a75f-46dc2b680101-kube-api-access-hqv54\") pod \"rabbitmq-server-0\" (UID: \"148aec72-272a-46b6-a75f-46dc2b680101\") " pod="openstack/rabbitmq-server-0" Dec 10 12:39:51 crc kubenswrapper[4689]: I1210 12:39:51.228247 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/148aec72-272a-46b6-a75f-46dc2b680101-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"148aec72-272a-46b6-a75f-46dc2b680101\") " pod="openstack/rabbitmq-server-0" Dec 10 12:39:51 crc kubenswrapper[4689]: I1210 12:39:51.228282 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/148aec72-272a-46b6-a75f-46dc2b680101-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"148aec72-272a-46b6-a75f-46dc2b680101\") " pod="openstack/rabbitmq-server-0" Dec 10 12:39:51 crc kubenswrapper[4689]: I1210 12:39:51.228306 4689 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/148aec72-272a-46b6-a75f-46dc2b680101-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"148aec72-272a-46b6-a75f-46dc2b680101\") " pod="openstack/rabbitmq-server-0" Dec 10 12:39:51 crc kubenswrapper[4689]: I1210 12:39:51.228348 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/148aec72-272a-46b6-a75f-46dc2b680101-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"148aec72-272a-46b6-a75f-46dc2b680101\") " pod="openstack/rabbitmq-server-0" Dec 10 12:39:51 crc kubenswrapper[4689]: I1210 12:39:51.228383 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/148aec72-272a-46b6-a75f-46dc2b680101-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"148aec72-272a-46b6-a75f-46dc2b680101\") " pod="openstack/rabbitmq-server-0" Dec 10 12:39:51 crc kubenswrapper[4689]: I1210 12:39:51.228408 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/148aec72-272a-46b6-a75f-46dc2b680101-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"148aec72-272a-46b6-a75f-46dc2b680101\") " pod="openstack/rabbitmq-server-0" Dec 10 12:39:51 crc kubenswrapper[4689]: I1210 12:39:51.228459 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/148aec72-272a-46b6-a75f-46dc2b680101-server-conf\") pod \"rabbitmq-server-0\" (UID: \"148aec72-272a-46b6-a75f-46dc2b680101\") " pod="openstack/rabbitmq-server-0" Dec 10 12:39:51 crc kubenswrapper[4689]: I1210 12:39:51.228489 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/148aec72-272a-46b6-a75f-46dc2b680101-pod-info\") pod \"rabbitmq-server-0\" (UID: \"148aec72-272a-46b6-a75f-46dc2b680101\") " pod="openstack/rabbitmq-server-0" Dec 10 12:39:51 crc kubenswrapper[4689]: I1210 12:39:51.278418 4689 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Dec 10 12:39:51 crc kubenswrapper[4689]: I1210 12:39:51.330051 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/148aec72-272a-46b6-a75f-46dc2b680101-pod-info\") pod \"rabbitmq-server-0\" (UID: \"148aec72-272a-46b6-a75f-46dc2b680101\") " pod="openstack/rabbitmq-server-0" Dec 10 12:39:51 crc kubenswrapper[4689]: I1210 12:39:51.330120 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"rabbitmq-server-0\" (UID: \"148aec72-272a-46b6-a75f-46dc2b680101\") " pod="openstack/rabbitmq-server-0" Dec 10 12:39:51 crc kubenswrapper[4689]: I1210 12:39:51.330172 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/148aec72-272a-46b6-a75f-46dc2b680101-config-data\") pod \"rabbitmq-server-0\" (UID: \"148aec72-272a-46b6-a75f-46dc2b680101\") " pod="openstack/rabbitmq-server-0" Dec 10 12:39:51 crc kubenswrapper[4689]: I1210 12:39:51.330191 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hqv54\" (UniqueName: \"kubernetes.io/projected/148aec72-272a-46b6-a75f-46dc2b680101-kube-api-access-hqv54\") pod \"rabbitmq-server-0\" (UID: \"148aec72-272a-46b6-a75f-46dc2b680101\") " pod="openstack/rabbitmq-server-0" Dec 10 12:39:51 crc kubenswrapper[4689]: I1210 12:39:51.330216 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/148aec72-272a-46b6-a75f-46dc2b680101-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"148aec72-272a-46b6-a75f-46dc2b680101\") " pod="openstack/rabbitmq-server-0" Dec 10 12:39:51 crc kubenswrapper[4689]: I1210 12:39:51.330241 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/148aec72-272a-46b6-a75f-46dc2b680101-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"148aec72-272a-46b6-a75f-46dc2b680101\") " pod="openstack/rabbitmq-server-0" Dec 10 12:39:51 crc kubenswrapper[4689]: I1210 12:39:51.330257 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/148aec72-272a-46b6-a75f-46dc2b680101-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"148aec72-272a-46b6-a75f-46dc2b680101\") " pod="openstack/rabbitmq-server-0" Dec 10 12:39:51 crc kubenswrapper[4689]: I1210 12:39:51.330288 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/148aec72-272a-46b6-a75f-46dc2b680101-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"148aec72-272a-46b6-a75f-46dc2b680101\") " pod="openstack/rabbitmq-server-0" Dec 10 12:39:51 crc kubenswrapper[4689]: I1210 12:39:51.330313 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/148aec72-272a-46b6-a75f-46dc2b680101-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"148aec72-272a-46b6-a75f-46dc2b680101\") " pod="openstack/rabbitmq-server-0" Dec 10 12:39:51 crc kubenswrapper[4689]: I1210 12:39:51.330342 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: 
\"kubernetes.io/empty-dir/148aec72-272a-46b6-a75f-46dc2b680101-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"148aec72-272a-46b6-a75f-46dc2b680101\") " pod="openstack/rabbitmq-server-0" Dec 10 12:39:51 crc kubenswrapper[4689]: I1210 12:39:51.330383 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/148aec72-272a-46b6-a75f-46dc2b680101-server-conf\") pod \"rabbitmq-server-0\" (UID: \"148aec72-272a-46b6-a75f-46dc2b680101\") " pod="openstack/rabbitmq-server-0" Dec 10 12:39:51 crc kubenswrapper[4689]: I1210 12:39:51.331209 4689 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"rabbitmq-server-0\" (UID: \"148aec72-272a-46b6-a75f-46dc2b680101\") device mount path \"/mnt/openstack/pv11\"" pod="openstack/rabbitmq-server-0" Dec 10 12:39:51 crc kubenswrapper[4689]: I1210 12:39:51.331538 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/148aec72-272a-46b6-a75f-46dc2b680101-server-conf\") pod \"rabbitmq-server-0\" (UID: \"148aec72-272a-46b6-a75f-46dc2b680101\") " pod="openstack/rabbitmq-server-0" Dec 10 12:39:51 crc kubenswrapper[4689]: I1210 12:39:51.331788 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/148aec72-272a-46b6-a75f-46dc2b680101-config-data\") pod \"rabbitmq-server-0\" (UID: \"148aec72-272a-46b6-a75f-46dc2b680101\") " pod="openstack/rabbitmq-server-0" Dec 10 12:39:51 crc kubenswrapper[4689]: I1210 12:39:51.331810 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/148aec72-272a-46b6-a75f-46dc2b680101-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"148aec72-272a-46b6-a75f-46dc2b680101\") " pod="openstack/rabbitmq-server-0" Dec 10 12:39:51 crc kubenswrapper[4689]: I1210 12:39:51.332051 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/148aec72-272a-46b6-a75f-46dc2b680101-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"148aec72-272a-46b6-a75f-46dc2b680101\") " pod="openstack/rabbitmq-server-0" Dec 10 12:39:51 crc kubenswrapper[4689]: I1210 12:39:51.332341 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/148aec72-272a-46b6-a75f-46dc2b680101-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"148aec72-272a-46b6-a75f-46dc2b680101\") " pod="openstack/rabbitmq-server-0" Dec 10 12:39:51 crc kubenswrapper[4689]: I1210 12:39:51.338959 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/148aec72-272a-46b6-a75f-46dc2b680101-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"148aec72-272a-46b6-a75f-46dc2b680101\") " pod="openstack/rabbitmq-server-0" Dec 10 12:39:51 crc kubenswrapper[4689]: I1210 12:39:51.342705 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/148aec72-272a-46b6-a75f-46dc2b680101-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"148aec72-272a-46b6-a75f-46dc2b680101\") " pod="openstack/rabbitmq-server-0" Dec 10 12:39:51 crc kubenswrapper[4689]: I1210 12:39:51.342802 4689 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/148aec72-272a-46b6-a75f-46dc2b680101-pod-info\") pod \"rabbitmq-server-0\" (UID: \"148aec72-272a-46b6-a75f-46dc2b680101\") " pod="openstack/rabbitmq-server-0" Dec 10 12:39:51 crc kubenswrapper[4689]: I1210 12:39:51.342882 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/148aec72-272a-46b6-a75f-46dc2b680101-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"148aec72-272a-46b6-a75f-46dc2b680101\") " pod="openstack/rabbitmq-server-0" Dec 10 12:39:51 crc kubenswrapper[4689]: I1210 12:39:51.350260 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hqv54\" (UniqueName: \"kubernetes.io/projected/148aec72-272a-46b6-a75f-46dc2b680101-kube-api-access-hqv54\") pod \"rabbitmq-server-0\" (UID: \"148aec72-272a-46b6-a75f-46dc2b680101\") " pod="openstack/rabbitmq-server-0" Dec 10 12:39:51 crc kubenswrapper[4689]: I1210 12:39:51.392231 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"rabbitmq-server-0\" (UID: \"148aec72-272a-46b6-a75f-46dc2b680101\") " pod="openstack/rabbitmq-server-0" Dec 10 12:39:51 crc kubenswrapper[4689]: I1210 12:39:51.431342 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/0b5ff1d1-6330-4077-9b2d-dbec926d9e8c-rabbitmq-erlang-cookie\") pod \"0b5ff1d1-6330-4077-9b2d-dbec926d9e8c\" (UID: \"0b5ff1d1-6330-4077-9b2d-dbec926d9e8c\") " Dec 10 12:39:51 crc kubenswrapper[4689]: I1210 12:39:51.431398 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/0b5ff1d1-6330-4077-9b2d-dbec926d9e8c-plugins-conf\") pod \"0b5ff1d1-6330-4077-9b2d-dbec926d9e8c\" (UID: \"0b5ff1d1-6330-4077-9b2d-dbec926d9e8c\") " Dec 10 12:39:51 crc kubenswrapper[4689]: I1210 12:39:51.431472 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/0b5ff1d1-6330-4077-9b2d-dbec926d9e8c-rabbitmq-tls\") pod \"0b5ff1d1-6330-4077-9b2d-dbec926d9e8c\" (UID: \"0b5ff1d1-6330-4077-9b2d-dbec926d9e8c\") " Dec 10 12:39:51 crc kubenswrapper[4689]: I1210 12:39:51.431533 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/0b5ff1d1-6330-4077-9b2d-dbec926d9e8c-erlang-cookie-secret\") pod \"0b5ff1d1-6330-4077-9b2d-dbec926d9e8c\" (UID: \"0b5ff1d1-6330-4077-9b2d-dbec926d9e8c\") " Dec 10 12:39:51 crc kubenswrapper[4689]: I1210 12:39:51.431592 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7hq78\" (UniqueName: \"kubernetes.io/projected/0b5ff1d1-6330-4077-9b2d-dbec926d9e8c-kube-api-access-7hq78\") pod \"0b5ff1d1-6330-4077-9b2d-dbec926d9e8c\" (UID: \"0b5ff1d1-6330-4077-9b2d-dbec926d9e8c\") " Dec 10 12:39:51 crc kubenswrapper[4689]: I1210 12:39:51.431652 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/0b5ff1d1-6330-4077-9b2d-dbec926d9e8c-rabbitmq-plugins\") pod \"0b5ff1d1-6330-4077-9b2d-dbec926d9e8c\" (UID: \"0b5ff1d1-6330-4077-9b2d-dbec926d9e8c\") " Dec 10 12:39:51 crc kubenswrapper[4689]: I1210 12:39:51.431682 4689 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"0b5ff1d1-6330-4077-9b2d-dbec926d9e8c\" (UID: \"0b5ff1d1-6330-4077-9b2d-dbec926d9e8c\") " Dec 10 12:39:51 crc kubenswrapper[4689]: I1210 12:39:51.431701 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/0b5ff1d1-6330-4077-9b2d-dbec926d9e8c-server-conf\") pod \"0b5ff1d1-6330-4077-9b2d-dbec926d9e8c\" (UID: \"0b5ff1d1-6330-4077-9b2d-dbec926d9e8c\") " Dec 10 12:39:51 crc kubenswrapper[4689]: I1210 12:39:51.431746 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/0b5ff1d1-6330-4077-9b2d-dbec926d9e8c-config-data\") pod \"0b5ff1d1-6330-4077-9b2d-dbec926d9e8c\" (UID: \"0b5ff1d1-6330-4077-9b2d-dbec926d9e8c\") " Dec 10 12:39:51 crc kubenswrapper[4689]: I1210 12:39:51.431773 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/0b5ff1d1-6330-4077-9b2d-dbec926d9e8c-rabbitmq-confd\") pod \"0b5ff1d1-6330-4077-9b2d-dbec926d9e8c\" (UID: \"0b5ff1d1-6330-4077-9b2d-dbec926d9e8c\") " Dec 10 12:39:51 crc kubenswrapper[4689]: I1210 12:39:51.431793 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/0b5ff1d1-6330-4077-9b2d-dbec926d9e8c-pod-info\") pod \"0b5ff1d1-6330-4077-9b2d-dbec926d9e8c\" (UID: \"0b5ff1d1-6330-4077-9b2d-dbec926d9e8c\") " Dec 10 12:39:51 crc kubenswrapper[4689]: I1210 12:39:51.431900 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0b5ff1d1-6330-4077-9b2d-dbec926d9e8c-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "0b5ff1d1-6330-4077-9b2d-dbec926d9e8c" (UID: "0b5ff1d1-6330-4077-9b2d-dbec926d9e8c"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 10 12:39:51 crc kubenswrapper[4689]: I1210 12:39:51.432269 4689 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/0b5ff1d1-6330-4077-9b2d-dbec926d9e8c-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Dec 10 12:39:51 crc kubenswrapper[4689]: I1210 12:39:51.432352 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0b5ff1d1-6330-4077-9b2d-dbec926d9e8c-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "0b5ff1d1-6330-4077-9b2d-dbec926d9e8c" (UID: "0b5ff1d1-6330-4077-9b2d-dbec926d9e8c"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 10 12:39:51 crc kubenswrapper[4689]: I1210 12:39:51.433579 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b5ff1d1-6330-4077-9b2d-dbec926d9e8c-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "0b5ff1d1-6330-4077-9b2d-dbec926d9e8c" (UID: "0b5ff1d1-6330-4077-9b2d-dbec926d9e8c"). InnerVolumeSpecName "plugins-conf". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 12:39:51 crc kubenswrapper[4689]: I1210 12:39:51.436023 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b5ff1d1-6330-4077-9b2d-dbec926d9e8c-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "0b5ff1d1-6330-4077-9b2d-dbec926d9e8c" (UID: "0b5ff1d1-6330-4077-9b2d-dbec926d9e8c"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 12:39:51 crc kubenswrapper[4689]: I1210 12:39:51.436651 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage10-crc" (OuterVolumeSpecName: "persistence") pod "0b5ff1d1-6330-4077-9b2d-dbec926d9e8c" (UID: "0b5ff1d1-6330-4077-9b2d-dbec926d9e8c"). InnerVolumeSpecName "local-storage10-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Dec 10 12:39:51 crc kubenswrapper[4689]: I1210 12:39:51.439054 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b5ff1d1-6330-4077-9b2d-dbec926d9e8c-kube-api-access-7hq78" (OuterVolumeSpecName: "kube-api-access-7hq78") pod "0b5ff1d1-6330-4077-9b2d-dbec926d9e8c" (UID: "0b5ff1d1-6330-4077-9b2d-dbec926d9e8c"). InnerVolumeSpecName "kube-api-access-7hq78". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 12:39:51 crc kubenswrapper[4689]: I1210 12:39:51.439095 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b5ff1d1-6330-4077-9b2d-dbec926d9e8c-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "0b5ff1d1-6330-4077-9b2d-dbec926d9e8c" (UID: "0b5ff1d1-6330-4077-9b2d-dbec926d9e8c"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 12:39:51 crc kubenswrapper[4689]: I1210 12:39:51.439650 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/0b5ff1d1-6330-4077-9b2d-dbec926d9e8c-pod-info" (OuterVolumeSpecName: "pod-info") pod "0b5ff1d1-6330-4077-9b2d-dbec926d9e8c" (UID: "0b5ff1d1-6330-4077-9b2d-dbec926d9e8c"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Dec 10 12:39:51 crc kubenswrapper[4689]: I1210 12:39:51.479274 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b5ff1d1-6330-4077-9b2d-dbec926d9e8c-config-data" (OuterVolumeSpecName: "config-data") pod "0b5ff1d1-6330-4077-9b2d-dbec926d9e8c" (UID: "0b5ff1d1-6330-4077-9b2d-dbec926d9e8c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 12:39:51 crc kubenswrapper[4689]: I1210 12:39:51.489144 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b5ff1d1-6330-4077-9b2d-dbec926d9e8c-server-conf" (OuterVolumeSpecName: "server-conf") pod "0b5ff1d1-6330-4077-9b2d-dbec926d9e8c" (UID: "0b5ff1d1-6330-4077-9b2d-dbec926d9e8c"). InnerVolumeSpecName "server-conf". 
PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 10 12:39:51 crc kubenswrapper[4689]: I1210 12:39:51.534082 4689 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/0b5ff1d1-6330-4077-9b2d-dbec926d9e8c-erlang-cookie-secret\") on node \"crc\" DevicePath \"\""
Dec 10 12:39:51 crc kubenswrapper[4689]: I1210 12:39:51.534117 4689 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7hq78\" (UniqueName: \"kubernetes.io/projected/0b5ff1d1-6330-4077-9b2d-dbec926d9e8c-kube-api-access-7hq78\") on node \"crc\" DevicePath \"\""
Dec 10 12:39:51 crc kubenswrapper[4689]: I1210 12:39:51.534131 4689 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/0b5ff1d1-6330-4077-9b2d-dbec926d9e8c-rabbitmq-plugins\") on node \"crc\" DevicePath \"\""
Dec 10 12:39:51 crc kubenswrapper[4689]: I1210 12:39:51.534167 4689 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" "
Dec 10 12:39:51 crc kubenswrapper[4689]: I1210 12:39:51.534181 4689 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/0b5ff1d1-6330-4077-9b2d-dbec926d9e8c-server-conf\") on node \"crc\" DevicePath \"\""
Dec 10 12:39:51 crc kubenswrapper[4689]: I1210 12:39:51.534192 4689 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/0b5ff1d1-6330-4077-9b2d-dbec926d9e8c-config-data\") on node \"crc\" DevicePath \"\""
Dec 10 12:39:51 crc kubenswrapper[4689]: I1210 12:39:51.534202 4689 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/0b5ff1d1-6330-4077-9b2d-dbec926d9e8c-pod-info\") on node \"crc\" DevicePath \"\""
Dec 10 12:39:51 crc kubenswrapper[4689]: I1210 12:39:51.534213 4689 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/0b5ff1d1-6330-4077-9b2d-dbec926d9e8c-plugins-conf\") on node \"crc\" DevicePath \"\""
Dec 10 12:39:51 crc kubenswrapper[4689]: I1210 12:39:51.534223 4689 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/0b5ff1d1-6330-4077-9b2d-dbec926d9e8c-rabbitmq-tls\") on node \"crc\" DevicePath \"\""
Dec 10 12:39:51 crc kubenswrapper[4689]: I1210 12:39:51.545301 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b5ff1d1-6330-4077-9b2d-dbec926d9e8c-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "0b5ff1d1-6330-4077-9b2d-dbec926d9e8c" (UID: "0b5ff1d1-6330-4077-9b2d-dbec926d9e8c"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 10 12:39:51 crc kubenswrapper[4689]: I1210 12:39:51.554946 4689 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage10-crc" (UniqueName: "kubernetes.io/local-volume/local-storage10-crc") on node "crc"
Dec 10 12:39:51 crc kubenswrapper[4689]: I1210 12:39:51.590462 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0"
Dec 10 12:39:51 crc kubenswrapper[4689]: I1210 12:39:51.635838 4689 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/0b5ff1d1-6330-4077-9b2d-dbec926d9e8c-rabbitmq-confd\") on node \"crc\" DevicePath \"\""
Dec 10 12:39:51 crc kubenswrapper[4689]: I1210 12:39:51.635870 4689 reconciler_common.go:293] "Volume detached for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" DevicePath \"\""
Dec 10 12:39:51 crc kubenswrapper[4689]: I1210 12:39:51.718946 4689 generic.go:334] "Generic (PLEG): container finished" podID="0dbcff6f-67a3-4cea-ade6-ca2dee73c8a7" containerID="c21727641fd1939388f5d9881999152d430b56f45ce7d2acb5b82dd0073d6f59" exitCode=0
Dec 10 12:39:51 crc kubenswrapper[4689]: I1210 12:39:51.719263 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tf7c9" event={"ID":"0dbcff6f-67a3-4cea-ade6-ca2dee73c8a7","Type":"ContainerDied","Data":"c21727641fd1939388f5d9881999152d430b56f45ce7d2acb5b82dd0073d6f59"}
Dec 10 12:39:51 crc kubenswrapper[4689]: I1210 12:39:51.719290 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tf7c9" event={"ID":"0dbcff6f-67a3-4cea-ade6-ca2dee73c8a7","Type":"ContainerStarted","Data":"1c3150e40e59132daa571e36da74fb3d0adb487a34515fc3e81884e0875d31b0"}
Dec 10 12:39:51 crc kubenswrapper[4689]: I1210 12:39:51.728771 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"0b5ff1d1-6330-4077-9b2d-dbec926d9e8c","Type":"ContainerDied","Data":"7a94aba0a4e241ca026b2b9d965052c8f875b3827a7dae6a6ae527e6c97cba34"}
Dec 10 12:39:51 crc kubenswrapper[4689]: I1210 12:39:51.728819 4689 scope.go:117] "RemoveContainer" containerID="8507c9ed5d3258efd18afab788094f5fef6b1366795b00ac02cb6368d575d696"
Dec 10 12:39:51 crc kubenswrapper[4689]: I1210 12:39:51.728841 4689 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0"
Dec 10 12:39:51 crc kubenswrapper[4689]: I1210 12:39:51.782562 4689 scope.go:117] "RemoveContainer" containerID="1a805d013077dbeecc8200f54ac11b45f3ce841f3bdb61918011111fe65869ee"
Dec 10 12:39:51 crc kubenswrapper[4689]: I1210 12:39:51.823038 4689 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Dec 10 12:39:51 crc kubenswrapper[4689]: I1210 12:39:51.846060 4689 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Dec 10 12:39:51 crc kubenswrapper[4689]: I1210 12:39:51.856094 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Dec 10 12:39:51 crc kubenswrapper[4689]: E1210 12:39:51.856571 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0b5ff1d1-6330-4077-9b2d-dbec926d9e8c" containerName="setup-container"
Dec 10 12:39:51 crc kubenswrapper[4689]: I1210 12:39:51.856593 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="0b5ff1d1-6330-4077-9b2d-dbec926d9e8c" containerName="setup-container"
Dec 10 12:39:51 crc kubenswrapper[4689]: E1210 12:39:51.856624 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0b5ff1d1-6330-4077-9b2d-dbec926d9e8c" containerName="rabbitmq"
Dec 10 12:39:51 crc kubenswrapper[4689]: I1210 12:39:51.856631 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="0b5ff1d1-6330-4077-9b2d-dbec926d9e8c" containerName="rabbitmq"
Dec 10 12:39:51 crc kubenswrapper[4689]: I1210 12:39:51.856845 4689 memory_manager.go:354] "RemoveStaleState removing state" podUID="0b5ff1d1-6330-4077-9b2d-dbec926d9e8c" containerName="rabbitmq"
Dec 10 12:39:51 crc kubenswrapper[4689]: I1210 12:39:51.858007 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0"
Dec 10 12:39:51 crc kubenswrapper[4689]: I1210 12:39:51.877730 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user"
Dec 10 12:39:51 crc kubenswrapper[4689]: I1210 12:39:51.877771 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie"
Dec 10 12:39:51 crc kubenswrapper[4689]: I1210 12:39:51.877872 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf"
Dec 10 12:39:51 crc kubenswrapper[4689]: I1210 12:39:51.877951 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data"
Dec 10 12:39:51 crc kubenswrapper[4689]: I1210 12:39:51.878100 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf"
Dec 10 12:39:51 crc kubenswrapper[4689]: I1210 12:39:51.878316 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc"
Dec 10 12:39:51 crc kubenswrapper[4689]: I1210 12:39:51.878443 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-5ps56"
Dec 10 12:39:51 crc kubenswrapper[4689]: I1210 12:39:51.884016 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Dec 10 12:39:51 crc kubenswrapper[4689]: I1210 12:39:51.943876 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"400622a6-9529-4712-85a9-03f48e2b1819\") " pod="openstack/rabbitmq-cell1-server-0"
Dec 10 12:39:51 crc kubenswrapper[4689]: I1210 12:39:51.944065 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/400622a6-9529-4712-85a9-03f48e2b1819-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"400622a6-9529-4712-85a9-03f48e2b1819\") " pod="openstack/rabbitmq-cell1-server-0"
Dec 10 12:39:51 crc kubenswrapper[4689]: I1210 12:39:51.944099 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/400622a6-9529-4712-85a9-03f48e2b1819-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"400622a6-9529-4712-85a9-03f48e2b1819\") " pod="openstack/rabbitmq-cell1-server-0"
Dec 10 12:39:51 crc kubenswrapper[4689]: I1210 12:39:51.944153 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/400622a6-9529-4712-85a9-03f48e2b1819-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"400622a6-9529-4712-85a9-03f48e2b1819\") " pod="openstack/rabbitmq-cell1-server-0"
Dec 10 12:39:51 crc kubenswrapper[4689]: I1210 12:39:51.944185 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-frqct\" (UniqueName: \"kubernetes.io/projected/400622a6-9529-4712-85a9-03f48e2b1819-kube-api-access-frqct\") pod \"rabbitmq-cell1-server-0\" (UID: \"400622a6-9529-4712-85a9-03f48e2b1819\") " pod="openstack/rabbitmq-cell1-server-0"
Dec 10 12:39:51 crc kubenswrapper[4689]: I1210 12:39:51.944345 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/400622a6-9529-4712-85a9-03f48e2b1819-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"400622a6-9529-4712-85a9-03f48e2b1819\") " pod="openstack/rabbitmq-cell1-server-0"
Dec 10 12:39:51 crc kubenswrapper[4689]: I1210 12:39:51.944415 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/400622a6-9529-4712-85a9-03f48e2b1819-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"400622a6-9529-4712-85a9-03f48e2b1819\") " pod="openstack/rabbitmq-cell1-server-0"
Dec 10 12:39:51 crc kubenswrapper[4689]: I1210 12:39:51.944432 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/400622a6-9529-4712-85a9-03f48e2b1819-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"400622a6-9529-4712-85a9-03f48e2b1819\") " pod="openstack/rabbitmq-cell1-server-0"
Dec 10 12:39:51 crc kubenswrapper[4689]: I1210 12:39:51.944590 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/400622a6-9529-4712-85a9-03f48e2b1819-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"400622a6-9529-4712-85a9-03f48e2b1819\") " pod="openstack/rabbitmq-cell1-server-0"
Dec 10 12:39:51 crc kubenswrapper[4689]: I1210 12:39:51.944675 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/400622a6-9529-4712-85a9-03f48e2b1819-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"400622a6-9529-4712-85a9-03f48e2b1819\") " pod="openstack/rabbitmq-cell1-server-0"
Dec 10 12:39:51 crc kubenswrapper[4689]: I1210 12:39:51.944734 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/400622a6-9529-4712-85a9-03f48e2b1819-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"400622a6-9529-4712-85a9-03f48e2b1819\") " pod="openstack/rabbitmq-cell1-server-0"
Dec 10 12:39:52 crc kubenswrapper[4689]: I1210 12:39:52.049398 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/400622a6-9529-4712-85a9-03f48e2b1819-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"400622a6-9529-4712-85a9-03f48e2b1819\") " pod="openstack/rabbitmq-cell1-server-0"
Dec 10 12:39:52 crc kubenswrapper[4689]: I1210 12:39:52.049456 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/400622a6-9529-4712-85a9-03f48e2b1819-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"400622a6-9529-4712-85a9-03f48e2b1819\") " pod="openstack/rabbitmq-cell1-server-0"
Dec 10 12:39:52 crc kubenswrapper[4689]: I1210 12:39:52.049480 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/400622a6-9529-4712-85a9-03f48e2b1819-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"400622a6-9529-4712-85a9-03f48e2b1819\") " pod="openstack/rabbitmq-cell1-server-0"
Dec 10 12:39:52 crc kubenswrapper[4689]: I1210 12:39:52.049526 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"400622a6-9529-4712-85a9-03f48e2b1819\") " pod="openstack/rabbitmq-cell1-server-0"
Dec 10 12:39:52 crc kubenswrapper[4689]: I1210 12:39:52.049567 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/400622a6-9529-4712-85a9-03f48e2b1819-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"400622a6-9529-4712-85a9-03f48e2b1819\") " pod="openstack/rabbitmq-cell1-server-0"
Dec 10 12:39:52 crc kubenswrapper[4689]: I1210 12:39:52.049584 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/400622a6-9529-4712-85a9-03f48e2b1819-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"400622a6-9529-4712-85a9-03f48e2b1819\") " pod="openstack/rabbitmq-cell1-server-0"
Dec 10 12:39:52 crc kubenswrapper[4689]: I1210 12:39:52.049600 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/400622a6-9529-4712-85a9-03f48e2b1819-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"400622a6-9529-4712-85a9-03f48e2b1819\") " pod="openstack/rabbitmq-cell1-server-0"
Dec 10 12:39:52 crc kubenswrapper[4689]: I1210 12:39:52.049620 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-frqct\" (UniqueName: \"kubernetes.io/projected/400622a6-9529-4712-85a9-03f48e2b1819-kube-api-access-frqct\") pod \"rabbitmq-cell1-server-0\" (UID: \"400622a6-9529-4712-85a9-03f48e2b1819\") " pod="openstack/rabbitmq-cell1-server-0"
Dec 10 12:39:52 crc kubenswrapper[4689]: I1210 12:39:52.049664 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/400622a6-9529-4712-85a9-03f48e2b1819-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"400622a6-9529-4712-85a9-03f48e2b1819\") " pod="openstack/rabbitmq-cell1-server-0"
Dec 10 12:39:52 crc kubenswrapper[4689]: I1210 12:39:52.049689 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/400622a6-9529-4712-85a9-03f48e2b1819-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"400622a6-9529-4712-85a9-03f48e2b1819\") " pod="openstack/rabbitmq-cell1-server-0"
Dec 10 12:39:52 crc kubenswrapper[4689]: I1210 12:39:52.049708 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/400622a6-9529-4712-85a9-03f48e2b1819-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"400622a6-9529-4712-85a9-03f48e2b1819\") " pod="openstack/rabbitmq-cell1-server-0"
Dec 10 12:39:52 crc kubenswrapper[4689]: I1210 12:39:52.050256 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/400622a6-9529-4712-85a9-03f48e2b1819-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"400622a6-9529-4712-85a9-03f48e2b1819\") " pod="openstack/rabbitmq-cell1-server-0"
Dec 10 12:39:52 crc kubenswrapper[4689]: I1210 12:39:52.051471 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/400622a6-9529-4712-85a9-03f48e2b1819-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"400622a6-9529-4712-85a9-03f48e2b1819\") " pod="openstack/rabbitmq-cell1-server-0"
Dec 10 12:39:52 crc kubenswrapper[4689]: I1210 12:39:52.051887 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/400622a6-9529-4712-85a9-03f48e2b1819-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"400622a6-9529-4712-85a9-03f48e2b1819\") " pod="openstack/rabbitmq-cell1-server-0"
Dec 10 12:39:52 crc kubenswrapper[4689]: I1210 12:39:52.052735 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/400622a6-9529-4712-85a9-03f48e2b1819-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"400622a6-9529-4712-85a9-03f48e2b1819\") " pod="openstack/rabbitmq-cell1-server-0"
Dec 10 12:39:52 crc kubenswrapper[4689]: I1210 12:39:52.053102 4689 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"400622a6-9529-4712-85a9-03f48e2b1819\") device mount path \"/mnt/openstack/pv10\"" pod="openstack/rabbitmq-cell1-server-0"
Dec 10 12:39:52 crc kubenswrapper[4689]: I1210 12:39:52.053554 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/400622a6-9529-4712-85a9-03f48e2b1819-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"400622a6-9529-4712-85a9-03f48e2b1819\") " pod="openstack/rabbitmq-cell1-server-0"
Dec 10 12:39:52 crc kubenswrapper[4689]: I1210 12:39:52.059037 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/400622a6-9529-4712-85a9-03f48e2b1819-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"400622a6-9529-4712-85a9-03f48e2b1819\") " pod="openstack/rabbitmq-cell1-server-0"
Dec 10 12:39:52 crc kubenswrapper[4689]: I1210 12:39:52.059209 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/400622a6-9529-4712-85a9-03f48e2b1819-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"400622a6-9529-4712-85a9-03f48e2b1819\") " pod="openstack/rabbitmq-cell1-server-0"
Dec 10 12:39:52 crc kubenswrapper[4689]: I1210 12:39:52.059247 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/400622a6-9529-4712-85a9-03f48e2b1819-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"400622a6-9529-4712-85a9-03f48e2b1819\") " pod="openstack/rabbitmq-cell1-server-0"
Dec 10 12:39:52 crc kubenswrapper[4689]: I1210 12:39:52.059744 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/400622a6-9529-4712-85a9-03f48e2b1819-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"400622a6-9529-4712-85a9-03f48e2b1819\") " pod="openstack/rabbitmq-cell1-server-0"
Dec 10 12:39:52 crc kubenswrapper[4689]: I1210 12:39:52.072083 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-frqct\" (UniqueName: \"kubernetes.io/projected/400622a6-9529-4712-85a9-03f48e2b1819-kube-api-access-frqct\") pod \"rabbitmq-cell1-server-0\" (UID: \"400622a6-9529-4712-85a9-03f48e2b1819\") " pod="openstack/rabbitmq-cell1-server-0"
Dec 10 12:39:52 crc kubenswrapper[4689]: I1210 12:39:52.090347 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"]
Dec 10 12:39:52 crc kubenswrapper[4689]: I1210 12:39:52.112205 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"400622a6-9529-4712-85a9-03f48e2b1819\") " pod="openstack/rabbitmq-cell1-server-0"
Dec 10 12:39:52 crc kubenswrapper[4689]: I1210 12:39:52.181741 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0"
Dec 10 12:39:52 crc kubenswrapper[4689]: I1210 12:39:52.511684 4689 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b5ff1d1-6330-4077-9b2d-dbec926d9e8c" path="/var/lib/kubelet/pods/0b5ff1d1-6330-4077-9b2d-dbec926d9e8c/volumes"
Dec 10 12:39:52 crc kubenswrapper[4689]: I1210 12:39:52.513041 4689 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="33bee83d-eb0f-4e5e-9617-f8102008436a" path="/var/lib/kubelet/pods/33bee83d-eb0f-4e5e-9617-f8102008436a/volumes"
Dec 10 12:39:52 crc kubenswrapper[4689]: I1210 12:39:52.619717 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Dec 10 12:39:52 crc kubenswrapper[4689]: W1210 12:39:52.623211 4689 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod400622a6_9529_4712_85a9_03f48e2b1819.slice/crio-52342b7fe4e7d2655c7252529d29e7a0c679fb2025c5c4bf916ff12defa9a66f WatchSource:0}: Error finding container 52342b7fe4e7d2655c7252529d29e7a0c679fb2025c5c4bf916ff12defa9a66f: Status 404 returned error can't find the container with id 52342b7fe4e7d2655c7252529d29e7a0c679fb2025c5c4bf916ff12defa9a66f
Dec 10 12:39:52 crc kubenswrapper[4689]: I1210 12:39:52.740328 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"148aec72-272a-46b6-a75f-46dc2b680101","Type":"ContainerStarted","Data":"7b8e153f8e8b480bdd7da52764e2f9b45ce1af7a6cd28705b6efedc4a6b3cb73"}
Dec 10 12:39:52 crc kubenswrapper[4689]: I1210 12:39:52.741797 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"400622a6-9529-4712-85a9-03f48e2b1819","Type":"ContainerStarted","Data":"52342b7fe4e7d2655c7252529d29e7a0c679fb2025c5c4bf916ff12defa9a66f"}
Dec 10 12:39:52 crc kubenswrapper[4689]: I1210 12:39:52.744230 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tf7c9" event={"ID":"0dbcff6f-67a3-4cea-ade6-ca2dee73c8a7","Type":"ContainerStarted","Data":"3243dbf64fc6ff5ab03d3e23ffbacb397db5e294e7db6ccc06450a2584330d48"}
Dec 10 12:39:53 crc kubenswrapper[4689]: I1210 12:39:53.765379 4689 generic.go:334] "Generic (PLEG): container finished" podID="0dbcff6f-67a3-4cea-ade6-ca2dee73c8a7" containerID="3243dbf64fc6ff5ab03d3e23ffbacb397db5e294e7db6ccc06450a2584330d48" exitCode=0
Dec 10 12:39:53 crc kubenswrapper[4689]: I1210 12:39:53.765443 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tf7c9" event={"ID":"0dbcff6f-67a3-4cea-ade6-ca2dee73c8a7","Type":"ContainerDied","Data":"3243dbf64fc6ff5ab03d3e23ffbacb397db5e294e7db6ccc06450a2584330d48"}
Dec 10 12:39:54 crc kubenswrapper[4689]: I1210 12:39:54.805026 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tf7c9" event={"ID":"0dbcff6f-67a3-4cea-ade6-ca2dee73c8a7","Type":"ContainerStarted","Data":"e8709804594927535a168cf81e86f7dc3b1daa4bdccbaeeab3da90635833f2ba"}
Dec 10 12:39:54 crc kubenswrapper[4689]: I1210 12:39:54.817363 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"148aec72-272a-46b6-a75f-46dc2b680101","Type":"ContainerStarted","Data":"530480b8827cbc48bb02eae1260986bfa5c794f1b5e9ed6d4e3904a7c27116df"}
Dec 10 12:39:54 crc kubenswrapper[4689]: I1210 12:39:54.820666 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"400622a6-9529-4712-85a9-03f48e2b1819","Type":"ContainerStarted","Data":"def4240376ed526166a8fbbc2b4bcd2324b80830fddea1ca44d521e40646d83e"}
Dec 10 12:39:54 crc kubenswrapper[4689]: I1210 12:39:54.839417 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-tf7c9" podStartSLOduration=3.19759559 podStartE2EDuration="5.83938597s" podCreationTimestamp="2025-12-10 12:39:49 +0000 UTC" firstStartedPulling="2025-12-10 12:39:51.721162301 +0000 UTC m=+1459.509243449" lastFinishedPulling="2025-12-10 12:39:54.362952671 +0000 UTC m=+1462.151033829" observedRunningTime="2025-12-10 12:39:54.828510162 +0000 UTC m=+1462.616591310" watchObservedRunningTime="2025-12-10 12:39:54.83938597 +0000 UTC m=+1462.627467148"
Dec 10 12:40:00 crc kubenswrapper[4689]: I1210 12:40:00.371630 4689 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-tf7c9"
Dec 10 12:40:00 crc kubenswrapper[4689]: I1210 12:40:00.372571 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-tf7c9"
Dec 10 12:40:00 crc kubenswrapper[4689]: I1210 12:40:00.459966 4689 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-tf7c9"
Dec 10 12:40:00 crc kubenswrapper[4689]: I1210 12:40:00.982316 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-tf7c9"
Dec 10 12:40:01 crc kubenswrapper[4689]: I1210 12:40:01.058917 4689 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-tf7c9"]
Dec 10 12:40:02 crc kubenswrapper[4689]: I1210 12:40:02.912781 4689 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-tf7c9" podUID="0dbcff6f-67a3-4cea-ade6-ca2dee73c8a7" containerName="registry-server" containerID="cri-o://e8709804594927535a168cf81e86f7dc3b1daa4bdccbaeeab3da90635833f2ba" gracePeriod=2
Dec 10 12:40:03 crc kubenswrapper[4689]: I1210 12:40:03.929198 4689 generic.go:334] "Generic (PLEG): container finished" podID="0dbcff6f-67a3-4cea-ade6-ca2dee73c8a7" containerID="e8709804594927535a168cf81e86f7dc3b1daa4bdccbaeeab3da90635833f2ba" exitCode=0
Dec 10 12:40:03 crc kubenswrapper[4689]: I1210 12:40:03.929292 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tf7c9" event={"ID":"0dbcff6f-67a3-4cea-ade6-ca2dee73c8a7","Type":"ContainerDied","Data":"e8709804594927535a168cf81e86f7dc3b1daa4bdccbaeeab3da90635833f2ba"}
Dec 10 12:40:04 crc kubenswrapper[4689]: I1210 12:40:04.641171 4689 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-tf7c9"
Dec 10 12:40:04 crc kubenswrapper[4689]: I1210 12:40:04.811518 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c4m8s\" (UniqueName: \"kubernetes.io/projected/0dbcff6f-67a3-4cea-ade6-ca2dee73c8a7-kube-api-access-c4m8s\") pod \"0dbcff6f-67a3-4cea-ade6-ca2dee73c8a7\" (UID: \"0dbcff6f-67a3-4cea-ade6-ca2dee73c8a7\") "
Dec 10 12:40:04 crc kubenswrapper[4689]: I1210 12:40:04.811604 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0dbcff6f-67a3-4cea-ade6-ca2dee73c8a7-utilities\") pod \"0dbcff6f-67a3-4cea-ade6-ca2dee73c8a7\" (UID: \"0dbcff6f-67a3-4cea-ade6-ca2dee73c8a7\") "
Dec 10 12:40:04 crc kubenswrapper[4689]: I1210 12:40:04.811851 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0dbcff6f-67a3-4cea-ade6-ca2dee73c8a7-catalog-content\") pod \"0dbcff6f-67a3-4cea-ade6-ca2dee73c8a7\" (UID: \"0dbcff6f-67a3-4cea-ade6-ca2dee73c8a7\") "
Dec 10 12:40:04 crc kubenswrapper[4689]: I1210 12:40:04.813103 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0dbcff6f-67a3-4cea-ade6-ca2dee73c8a7-utilities" (OuterVolumeSpecName: "utilities") pod "0dbcff6f-67a3-4cea-ade6-ca2dee73c8a7" (UID: "0dbcff6f-67a3-4cea-ade6-ca2dee73c8a7"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 10 12:40:04 crc kubenswrapper[4689]: I1210 12:40:04.821218 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0dbcff6f-67a3-4cea-ade6-ca2dee73c8a7-kube-api-access-c4m8s" (OuterVolumeSpecName: "kube-api-access-c4m8s") pod "0dbcff6f-67a3-4cea-ade6-ca2dee73c8a7" (UID: "0dbcff6f-67a3-4cea-ade6-ca2dee73c8a7"). InnerVolumeSpecName "kube-api-access-c4m8s". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 10 12:40:04 crc kubenswrapper[4689]: I1210 12:40:04.889891 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0dbcff6f-67a3-4cea-ade6-ca2dee73c8a7-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0dbcff6f-67a3-4cea-ade6-ca2dee73c8a7" (UID: "0dbcff6f-67a3-4cea-ade6-ca2dee73c8a7"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 10 12:40:04 crc kubenswrapper[4689]: I1210 12:40:04.914239 4689 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c4m8s\" (UniqueName: \"kubernetes.io/projected/0dbcff6f-67a3-4cea-ade6-ca2dee73c8a7-kube-api-access-c4m8s\") on node \"crc\" DevicePath \"\""
Dec 10 12:40:04 crc kubenswrapper[4689]: I1210 12:40:04.914283 4689 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0dbcff6f-67a3-4cea-ade6-ca2dee73c8a7-utilities\") on node \"crc\" DevicePath \"\""
Dec 10 12:40:04 crc kubenswrapper[4689]: I1210 12:40:04.914297 4689 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0dbcff6f-67a3-4cea-ade6-ca2dee73c8a7-catalog-content\") on node \"crc\" DevicePath \"\""
Dec 10 12:40:04 crc kubenswrapper[4689]: I1210 12:40:04.954079 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tf7c9" event={"ID":"0dbcff6f-67a3-4cea-ade6-ca2dee73c8a7","Type":"ContainerDied","Data":"1c3150e40e59132daa571e36da74fb3d0adb487a34515fc3e81884e0875d31b0"}
Dec 10 12:40:04 crc kubenswrapper[4689]: I1210 12:40:04.954124 4689 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-tf7c9"
Dec 10 12:40:04 crc kubenswrapper[4689]: I1210 12:40:04.954132 4689 scope.go:117] "RemoveContainer" containerID="e8709804594927535a168cf81e86f7dc3b1daa4bdccbaeeab3da90635833f2ba"
Dec 10 12:40:04 crc kubenswrapper[4689]: I1210 12:40:04.990875 4689 scope.go:117] "RemoveContainer" containerID="3243dbf64fc6ff5ab03d3e23ffbacb397db5e294e7db6ccc06450a2584330d48"
Dec 10 12:40:04 crc kubenswrapper[4689]: I1210 12:40:04.995833 4689 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-tf7c9"]
Dec 10 12:40:05 crc kubenswrapper[4689]: I1210 12:40:05.004472 4689 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-tf7c9"]
Dec 10 12:40:05 crc kubenswrapper[4689]: I1210 12:40:05.014713 4689 scope.go:117] "RemoveContainer" containerID="c21727641fd1939388f5d9881999152d430b56f45ce7d2acb5b82dd0073d6f59"
Dec 10 12:40:06 crc kubenswrapper[4689]: I1210 12:40:06.517874 4689 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0dbcff6f-67a3-4cea-ade6-ca2dee73c8a7" path="/var/lib/kubelet/pods/0dbcff6f-67a3-4cea-ade6-ca2dee73c8a7/volumes"
Dec 10 12:40:07 crc kubenswrapper[4689]: I1210 12:40:07.166955 4689 patch_prober.go:28] interesting pod/machine-config-daemon-db6zk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 10 12:40:07 crc kubenswrapper[4689]: I1210 12:40:07.167477 4689 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-db6zk" podUID="a41ebdcd-910f-4669-992d-296e1a92162b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 10 12:40:27 crc kubenswrapper[4689]: I1210 12:40:27.238740 4689 generic.go:334] "Generic (PLEG): container finished" podID="148aec72-272a-46b6-a75f-46dc2b680101" containerID="530480b8827cbc48bb02eae1260986bfa5c794f1b5e9ed6d4e3904a7c27116df" exitCode=0
Dec 10 12:40:27 crc kubenswrapper[4689]: I1210 12:40:27.239609 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"148aec72-272a-46b6-a75f-46dc2b680101","Type":"ContainerDied","Data":"530480b8827cbc48bb02eae1260986bfa5c794f1b5e9ed6d4e3904a7c27116df"}
Dec 10 12:40:28 crc kubenswrapper[4689]: I1210 12:40:28.271094 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"148aec72-272a-46b6-a75f-46dc2b680101","Type":"ContainerStarted","Data":"e04fb673133fedf84caf0c319cd22d73411256893633266c88c148c35b111133"}
Dec 10 12:40:28 crc kubenswrapper[4689]: I1210 12:40:28.273625 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0"
Dec 10 12:40:28 crc kubenswrapper[4689]: I1210 12:40:28.275163 4689 generic.go:334] "Generic (PLEG): container finished" podID="400622a6-9529-4712-85a9-03f48e2b1819" containerID="def4240376ed526166a8fbbc2b4bcd2324b80830fddea1ca44d521e40646d83e" exitCode=0
Dec 10 12:40:28 crc kubenswrapper[4689]: I1210 12:40:28.275222 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"400622a6-9529-4712-85a9-03f48e2b1819","Type":"ContainerDied","Data":"def4240376ed526166a8fbbc2b4bcd2324b80830fddea1ca44d521e40646d83e"}
Dec 10 12:40:28 crc kubenswrapper[4689]: I1210 12:40:28.329170 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=37.32914571 podStartE2EDuration="37.32914571s" podCreationTimestamp="2025-12-10 12:39:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 12:40:28.307620817 +0000 UTC m=+1496.095702035" watchObservedRunningTime="2025-12-10 12:40:28.32914571 +0000 UTC m=+1496.117226878"
Dec 10 12:40:29 crc kubenswrapper[4689]: I1210 12:40:29.288110 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"400622a6-9529-4712-85a9-03f48e2b1819","Type":"ContainerStarted","Data":"dfe80606f887367695a1f221faeb888c5c3823d493337725f54678e8d2e320a6"}
Dec 10 12:40:29 crc kubenswrapper[4689]: I1210 12:40:29.289281 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0"
Dec 10 12:40:29 crc kubenswrapper[4689]: I1210 12:40:29.313209 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=38.313189643 podStartE2EDuration="38.313189643s" podCreationTimestamp="2025-12-10 12:39:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 12:40:29.309554824 +0000 UTC m=+1497.097635952" watchObservedRunningTime="2025-12-10 12:40:29.313189643 +0000 UTC m=+1497.101270781"
Dec 10 12:40:37 crc kubenswrapper[4689]: I1210 12:40:37.166728 4689 patch_prober.go:28] interesting pod/machine-config-daemon-db6zk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 10 12:40:37 crc kubenswrapper[4689]: I1210 12:40:37.167431 4689 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-db6zk" podUID="a41ebdcd-910f-4669-992d-296e1a92162b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 10 12:40:39 crc kubenswrapper[4689]: I1210 12:40:39.020615 4689 scope.go:117] "RemoveContainer" containerID="a7a94454aca86d87588c91f3bc9dad5ca16c502eedd24ea6900927523612e925"
Dec 10 12:40:41 crc kubenswrapper[4689]: I1210 12:40:41.594495 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0"
Dec 10 12:40:42 crc kubenswrapper[4689]: I1210 12:40:42.186496 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0"
Dec 10 12:41:07 crc kubenswrapper[4689]: I1210 12:41:07.167092 4689 patch_prober.go:28] interesting pod/machine-config-daemon-db6zk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 10 12:41:07 crc kubenswrapper[4689]: I1210 12:41:07.167852 4689 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-db6zk" podUID="a41ebdcd-910f-4669-992d-296e1a92162b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 10 12:41:07 crc kubenswrapper[4689]: I1210 12:41:07.167927 4689 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-db6zk"
Dec 10 12:41:07 crc kubenswrapper[4689]: I1210 12:41:07.169408 4689 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"12c9249e05008e8640dad38e14c40fbe87bf771f5c2f12128bca16301c4173e9"} pod="openshift-machine-config-operator/machine-config-daemon-db6zk" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Dec 10 12:41:07 crc kubenswrapper[4689]: I1210 12:41:07.169556 4689 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-db6zk" podUID="a41ebdcd-910f-4669-992d-296e1a92162b" containerName="machine-config-daemon" containerID="cri-o://12c9249e05008e8640dad38e14c40fbe87bf771f5c2f12128bca16301c4173e9" gracePeriod=600
Dec 10 12:41:07 crc kubenswrapper[4689]: E1210 12:41:07.307664 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-db6zk_openshift-machine-config-operator(a41ebdcd-910f-4669-992d-296e1a92162b)\"" pod="openshift-machine-config-operator/machine-config-daemon-db6zk" podUID="a41ebdcd-910f-4669-992d-296e1a92162b"
Dec 10 12:41:07 crc kubenswrapper[4689]: I1210 12:41:07.793573 4689 generic.go:334] "Generic (PLEG): container finished" podID="a41ebdcd-910f-4669-992d-296e1a92162b" containerID="12c9249e05008e8640dad38e14c40fbe87bf771f5c2f12128bca16301c4173e9" exitCode=0
Dec 10 12:41:07 crc kubenswrapper[4689]: I1210 12:41:07.793672 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-db6zk" event={"ID":"a41ebdcd-910f-4669-992d-296e1a92162b","Type":"ContainerDied","Data":"12c9249e05008e8640dad38e14c40fbe87bf771f5c2f12128bca16301c4173e9"}
Dec 10 12:41:07 crc kubenswrapper[4689]: I1210 12:41:07.794055 4689 scope.go:117] "RemoveContainer" containerID="c7f561a90578da7003727a817e3d47870ce7681703176f0fc2d8b1240ef4ff23"
Dec 10 12:41:07 crc kubenswrapper[4689]: I1210 12:41:07.795419 4689 scope.go:117] "RemoveContainer" containerID="12c9249e05008e8640dad38e14c40fbe87bf771f5c2f12128bca16301c4173e9"
Dec 10 12:41:07 crc kubenswrapper[4689]: E1210 12:41:07.796009 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-db6zk_openshift-machine-config-operator(a41ebdcd-910f-4669-992d-296e1a92162b)\"" pod="openshift-machine-config-operator/machine-config-daemon-db6zk" podUID="a41ebdcd-910f-4669-992d-296e1a92162b"
Dec 10 12:41:21 crc kubenswrapper[4689]: I1210 12:41:21.498683 4689 scope.go:117] "RemoveContainer" containerID="12c9249e05008e8640dad38e14c40fbe87bf771f5c2f12128bca16301c4173e9"
Dec 10 12:41:21 crc kubenswrapper[4689]: E1210 12:41:21.499404 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-db6zk_openshift-machine-config-operator(a41ebdcd-910f-4669-992d-296e1a92162b)\"" pod="openshift-machine-config-operator/machine-config-daemon-db6zk" podUID="a41ebdcd-910f-4669-992d-296e1a92162b"
Dec 10 12:41:34 crc kubenswrapper[4689]: I1210 12:41:34.499153 4689 scope.go:117] "RemoveContainer" containerID="12c9249e05008e8640dad38e14c40fbe87bf771f5c2f12128bca16301c4173e9"
Dec 10 12:41:34 crc kubenswrapper[4689]: E1210 12:41:34.500445 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-db6zk_openshift-machine-config-operator(a41ebdcd-910f-4669-992d-296e1a92162b)\"" pod="openshift-machine-config-operator/machine-config-daemon-db6zk" podUID="a41ebdcd-910f-4669-992d-296e1a92162b"
Dec 10 12:41:39 crc kubenswrapper[4689]: I1210 12:41:39.172776 4689 scope.go:117] "RemoveContainer" containerID="f9689f9eb16ba1dce9b55f50f147a30ce8eee94b0207142f44e57ecee48d07a2"
Dec 10 12:41:39 crc kubenswrapper[4689]: I1210 12:41:39.203383 4689 scope.go:117] "RemoveContainer" containerID="926623c39956cbbb50fcf8167fa246964bf21edafad9db44a292a6bf55cf7ddd"
Dec 10 12:41:39 crc kubenswrapper[4689]: I1210 12:41:39.245626 4689 scope.go:117] "RemoveContainer" containerID="2b112f11d8b5b0a0943a19cfd5aaec6931880a4b031de63c920685a7efebd132"
Dec 10 12:41:39 crc kubenswrapper[4689]: I1210 12:41:39.291703 4689 scope.go:117] "RemoveContainer" containerID="2e8dfe7fa3f93de94be7de4bab8e155d46b77a3c80c4d446fa3be93312ae8715"
Dec 10 12:41:39 crc kubenswrapper[4689]: I1210 12:41:39.314716 4689 scope.go:117] "RemoveContainer" containerID="000713094db390ebb45420ebab5003c479fcbd14372fd96c24a71e5573905369"
Dec 10 12:41:39 crc kubenswrapper[4689]: I1210 12:41:39.337200 4689 scope.go:117] "RemoveContainer" containerID="954224d40b7f240c0cce06b7020134601928aad1e5d6674925099e62f2668576"
Dec 10 12:41:39 crc kubenswrapper[4689]: I1210 12:41:39.364254 4689 scope.go:117] "RemoveContainer" containerID="5ac0afc4326b546594b5c3642b4fd2287a1df9e69c249162c16348cdd2e33149"
Dec 10 12:41:39 crc kubenswrapper[4689]: I1210 12:41:39.388238 4689 scope.go:117] "RemoveContainer" containerID="92bfe216c34d0444743b1a4d4ac2c63fc1c79aee442fc3fb497aac04e0c193f6"
Dec 10 12:41:47 crc kubenswrapper[4689]: I1210 12:41:47.499205 4689 scope.go:117] "RemoveContainer" containerID="12c9249e05008e8640dad38e14c40fbe87bf771f5c2f12128bca16301c4173e9"
Dec 10 12:41:47 crc kubenswrapper[4689]: E1210 12:41:47.500417 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-db6zk_openshift-machine-config-operator(a41ebdcd-910f-4669-992d-296e1a92162b)\"" pod="openshift-machine-config-operator/machine-config-daemon-db6zk" podUID="a41ebdcd-910f-4669-992d-296e1a92162b"
Dec 10 12:42:02 crc kubenswrapper[4689]: I1210 12:42:02.515258 4689 scope.go:117] "RemoveContainer" containerID="12c9249e05008e8640dad38e14c40fbe87bf771f5c2f12128bca16301c4173e9"
Dec 10 12:42:02 crc kubenswrapper[4689]: E1210 12:42:02.516038 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-db6zk_openshift-machine-config-operator(a41ebdcd-910f-4669-992d-296e1a92162b)\"" pod="openshift-machine-config-operator/machine-config-daemon-db6zk" podUID="a41ebdcd-910f-4669-992d-296e1a92162b"
Dec 10 12:42:07 crc kubenswrapper[4689]: I1210 12:42:07.160550 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-lfrv4"]
Dec 10 12:42:07 crc kubenswrapper[4689]: E1210 12:42:07.161528 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0dbcff6f-67a3-4cea-ade6-ca2dee73c8a7" containerName="registry-server"
Dec 10 12:42:07 crc kubenswrapper[4689]: I1210 12:42:07.161547 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="0dbcff6f-67a3-4cea-ade6-ca2dee73c8a7" containerName="registry-server"
Dec 10 12:42:07 crc kubenswrapper[4689]: E1210 12:42:07.161564 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0dbcff6f-67a3-4cea-ade6-ca2dee73c8a7" containerName="extract-utilities"
Dec 10 12:42:07 crc kubenswrapper[4689]: I1210 12:42:07.161571 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="0dbcff6f-67a3-4cea-ade6-ca2dee73c8a7" containerName="extract-utilities"
Dec 10 12:42:07 crc kubenswrapper[4689]: E1210 12:42:07.161597 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0dbcff6f-67a3-4cea-ade6-ca2dee73c8a7" containerName="extract-content"
Dec 10 12:42:07 crc kubenswrapper[4689]: I1210 12:42:07.161605 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="0dbcff6f-67a3-4cea-ade6-ca2dee73c8a7" containerName="extract-content"
Dec 10 12:42:07 crc kubenswrapper[4689]: I1210 12:42:07.161869 4689 memory_manager.go:354] "RemoveStaleState removing state" podUID="0dbcff6f-67a3-4cea-ade6-ca2dee73c8a7" containerName="registry-server"
Dec 10 12:42:07 crc kubenswrapper[4689]: I1210 12:42:07.163596 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-lfrv4"
Dec 10 12:42:07 crc kubenswrapper[4689]: I1210 12:42:07.190146 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-lfrv4"]
Dec 10 12:42:07 crc kubenswrapper[4689]: I1210 12:42:07.275416 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hzklh\" (UniqueName: \"kubernetes.io/projected/491a128d-7c21-4405-a5f8-6cf782f46c1d-kube-api-access-hzklh\") pod \"certified-operators-lfrv4\" (UID: \"491a128d-7c21-4405-a5f8-6cf782f46c1d\") " pod="openshift-marketplace/certified-operators-lfrv4"
Dec 10 12:42:07 crc kubenswrapper[4689]: I1210 12:42:07.275785 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/491a128d-7c21-4405-a5f8-6cf782f46c1d-catalog-content\") pod \"certified-operators-lfrv4\" (UID: \"491a128d-7c21-4405-a5f8-6cf782f46c1d\") " pod="openshift-marketplace/certified-operators-lfrv4"
Dec 10 12:42:07 crc kubenswrapper[4689]: I1210 12:42:07.275811 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/491a128d-7c21-4405-a5f8-6cf782f46c1d-utilities\") pod \"certified-operators-lfrv4\" (UID: \"491a128d-7c21-4405-a5f8-6cf782f46c1d\") " pod="openshift-marketplace/certified-operators-lfrv4"
Dec 10 12:42:07 crc kubenswrapper[4689]: I1210 12:42:07.377863 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/491a128d-7c21-4405-a5f8-6cf782f46c1d-catalog-content\") pod \"certified-operators-lfrv4\" (UID: \"491a128d-7c21-4405-a5f8-6cf782f46c1d\") " pod="openshift-marketplace/certified-operators-lfrv4"
Dec 10 12:42:07 crc kubenswrapper[4689]: I1210 12:42:07.377937 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/491a128d-7c21-4405-a5f8-6cf782f46c1d-utilities\") pod \"certified-operators-lfrv4\" (UID: \"491a128d-7c21-4405-a5f8-6cf782f46c1d\") " pod="openshift-marketplace/certified-operators-lfrv4"
Dec 10 12:42:07 crc kubenswrapper[4689]: I1210 12:42:07.378098 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hzklh\" (UniqueName: \"kubernetes.io/projected/491a128d-7c21-4405-a5f8-6cf782f46c1d-kube-api-access-hzklh\") pod \"certified-operators-lfrv4\" (UID: \"491a128d-7c21-4405-a5f8-6cf782f46c1d\") " pod="openshift-marketplace/certified-operators-lfrv4"
Dec 10 12:42:07 crc kubenswrapper[4689]: I1210 12:42:07.378443 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/491a128d-7c21-4405-a5f8-6cf782f46c1d-catalog-content\") pod \"certified-operators-lfrv4\" (UID: \"491a128d-7c21-4405-a5f8-6cf782f46c1d\") " pod="openshift-marketplace/certified-operators-lfrv4"
Dec 10 12:42:07 crc kubenswrapper[4689]: I1210 12:42:07.378518 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/491a128d-7c21-4405-a5f8-6cf782f46c1d-utilities\") pod \"certified-operators-lfrv4\" (UID: \"491a128d-7c21-4405-a5f8-6cf782f46c1d\") " pod="openshift-marketplace/certified-operators-lfrv4"
Dec 10 12:42:07 crc kubenswrapper[4689]: I1210 12:42:07.398031 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hzklh\" (UniqueName: \"kubernetes.io/projected/491a128d-7c21-4405-a5f8-6cf782f46c1d-kube-api-access-hzklh\") pod \"certified-operators-lfrv4\" (UID: \"491a128d-7c21-4405-a5f8-6cf782f46c1d\") " pod="openshift-marketplace/certified-operators-lfrv4"
Dec 10 12:42:07 crc kubenswrapper[4689]: I1210 12:42:07.494244 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-lfrv4"
Dec 10 12:42:08 crc kubenswrapper[4689]: W1210 12:42:08.422646 4689 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod491a128d_7c21_4405_a5f8_6cf782f46c1d.slice/crio-94ada5358c93e7b4a7669363f4dbd67e3660223834fa86b2086d117c13c26be6 WatchSource:0}: Error finding container 94ada5358c93e7b4a7669363f4dbd67e3660223834fa86b2086d117c13c26be6: Status 404 returned error can't find the container with id 94ada5358c93e7b4a7669363f4dbd67e3660223834fa86b2086d117c13c26be6
Dec 10 12:42:08 crc kubenswrapper[4689]: I1210 12:42:08.425099 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-lfrv4"]
Dec 10 12:42:08 crc kubenswrapper[4689]: I1210 12:42:08.531151 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lfrv4" event={"ID":"491a128d-7c21-4405-a5f8-6cf782f46c1d","Type":"ContainerStarted","Data":"94ada5358c93e7b4a7669363f4dbd67e3660223834fa86b2086d117c13c26be6"}
Dec 10 12:42:09 crc kubenswrapper[4689]: I1210 12:42:09.532125 4689 generic.go:334] "Generic (PLEG): container finished" podID="491a128d-7c21-4405-a5f8-6cf782f46c1d" containerID="9eaa25090d672e97cefdb74e5a1a7ecf3c84e1d47df194ff72fdc799d01d0b80" exitCode=0
Dec 10 12:42:09 crc kubenswrapper[4689]: I1210 12:42:09.532194 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lfrv4" event={"ID":"491a128d-7c21-4405-a5f8-6cf782f46c1d","Type":"ContainerDied","Data":"9eaa25090d672e97cefdb74e5a1a7ecf3c84e1d47df194ff72fdc799d01d0b80"}
Dec 10 12:42:11 crc kubenswrapper[4689]: I1210 12:42:11.552241 4689 generic.go:334] "Generic (PLEG): container finished" podID="491a128d-7c21-4405-a5f8-6cf782f46c1d" containerID="6e09892fe96c9ee587ee51d39bbb28b56743ec58f55e6b39d0d5873dc57a9b01" exitCode=0
Dec 10 12:42:11 crc kubenswrapper[4689]: I1210 12:42:11.552293 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lfrv4" event={"ID":"491a128d-7c21-4405-a5f8-6cf782f46c1d","Type":"ContainerDied","Data":"6e09892fe96c9ee587ee51d39bbb28b56743ec58f55e6b39d0d5873dc57a9b01"}
Dec 10 12:42:12 crc kubenswrapper[4689]: I1210 12:42:12.564256 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lfrv4" event={"ID":"491a128d-7c21-4405-a5f8-6cf782f46c1d","Type":"ContainerStarted","Data":"1d56695cfe2230d69dad0cd8624292c2ad514ddb2718631205baf8ce7bb45e92"}
Dec 10 12:42:12 crc kubenswrapper[4689]: I1210 12:42:12.591257 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-lfrv4" podStartSLOduration=2.957797995 podStartE2EDuration="5.591240902s" podCreationTimestamp="2025-12-10 12:42:07 +0000 UTC" firstStartedPulling="2025-12-10 12:42:09.534642204 +0000 UTC m=+1597.322723342" lastFinishedPulling="2025-12-10 12:42:12.168085101 +0000 UTC m=+1599.956166249" observedRunningTime="2025-12-10 12:42:12.582167176 +0000 UTC m=+1600.370248314" watchObservedRunningTime="2025-12-10 12:42:12.591240902 +0000 UTC m=+1600.379322030"
Dec 10 12:42:13 crc kubenswrapper[4689]: I1210 12:42:13.498285 4689 scope.go:117] "RemoveContainer" containerID="12c9249e05008e8640dad38e14c40fbe87bf771f5c2f12128bca16301c4173e9"
Dec 10 12:42:13 crc kubenswrapper[4689]: E1210 12:42:13.498860 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-db6zk_openshift-machine-config-operator(a41ebdcd-910f-4669-992d-296e1a92162b)\"" pod="openshift-machine-config-operator/machine-config-daemon-db6zk" podUID="a41ebdcd-910f-4669-992d-296e1a92162b"
Dec 10 12:42:17 crc kubenswrapper[4689]: I1210 12:42:17.494613 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-lfrv4"
Dec 10 12:42:17 crc kubenswrapper[4689]: I1210 12:42:17.495009 4689 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-lfrv4"
Dec 10 12:42:17 crc kubenswrapper[4689]: I1210 12:42:17.557785 4689 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-lfrv4"
Dec 10 12:42:17 crc kubenswrapper[4689]: I1210 12:42:17.658861 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-lfrv4"
Dec 10 12:42:17 crc kubenswrapper[4689]: I1210 12:42:17.799931 4689 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-lfrv4"]
Dec 10 12:42:19 crc kubenswrapper[4689]: I1210 12:42:19.634841 4689 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-lfrv4" podUID="491a128d-7c21-4405-a5f8-6cf782f46c1d" containerName="registry-server" containerID="cri-o://1d56695cfe2230d69dad0cd8624292c2ad514ddb2718631205baf8ce7bb45e92" gracePeriod=2
Dec 10 12:42:20 crc kubenswrapper[4689]: I1210 12:42:20.101015 4689 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-lfrv4"
Dec 10 12:42:20 crc kubenswrapper[4689]: I1210 12:42:20.214986 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/491a128d-7c21-4405-a5f8-6cf782f46c1d-utilities\") pod \"491a128d-7c21-4405-a5f8-6cf782f46c1d\" (UID: \"491a128d-7c21-4405-a5f8-6cf782f46c1d\") "
Dec 10 12:42:20 crc kubenswrapper[4689]: I1210 12:42:20.215159 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/491a128d-7c21-4405-a5f8-6cf782f46c1d-catalog-content\") pod \"491a128d-7c21-4405-a5f8-6cf782f46c1d\" (UID: \"491a128d-7c21-4405-a5f8-6cf782f46c1d\") "
Dec 10 12:42:20 crc kubenswrapper[4689]: I1210 12:42:20.215198 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hzklh\" (UniqueName: \"kubernetes.io/projected/491a128d-7c21-4405-a5f8-6cf782f46c1d-kube-api-access-hzklh\") pod \"491a128d-7c21-4405-a5f8-6cf782f46c1d\" (UID: \"491a128d-7c21-4405-a5f8-6cf782f46c1d\") "
Dec 10 12:42:20 crc kubenswrapper[4689]: I1210 12:42:20.216157 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/491a128d-7c21-4405-a5f8-6cf782f46c1d-utilities" (OuterVolumeSpecName: "utilities") pod "491a128d-7c21-4405-a5f8-6cf782f46c1d" (UID: "491a128d-7c21-4405-a5f8-6cf782f46c1d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 10 12:42:20 crc kubenswrapper[4689]: I1210 12:42:20.239782 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/491a128d-7c21-4405-a5f8-6cf782f46c1d-kube-api-access-hzklh" (OuterVolumeSpecName: "kube-api-access-hzklh") pod "491a128d-7c21-4405-a5f8-6cf782f46c1d" (UID: "491a128d-7c21-4405-a5f8-6cf782f46c1d"). InnerVolumeSpecName "kube-api-access-hzklh". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 10 12:42:20 crc kubenswrapper[4689]: I1210 12:42:20.265058 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/491a128d-7c21-4405-a5f8-6cf782f46c1d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "491a128d-7c21-4405-a5f8-6cf782f46c1d" (UID: "491a128d-7c21-4405-a5f8-6cf782f46c1d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 10 12:42:20 crc kubenswrapper[4689]: I1210 12:42:20.317333 4689 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/491a128d-7c21-4405-a5f8-6cf782f46c1d-utilities\") on node \"crc\" DevicePath \"\""
Dec 10 12:42:20 crc kubenswrapper[4689]: I1210 12:42:20.317371 4689 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/491a128d-7c21-4405-a5f8-6cf782f46c1d-catalog-content\") on node \"crc\" DevicePath \"\""
Dec 10 12:42:20 crc kubenswrapper[4689]: I1210 12:42:20.317383 4689 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hzklh\" (UniqueName: \"kubernetes.io/projected/491a128d-7c21-4405-a5f8-6cf782f46c1d-kube-api-access-hzklh\") on node \"crc\" DevicePath \"\""
Dec 10 12:42:20 crc kubenswrapper[4689]: I1210 12:42:20.645692 4689 generic.go:334] "Generic (PLEG): container finished" podID="491a128d-7c21-4405-a5f8-6cf782f46c1d" containerID="1d56695cfe2230d69dad0cd8624292c2ad514ddb2718631205baf8ce7bb45e92" exitCode=0
Dec 10 12:42:20 crc kubenswrapper[4689]: I1210 12:42:20.645754 4689 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-lfrv4"
Dec 10 12:42:20 crc kubenswrapper[4689]: I1210 12:42:20.645772 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lfrv4" event={"ID":"491a128d-7c21-4405-a5f8-6cf782f46c1d","Type":"ContainerDied","Data":"1d56695cfe2230d69dad0cd8624292c2ad514ddb2718631205baf8ce7bb45e92"}
Dec 10 12:42:20 crc kubenswrapper[4689]: I1210 12:42:20.646170 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lfrv4" event={"ID":"491a128d-7c21-4405-a5f8-6cf782f46c1d","Type":"ContainerDied","Data":"94ada5358c93e7b4a7669363f4dbd67e3660223834fa86b2086d117c13c26be6"}
Dec 10 12:42:20 crc kubenswrapper[4689]: I1210 12:42:20.646192 4689 scope.go:117] "RemoveContainer" containerID="1d56695cfe2230d69dad0cd8624292c2ad514ddb2718631205baf8ce7bb45e92"
Dec 10 12:42:20 crc kubenswrapper[4689]: I1210 12:42:20.675377 4689 scope.go:117] "RemoveContainer" containerID="6e09892fe96c9ee587ee51d39bbb28b56743ec58f55e6b39d0d5873dc57a9b01"
Dec 10 12:42:20 crc kubenswrapper[4689]: I1210 12:42:20.677090 4689 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-lfrv4"]
Dec 10 12:42:20 crc kubenswrapper[4689]: I1210 12:42:20.697134 4689 scope.go:117] "RemoveContainer" containerID="9eaa25090d672e97cefdb74e5a1a7ecf3c84e1d47df194ff72fdc799d01d0b80"
Dec 10 12:42:20 crc kubenswrapper[4689]: I1210 12:42:20.702536 4689 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-lfrv4"]
Dec 10 12:42:20 crc kubenswrapper[4689]: I1210 12:42:20.750038 4689 scope.go:117] "RemoveContainer" containerID="1d56695cfe2230d69dad0cd8624292c2ad514ddb2718631205baf8ce7bb45e92"
Dec 10 12:42:20 crc kubenswrapper[4689]: E1210 12:42:20.750484 4689 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1d56695cfe2230d69dad0cd8624292c2ad514ddb2718631205baf8ce7bb45e92\": container with ID starting with 1d56695cfe2230d69dad0cd8624292c2ad514ddb2718631205baf8ce7bb45e92 not found: ID does not exist" containerID="1d56695cfe2230d69dad0cd8624292c2ad514ddb2718631205baf8ce7bb45e92"
Dec 10 12:42:20 crc kubenswrapper[4689]: I1210 12:42:20.750538 4689 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1d56695cfe2230d69dad0cd8624292c2ad514ddb2718631205baf8ce7bb45e92"} err="failed to get container status \"1d56695cfe2230d69dad0cd8624292c2ad514ddb2718631205baf8ce7bb45e92\": rpc error: code = NotFound desc = could not find container \"1d56695cfe2230d69dad0cd8624292c2ad514ddb2718631205baf8ce7bb45e92\": container with ID starting with 1d56695cfe2230d69dad0cd8624292c2ad514ddb2718631205baf8ce7bb45e92 not found: ID does not exist"
Dec 10 12:42:20 crc kubenswrapper[4689]: I1210 12:42:20.750570 4689 scope.go:117] "RemoveContainer" containerID="6e09892fe96c9ee587ee51d39bbb28b56743ec58f55e6b39d0d5873dc57a9b01"
Dec 10 12:42:20 crc kubenswrapper[4689]: E1210 12:42:20.750876 4689 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6e09892fe96c9ee587ee51d39bbb28b56743ec58f55e6b39d0d5873dc57a9b01\": container with ID starting with 6e09892fe96c9ee587ee51d39bbb28b56743ec58f55e6b39d0d5873dc57a9b01 not found: ID does not exist" containerID="6e09892fe96c9ee587ee51d39bbb28b56743ec58f55e6b39d0d5873dc57a9b01"
Dec 10 12:42:20 crc kubenswrapper[4689]: I1210 12:42:20.750918 4689 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6e09892fe96c9ee587ee51d39bbb28b56743ec58f55e6b39d0d5873dc57a9b01"} err="failed to get container status \"6e09892fe96c9ee587ee51d39bbb28b56743ec58f55e6b39d0d5873dc57a9b01\": rpc error: code = NotFound desc = could not find container \"6e09892fe96c9ee587ee51d39bbb28b56743ec58f55e6b39d0d5873dc57a9b01\": container with ID starting with 6e09892fe96c9ee587ee51d39bbb28b56743ec58f55e6b39d0d5873dc57a9b01 not found: ID does not exist"
Dec 10 12:42:20 crc kubenswrapper[4689]: I1210 12:42:20.750954 4689 scope.go:117] "RemoveContainer" containerID="9eaa25090d672e97cefdb74e5a1a7ecf3c84e1d47df194ff72fdc799d01d0b80"
Dec 10 12:42:20 crc kubenswrapper[4689]: E1210 12:42:20.751231 4689 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9eaa25090d672e97cefdb74e5a1a7ecf3c84e1d47df194ff72fdc799d01d0b80\": container with ID starting with 9eaa25090d672e97cefdb74e5a1a7ecf3c84e1d47df194ff72fdc799d01d0b80 not found: ID does not exist" containerID="9eaa25090d672e97cefdb74e5a1a7ecf3c84e1d47df194ff72fdc799d01d0b80"
Dec 10 12:42:20 crc kubenswrapper[4689]: I1210 12:42:20.751263 4689 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9eaa25090d672e97cefdb74e5a1a7ecf3c84e1d47df194ff72fdc799d01d0b80"} err="failed to get container status \"9eaa25090d672e97cefdb74e5a1a7ecf3c84e1d47df194ff72fdc799d01d0b80\": rpc error: code = NotFound desc = could not find container \"9eaa25090d672e97cefdb74e5a1a7ecf3c84e1d47df194ff72fdc799d01d0b80\": container with ID starting with 9eaa25090d672e97cefdb74e5a1a7ecf3c84e1d47df194ff72fdc799d01d0b80 not found: ID does not exist"
Dec 10 12:42:22 crc kubenswrapper[4689]: I1210 12:42:22.509438 4689 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="491a128d-7c21-4405-a5f8-6cf782f46c1d" path="/var/lib/kubelet/pods/491a128d-7c21-4405-a5f8-6cf782f46c1d/volumes"
Dec 10 12:42:24 crc kubenswrapper[4689]: I1210 12:42:24.501597 4689 scope.go:117] "RemoveContainer" containerID="12c9249e05008e8640dad38e14c40fbe87bf771f5c2f12128bca16301c4173e9"
Dec 10 12:42:24 crc kubenswrapper[4689]: E1210 12:42:24.502189 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-db6zk_openshift-machine-config-operator(a41ebdcd-910f-4669-992d-296e1a92162b)\"" pod="openshift-machine-config-operator/machine-config-daemon-db6zk" podUID="a41ebdcd-910f-4669-992d-296e1a92162b"
Dec 10 12:42:36 crc kubenswrapper[4689]: I1210 12:42:36.498720 4689 scope.go:117] "RemoveContainer" containerID="12c9249e05008e8640dad38e14c40fbe87bf771f5c2f12128bca16301c4173e9"
Dec 10 12:42:36 crc kubenswrapper[4689]: E1210 12:42:36.499631 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-db6zk_openshift-machine-config-operator(a41ebdcd-910f-4669-992d-296e1a92162b)\"" pod="openshift-machine-config-operator/machine-config-daemon-db6zk" podUID="a41ebdcd-910f-4669-992d-296e1a92162b"
Dec 10 12:42:51 crc kubenswrapper[4689]: I1210 12:42:51.497965 4689 scope.go:117] "RemoveContainer" containerID="12c9249e05008e8640dad38e14c40fbe87bf771f5c2f12128bca16301c4173e9"
Dec 10 12:42:51 crc kubenswrapper[4689]: E1210 12:42:51.498566 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-db6zk_openshift-machine-config-operator(a41ebdcd-910f-4669-992d-296e1a92162b)\"" pod="openshift-machine-config-operator/machine-config-daemon-db6zk" podUID="a41ebdcd-910f-4669-992d-296e1a92162b"
Dec 10 12:43:04 crc kubenswrapper[4689]: I1210 12:43:04.498897 4689 scope.go:117] "RemoveContainer" containerID="12c9249e05008e8640dad38e14c40fbe87bf771f5c2f12128bca16301c4173e9"
Dec 10 12:43:04 crc kubenswrapper[4689]: E1210 12:43:04.499702 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-db6zk_openshift-machine-config-operator(a41ebdcd-910f-4669-992d-296e1a92162b)\"" pod="openshift-machine-config-operator/machine-config-daemon-db6zk" podUID="a41ebdcd-910f-4669-992d-296e1a92162b"
Dec 10 12:43:18 crc kubenswrapper[4689]: I1210 12:43:18.498955 4689 scope.go:117] "RemoveContainer" containerID="12c9249e05008e8640dad38e14c40fbe87bf771f5c2f12128bca16301c4173e9"
Dec 10 12:43:18 crc kubenswrapper[4689]: E1210 12:43:18.499862 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-db6zk_openshift-machine-config-operator(a41ebdcd-910f-4669-992d-296e1a92162b)\"" pod="openshift-machine-config-operator/machine-config-daemon-db6zk" podUID="a41ebdcd-910f-4669-992d-296e1a92162b"
Dec 10 12:43:29 crc kubenswrapper[4689]: I1210 12:43:29.498508 4689 scope.go:117] "RemoveContainer" containerID="12c9249e05008e8640dad38e14c40fbe87bf771f5c2f12128bca16301c4173e9"
Dec 10 12:43:29 crc kubenswrapper[4689]: E1210 12:43:29.499426 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon
pod=machine-config-daemon-db6zk_openshift-machine-config-operator(a41ebdcd-910f-4669-992d-296e1a92162b)\"" pod="openshift-machine-config-operator/machine-config-daemon-db6zk" podUID="a41ebdcd-910f-4669-992d-296e1a92162b" Dec 10 12:43:39 crc kubenswrapper[4689]: I1210 12:43:39.557540 4689 scope.go:117] "RemoveContainer" containerID="415fa72de64abba3b2e25fddfeed48f2e20d39f48fe962c9f5a5d9ba77ab24ae" Dec 10 12:43:39 crc kubenswrapper[4689]: I1210 12:43:39.616909 4689 scope.go:117] "RemoveContainer" containerID="ed7eec532d857d7b5ab6b165edf17db6598b1bb0893f029614e7b90217c227ac" Dec 10 12:43:39 crc kubenswrapper[4689]: I1210 12:43:39.642457 4689 scope.go:117] "RemoveContainer" containerID="256f6d4b99b62b9d777833314af7c5e3614c45ffb55cf1bc7e37a33caccc041c" Dec 10 12:43:39 crc kubenswrapper[4689]: I1210 12:43:39.665618 4689 scope.go:117] "RemoveContainer" containerID="3d93864792f60c4460d29bf6d581a81648bfa45f8e5bcf0f0c5dcf612475d11c" Dec 10 12:43:41 crc kubenswrapper[4689]: I1210 12:43:41.497725 4689 scope.go:117] "RemoveContainer" containerID="12c9249e05008e8640dad38e14c40fbe87bf771f5c2f12128bca16301c4173e9" Dec 10 12:43:41 crc kubenswrapper[4689]: E1210 12:43:41.499272 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-db6zk_openshift-machine-config-operator(a41ebdcd-910f-4669-992d-296e1a92162b)\"" pod="openshift-machine-config-operator/machine-config-daemon-db6zk" podUID="a41ebdcd-910f-4669-992d-296e1a92162b" Dec 10 12:43:55 crc kubenswrapper[4689]: I1210 12:43:55.498484 4689 scope.go:117] "RemoveContainer" containerID="12c9249e05008e8640dad38e14c40fbe87bf771f5c2f12128bca16301c4173e9" Dec 10 12:43:55 crc kubenswrapper[4689]: E1210 12:43:55.499200 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-db6zk_openshift-machine-config-operator(a41ebdcd-910f-4669-992d-296e1a92162b)\"" pod="openshift-machine-config-operator/machine-config-daemon-db6zk" podUID="a41ebdcd-910f-4669-992d-296e1a92162b" Dec 10 12:44:06 crc kubenswrapper[4689]: I1210 12:44:06.499379 4689 scope.go:117] "RemoveContainer" containerID="12c9249e05008e8640dad38e14c40fbe87bf771f5c2f12128bca16301c4173e9" Dec 10 12:44:06 crc kubenswrapper[4689]: E1210 12:44:06.500465 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-db6zk_openshift-machine-config-operator(a41ebdcd-910f-4669-992d-296e1a92162b)\"" pod="openshift-machine-config-operator/machine-config-daemon-db6zk" podUID="a41ebdcd-910f-4669-992d-296e1a92162b" Dec 10 12:44:09 crc kubenswrapper[4689]: I1210 12:44:09.079499 4689 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-create-m9tln"] Dec 10 12:44:09 crc kubenswrapper[4689]: I1210 12:44:09.101030 4689 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-5b3e-account-create-update-s689x"] Dec 10 12:44:09 crc kubenswrapper[4689]: I1210 12:44:09.141467 4689 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-create-m9tln"] Dec 10 12:44:09 crc kubenswrapper[4689]: I1210 12:44:09.143680 4689 kubelet.go:2431] "SyncLoop REMOVE" 
source="api" pods=["openstack/glance-5b3e-account-create-update-s689x"] Dec 10 12:44:10 crc kubenswrapper[4689]: I1210 12:44:10.509010 4689 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="63315531-f260-4b52-ad96-ea4d24185d13" path="/var/lib/kubelet/pods/63315531-f260-4b52-ad96-ea4d24185d13/volumes" Dec 10 12:44:10 crc kubenswrapper[4689]: I1210 12:44:10.510031 4689 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c1ed5623-54eb-4955-800d-273d08df144a" path="/var/lib/kubelet/pods/c1ed5623-54eb-4955-800d-273d08df144a/volumes" Dec 10 12:44:14 crc kubenswrapper[4689]: I1210 12:44:14.060793 4689 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-create-v4mw7"] Dec 10 12:44:14 crc kubenswrapper[4689]: I1210 12:44:14.071845 4689 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-create-v4mw7"] Dec 10 12:44:14 crc kubenswrapper[4689]: I1210 12:44:14.510194 4689 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6dcd6604-d6d4-4147-ab88-eefb780a33b4" path="/var/lib/kubelet/pods/6dcd6604-d6d4-4147-ab88-eefb780a33b4/volumes" Dec 10 12:44:15 crc kubenswrapper[4689]: I1210 12:44:15.029494 4689 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-5bcc-account-create-update-c6df9"] Dec 10 12:44:15 crc kubenswrapper[4689]: I1210 12:44:15.039810 4689 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-5bcc-account-create-update-c6df9"] Dec 10 12:44:16 crc kubenswrapper[4689]: I1210 12:44:16.035934 4689 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-create-jkpzl"] Dec 10 12:44:16 crc kubenswrapper[4689]: I1210 12:44:16.045507 4689 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-78a1-account-create-update-5l2cc"] Dec 10 12:44:16 crc kubenswrapper[4689]: I1210 12:44:16.058125 4689 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-78a1-account-create-update-5l2cc"] Dec 10 12:44:16 crc kubenswrapper[4689]: I1210 12:44:16.065946 4689 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-create-jkpzl"] Dec 10 12:44:16 crc kubenswrapper[4689]: I1210 12:44:16.521727 4689 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6369f699-9491-404b-beab-9bb964b73037" path="/var/lib/kubelet/pods/6369f699-9491-404b-beab-9bb964b73037/volumes" Dec 10 12:44:16 crc kubenswrapper[4689]: I1210 12:44:16.523749 4689 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7d4e9756-5327-4bfd-8ac6-b4bb2b08e97c" path="/var/lib/kubelet/pods/7d4e9756-5327-4bfd-8ac6-b4bb2b08e97c/volumes" Dec 10 12:44:16 crc kubenswrapper[4689]: I1210 12:44:16.525707 4689 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8fe98dc6-3d5b-47b7-a728-2dca96d4e4ed" path="/var/lib/kubelet/pods/8fe98dc6-3d5b-47b7-a728-2dca96d4e4ed/volumes" Dec 10 12:44:20 crc kubenswrapper[4689]: I1210 12:44:20.510956 4689 scope.go:117] "RemoveContainer" containerID="12c9249e05008e8640dad38e14c40fbe87bf771f5c2f12128bca16301c4173e9" Dec 10 12:44:20 crc kubenswrapper[4689]: E1210 12:44:20.511986 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-db6zk_openshift-machine-config-operator(a41ebdcd-910f-4669-992d-296e1a92162b)\"" pod="openshift-machine-config-operator/machine-config-daemon-db6zk" 
podUID="a41ebdcd-910f-4669-992d-296e1a92162b" Dec 10 12:44:31 crc kubenswrapper[4689]: I1210 12:44:31.497888 4689 scope.go:117] "RemoveContainer" containerID="12c9249e05008e8640dad38e14c40fbe87bf771f5c2f12128bca16301c4173e9" Dec 10 12:44:31 crc kubenswrapper[4689]: E1210 12:44:31.499066 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-db6zk_openshift-machine-config-operator(a41ebdcd-910f-4669-992d-296e1a92162b)\"" pod="openshift-machine-config-operator/machine-config-daemon-db6zk" podUID="a41ebdcd-910f-4669-992d-296e1a92162b" Dec 10 12:44:33 crc kubenswrapper[4689]: I1210 12:44:33.038851 4689 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-sync-44zkc"] Dec 10 12:44:33 crc kubenswrapper[4689]: I1210 12:44:33.080271 4689 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-sync-44zkc"] Dec 10 12:44:34 crc kubenswrapper[4689]: I1210 12:44:34.507834 4689 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="18511d72-4b0d-401d-aa20-6cbf2b26abc6" path="/var/lib/kubelet/pods/18511d72-4b0d-401d-aa20-6cbf2b26abc6/volumes" Dec 10 12:44:39 crc kubenswrapper[4689]: I1210 12:44:39.724786 4689 scope.go:117] "RemoveContainer" containerID="d015a13979549958c203081f1fbd0743fdca07f838d191a9650aa288e63bf049" Dec 10 12:44:39 crc kubenswrapper[4689]: I1210 12:44:39.746299 4689 scope.go:117] "RemoveContainer" containerID="6dc1a7557da0a3e61b7fa63fbb1d322d95962aa27ab79c266214f0b276052f2d" Dec 10 12:44:39 crc kubenswrapper[4689]: I1210 12:44:39.801679 4689 scope.go:117] "RemoveContainer" containerID="b0b9531bb554fe5c472d19985ae9d3e7d8db240c2e2461b36beaad6931a1333d" Dec 10 12:44:39 crc kubenswrapper[4689]: I1210 12:44:39.853163 4689 scope.go:117] "RemoveContainer" containerID="7066a13b2bd8f18a60b91cb716c51d10f236e4d4f171341e012ff7beacce10d0" Dec 10 12:44:39 crc kubenswrapper[4689]: I1210 12:44:39.897660 4689 scope.go:117] "RemoveContainer" containerID="634bc2a679f5fc4e6cb4c88d24e3c29578e9708775c4804f6b82f1d2676f2e78" Dec 10 12:44:39 crc kubenswrapper[4689]: I1210 12:44:39.940184 4689 scope.go:117] "RemoveContainer" containerID="24701f6ac8b863177de073f38488473596a039970623ed8cc5a548dcf2680c48" Dec 10 12:44:39 crc kubenswrapper[4689]: I1210 12:44:39.981497 4689 scope.go:117] "RemoveContainer" containerID="9c2c50920fbc2c8656874b4baa8d02921db449f822ca5835a71bee20b9de126b" Dec 10 12:44:41 crc kubenswrapper[4689]: I1210 12:44:41.050405 4689 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-create-6h5md"] Dec 10 12:44:41 crc kubenswrapper[4689]: I1210 12:44:41.060119 4689 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-create-9t4ht"] Dec 10 12:44:41 crc kubenswrapper[4689]: I1210 12:44:41.070100 4689 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-create-kr898"] Dec 10 12:44:41 crc kubenswrapper[4689]: I1210 12:44:41.081360 4689 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-create-kr898"] Dec 10 12:44:41 crc kubenswrapper[4689]: I1210 12:44:41.092140 4689 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-3216-account-create-update-m6phc"] Dec 10 12:44:41 crc kubenswrapper[4689]: I1210 12:44:41.119048 4689 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-create-6h5md"] Dec 10 12:44:41 crc kubenswrapper[4689]: I1210 
12:44:41.133806 4689 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-create-9t4ht"] Dec 10 12:44:41 crc kubenswrapper[4689]: I1210 12:44:41.144208 4689 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-3216-account-create-update-m6phc"] Dec 10 12:44:42 crc kubenswrapper[4689]: I1210 12:44:42.509836 4689 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="10d3bc49-4f1e-4e47-b49c-ebda98a43aa5" path="/var/lib/kubelet/pods/10d3bc49-4f1e-4e47-b49c-ebda98a43aa5/volumes" Dec 10 12:44:42 crc kubenswrapper[4689]: I1210 12:44:42.510640 4689 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="36da9d71-f25c-4a4b-96b7-439c4f96bda8" path="/var/lib/kubelet/pods/36da9d71-f25c-4a4b-96b7-439c4f96bda8/volumes" Dec 10 12:44:42 crc kubenswrapper[4689]: I1210 12:44:42.511192 4689 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="387ed17c-05e1-4311-853d-cae57b4bdbec" path="/var/lib/kubelet/pods/387ed17c-05e1-4311-853d-cae57b4bdbec/volumes" Dec 10 12:44:42 crc kubenswrapper[4689]: I1210 12:44:42.511711 4689 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d1fa6986-f4e1-4b99-8def-9f9017c41cb7" path="/var/lib/kubelet/pods/d1fa6986-f4e1-4b99-8def-9f9017c41cb7/volumes" Dec 10 12:44:45 crc kubenswrapper[4689]: I1210 12:44:45.049874 4689 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-b5f6-account-create-update-scpqx"] Dec 10 12:44:45 crc kubenswrapper[4689]: I1210 12:44:45.060844 4689 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-9e51-account-create-update-pdw2b"] Dec 10 12:44:45 crc kubenswrapper[4689]: I1210 12:44:45.072498 4689 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-b5f6-account-create-update-scpqx"] Dec 10 12:44:45 crc kubenswrapper[4689]: I1210 12:44:45.081146 4689 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-9e51-account-create-update-pdw2b"] Dec 10 12:44:45 crc kubenswrapper[4689]: I1210 12:44:45.498128 4689 scope.go:117] "RemoveContainer" containerID="12c9249e05008e8640dad38e14c40fbe87bf771f5c2f12128bca16301c4173e9" Dec 10 12:44:45 crc kubenswrapper[4689]: E1210 12:44:45.498423 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-db6zk_openshift-machine-config-operator(a41ebdcd-910f-4669-992d-296e1a92162b)\"" pod="openshift-machine-config-operator/machine-config-daemon-db6zk" podUID="a41ebdcd-910f-4669-992d-296e1a92162b" Dec 10 12:44:46 crc kubenswrapper[4689]: I1210 12:44:46.512638 4689 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="006043c5-508c-44c9-9b45-73756a05c173" path="/var/lib/kubelet/pods/006043c5-508c-44c9-9b45-73756a05c173/volumes" Dec 10 12:44:46 crc kubenswrapper[4689]: I1210 12:44:46.513884 4689 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="98a0a6c5-b397-41af-8c50-6b1c662515e0" path="/var/lib/kubelet/pods/98a0a6c5-b397-41af-8c50-6b1c662515e0/volumes" Dec 10 12:44:51 crc kubenswrapper[4689]: I1210 12:44:51.030613 4689 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-sync-brgtz"] Dec 10 12:44:51 crc kubenswrapper[4689]: I1210 12:44:51.041448 4689 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-sync-brgtz"] Dec 10 12:44:52 crc kubenswrapper[4689]: I1210 12:44:52.509599 4689 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b8336932-b518-4fd5-8a96-895454de5855" path="/var/lib/kubelet/pods/b8336932-b518-4fd5-8a96-895454de5855/volumes" Dec 10 12:44:59 crc kubenswrapper[4689]: I1210 12:44:59.042381 4689 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ironic-32f3-account-create-update-qwdgk"] Dec 10 12:44:59 crc kubenswrapper[4689]: I1210 12:44:59.050962 4689 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ironic-db-create-c5fbw"] Dec 10 12:44:59 crc kubenswrapper[4689]: I1210 12:44:59.059214 4689 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ironic-32f3-account-create-update-qwdgk"] Dec 10 12:44:59 crc kubenswrapper[4689]: I1210 12:44:59.068803 4689 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ironic-db-create-c5fbw"] Dec 10 12:44:59 crc kubenswrapper[4689]: I1210 12:44:59.498485 4689 scope.go:117] "RemoveContainer" containerID="12c9249e05008e8640dad38e14c40fbe87bf771f5c2f12128bca16301c4173e9" Dec 10 12:44:59 crc kubenswrapper[4689]: E1210 12:44:59.498898 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-db6zk_openshift-machine-config-operator(a41ebdcd-910f-4669-992d-296e1a92162b)\"" pod="openshift-machine-config-operator/machine-config-daemon-db6zk" podUID="a41ebdcd-910f-4669-992d-296e1a92162b" Dec 10 12:45:00 crc kubenswrapper[4689]: I1210 12:45:00.144206 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29422845-nkchr"] Dec 10 12:45:00 crc kubenswrapper[4689]: E1210 12:45:00.144677 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="491a128d-7c21-4405-a5f8-6cf782f46c1d" containerName="extract-utilities" Dec 10 12:45:00 crc kubenswrapper[4689]: I1210 12:45:00.144694 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="491a128d-7c21-4405-a5f8-6cf782f46c1d" containerName="extract-utilities" Dec 10 12:45:00 crc kubenswrapper[4689]: E1210 12:45:00.144714 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="491a128d-7c21-4405-a5f8-6cf782f46c1d" containerName="registry-server" Dec 10 12:45:00 crc kubenswrapper[4689]: I1210 12:45:00.144723 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="491a128d-7c21-4405-a5f8-6cf782f46c1d" containerName="registry-server" Dec 10 12:45:00 crc kubenswrapper[4689]: E1210 12:45:00.144734 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="491a128d-7c21-4405-a5f8-6cf782f46c1d" containerName="extract-content" Dec 10 12:45:00 crc kubenswrapper[4689]: I1210 12:45:00.144743 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="491a128d-7c21-4405-a5f8-6cf782f46c1d" containerName="extract-content" Dec 10 12:45:00 crc kubenswrapper[4689]: I1210 12:45:00.144990 4689 memory_manager.go:354] "RemoveStaleState removing state" podUID="491a128d-7c21-4405-a5f8-6cf782f46c1d" containerName="registry-server" Dec 10 12:45:00 crc kubenswrapper[4689]: I1210 12:45:00.145759 4689 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29422845-nkchr" Dec 10 12:45:00 crc kubenswrapper[4689]: I1210 12:45:00.147955 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Dec 10 12:45:00 crc kubenswrapper[4689]: I1210 12:45:00.147988 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Dec 10 12:45:00 crc kubenswrapper[4689]: I1210 12:45:00.156302 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29422845-nkchr"] Dec 10 12:45:00 crc kubenswrapper[4689]: I1210 12:45:00.237840 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/82df9750-fefd-4ecc-aca9-059d0db14d6b-secret-volume\") pod \"collect-profiles-29422845-nkchr\" (UID: \"82df9750-fefd-4ecc-aca9-059d0db14d6b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29422845-nkchr" Dec 10 12:45:00 crc kubenswrapper[4689]: I1210 12:45:00.237901 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/82df9750-fefd-4ecc-aca9-059d0db14d6b-config-volume\") pod \"collect-profiles-29422845-nkchr\" (UID: \"82df9750-fefd-4ecc-aca9-059d0db14d6b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29422845-nkchr" Dec 10 12:45:00 crc kubenswrapper[4689]: I1210 12:45:00.237997 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wmg54\" (UniqueName: \"kubernetes.io/projected/82df9750-fefd-4ecc-aca9-059d0db14d6b-kube-api-access-wmg54\") pod \"collect-profiles-29422845-nkchr\" (UID: \"82df9750-fefd-4ecc-aca9-059d0db14d6b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29422845-nkchr" Dec 10 12:45:00 crc kubenswrapper[4689]: I1210 12:45:00.339330 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wmg54\" (UniqueName: \"kubernetes.io/projected/82df9750-fefd-4ecc-aca9-059d0db14d6b-kube-api-access-wmg54\") pod \"collect-profiles-29422845-nkchr\" (UID: \"82df9750-fefd-4ecc-aca9-059d0db14d6b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29422845-nkchr" Dec 10 12:45:00 crc kubenswrapper[4689]: I1210 12:45:00.339458 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/82df9750-fefd-4ecc-aca9-059d0db14d6b-secret-volume\") pod \"collect-profiles-29422845-nkchr\" (UID: \"82df9750-fefd-4ecc-aca9-059d0db14d6b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29422845-nkchr" Dec 10 12:45:00 crc kubenswrapper[4689]: I1210 12:45:00.339481 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/82df9750-fefd-4ecc-aca9-059d0db14d6b-config-volume\") pod \"collect-profiles-29422845-nkchr\" (UID: \"82df9750-fefd-4ecc-aca9-059d0db14d6b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29422845-nkchr" Dec 10 12:45:00 crc kubenswrapper[4689]: I1210 12:45:00.340229 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/82df9750-fefd-4ecc-aca9-059d0db14d6b-config-volume\") pod 
\"collect-profiles-29422845-nkchr\" (UID: \"82df9750-fefd-4ecc-aca9-059d0db14d6b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29422845-nkchr" Dec 10 12:45:00 crc kubenswrapper[4689]: I1210 12:45:00.355341 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/82df9750-fefd-4ecc-aca9-059d0db14d6b-secret-volume\") pod \"collect-profiles-29422845-nkchr\" (UID: \"82df9750-fefd-4ecc-aca9-059d0db14d6b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29422845-nkchr" Dec 10 12:45:00 crc kubenswrapper[4689]: I1210 12:45:00.359454 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wmg54\" (UniqueName: \"kubernetes.io/projected/82df9750-fefd-4ecc-aca9-059d0db14d6b-kube-api-access-wmg54\") pod \"collect-profiles-29422845-nkchr\" (UID: \"82df9750-fefd-4ecc-aca9-059d0db14d6b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29422845-nkchr" Dec 10 12:45:00 crc kubenswrapper[4689]: I1210 12:45:00.465839 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29422845-nkchr" Dec 10 12:45:00 crc kubenswrapper[4689]: I1210 12:45:00.510491 4689 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0a323df3-5a3b-41cb-afc0-cbd2e4933ec0" path="/var/lib/kubelet/pods/0a323df3-5a3b-41cb-afc0-cbd2e4933ec0/volumes" Dec 10 12:45:00 crc kubenswrapper[4689]: I1210 12:45:00.511311 4689 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9880fdc4-4a6b-4353-9b05-fefd96248c09" path="/var/lib/kubelet/pods/9880fdc4-4a6b-4353-9b05-fefd96248c09/volumes" Dec 10 12:45:00 crc kubenswrapper[4689]: I1210 12:45:00.943148 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29422845-nkchr"] Dec 10 12:45:01 crc kubenswrapper[4689]: I1210 12:45:01.347553 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29422845-nkchr" event={"ID":"82df9750-fefd-4ecc-aca9-059d0db14d6b","Type":"ContainerStarted","Data":"a97055b6a7f60a6b47074c1c0b2e0c78ef28d9d0b4fe55cbb3758f1bf254bccb"} Dec 10 12:45:02 crc kubenswrapper[4689]: I1210 12:45:02.356082 4689 generic.go:334] "Generic (PLEG): container finished" podID="82df9750-fefd-4ecc-aca9-059d0db14d6b" containerID="e2d86ab9a210d7babfdd48c855047a0bdc5c1579a275a2e52ca0505a013d05fa" exitCode=0 Dec 10 12:45:02 crc kubenswrapper[4689]: I1210 12:45:02.356267 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29422845-nkchr" event={"ID":"82df9750-fefd-4ecc-aca9-059d0db14d6b","Type":"ContainerDied","Data":"e2d86ab9a210d7babfdd48c855047a0bdc5c1579a275a2e52ca0505a013d05fa"} Dec 10 12:45:03 crc kubenswrapper[4689]: I1210 12:45:03.775800 4689 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29422845-nkchr" Dec 10 12:45:03 crc kubenswrapper[4689]: I1210 12:45:03.908928 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/82df9750-fefd-4ecc-aca9-059d0db14d6b-secret-volume\") pod \"82df9750-fefd-4ecc-aca9-059d0db14d6b\" (UID: \"82df9750-fefd-4ecc-aca9-059d0db14d6b\") " Dec 10 12:45:03 crc kubenswrapper[4689]: I1210 12:45:03.909051 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/82df9750-fefd-4ecc-aca9-059d0db14d6b-config-volume\") pod \"82df9750-fefd-4ecc-aca9-059d0db14d6b\" (UID: \"82df9750-fefd-4ecc-aca9-059d0db14d6b\") " Dec 10 12:45:03 crc kubenswrapper[4689]: I1210 12:45:03.909115 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wmg54\" (UniqueName: \"kubernetes.io/projected/82df9750-fefd-4ecc-aca9-059d0db14d6b-kube-api-access-wmg54\") pod \"82df9750-fefd-4ecc-aca9-059d0db14d6b\" (UID: \"82df9750-fefd-4ecc-aca9-059d0db14d6b\") " Dec 10 12:45:03 crc kubenswrapper[4689]: I1210 12:45:03.911049 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/82df9750-fefd-4ecc-aca9-059d0db14d6b-config-volume" (OuterVolumeSpecName: "config-volume") pod "82df9750-fefd-4ecc-aca9-059d0db14d6b" (UID: "82df9750-fefd-4ecc-aca9-059d0db14d6b"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 12:45:03 crc kubenswrapper[4689]: I1210 12:45:03.916508 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/82df9750-fefd-4ecc-aca9-059d0db14d6b-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "82df9750-fefd-4ecc-aca9-059d0db14d6b" (UID: "82df9750-fefd-4ecc-aca9-059d0db14d6b"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 12:45:03 crc kubenswrapper[4689]: I1210 12:45:03.916738 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/82df9750-fefd-4ecc-aca9-059d0db14d6b-kube-api-access-wmg54" (OuterVolumeSpecName: "kube-api-access-wmg54") pod "82df9750-fefd-4ecc-aca9-059d0db14d6b" (UID: "82df9750-fefd-4ecc-aca9-059d0db14d6b"). InnerVolumeSpecName "kube-api-access-wmg54". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 12:45:04 crc kubenswrapper[4689]: I1210 12:45:04.011798 4689 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/82df9750-fefd-4ecc-aca9-059d0db14d6b-config-volume\") on node \"crc\" DevicePath \"\"" Dec 10 12:45:04 crc kubenswrapper[4689]: I1210 12:45:04.011846 4689 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/82df9750-fefd-4ecc-aca9-059d0db14d6b-secret-volume\") on node \"crc\" DevicePath \"\"" Dec 10 12:45:04 crc kubenswrapper[4689]: I1210 12:45:04.011865 4689 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wmg54\" (UniqueName: \"kubernetes.io/projected/82df9750-fefd-4ecc-aca9-059d0db14d6b-kube-api-access-wmg54\") on node \"crc\" DevicePath \"\"" Dec 10 12:45:04 crc kubenswrapper[4689]: I1210 12:45:04.377992 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29422845-nkchr" event={"ID":"82df9750-fefd-4ecc-aca9-059d0db14d6b","Type":"ContainerDied","Data":"a97055b6a7f60a6b47074c1c0b2e0c78ef28d9d0b4fe55cbb3758f1bf254bccb"} Dec 10 12:45:04 crc kubenswrapper[4689]: I1210 12:45:04.378027 4689 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a97055b6a7f60a6b47074c1c0b2e0c78ef28d9d0b4fe55cbb3758f1bf254bccb" Dec 10 12:45:04 crc kubenswrapper[4689]: I1210 12:45:04.378080 4689 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29422845-nkchr" Dec 10 12:45:14 crc kubenswrapper[4689]: I1210 12:45:14.498202 4689 scope.go:117] "RemoveContainer" containerID="12c9249e05008e8640dad38e14c40fbe87bf771f5c2f12128bca16301c4173e9" Dec 10 12:45:14 crc kubenswrapper[4689]: E1210 12:45:14.499031 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-db6zk_openshift-machine-config-operator(a41ebdcd-910f-4669-992d-296e1a92162b)\"" pod="openshift-machine-config-operator/machine-config-daemon-db6zk" podUID="a41ebdcd-910f-4669-992d-296e1a92162b" Dec 10 12:45:23 crc kubenswrapper[4689]: I1210 12:45:23.044367 4689 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-5vjjb"] Dec 10 12:45:23 crc kubenswrapper[4689]: I1210 12:45:23.058375 4689 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-5vjjb"] Dec 10 12:45:24 crc kubenswrapper[4689]: I1210 12:45:24.509453 4689 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c0c1f5b6-e80c-4950-9b6e-181733099c57" path="/var/lib/kubelet/pods/c0c1f5b6-e80c-4950-9b6e-181733099c57/volumes" Dec 10 12:45:25 crc kubenswrapper[4689]: I1210 12:45:25.047214 4689 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-sync-dkrw7"] Dec 10 12:45:25 crc kubenswrapper[4689]: I1210 12:45:25.060435 4689 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-sync-dkrw7"] Dec 10 12:45:26 crc kubenswrapper[4689]: I1210 12:45:26.032437 4689 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-sync-7xjms"] Dec 10 12:45:26 crc kubenswrapper[4689]: I1210 12:45:26.042432 4689 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-sync-kb47t"] Dec 10 12:45:26 crc 
kubenswrapper[4689]: I1210 12:45:26.053337 4689 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-sync-kb47t"] Dec 10 12:45:26 crc kubenswrapper[4689]: I1210 12:45:26.061621 4689 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-sync-7xjms"] Dec 10 12:45:26 crc kubenswrapper[4689]: I1210 12:45:26.505202 4689 scope.go:117] "RemoveContainer" containerID="12c9249e05008e8640dad38e14c40fbe87bf771f5c2f12128bca16301c4173e9" Dec 10 12:45:26 crc kubenswrapper[4689]: E1210 12:45:26.505431 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-db6zk_openshift-machine-config-operator(a41ebdcd-910f-4669-992d-296e1a92162b)\"" pod="openshift-machine-config-operator/machine-config-daemon-db6zk" podUID="a41ebdcd-910f-4669-992d-296e1a92162b" Dec 10 12:45:26 crc kubenswrapper[4689]: I1210 12:45:26.514428 4689 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="79d420d1-6ba7-4cf2-9e13-b046a65d378c" path="/var/lib/kubelet/pods/79d420d1-6ba7-4cf2-9e13-b046a65d378c/volumes" Dec 10 12:45:26 crc kubenswrapper[4689]: I1210 12:45:26.514999 4689 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ae5c2d84-08cd-462f-b8d5-ba416353f365" path="/var/lib/kubelet/pods/ae5c2d84-08cd-462f-b8d5-ba416353f365/volumes" Dec 10 12:45:26 crc kubenswrapper[4689]: I1210 12:45:26.515569 4689 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e10d322a-3fb7-451d-9a38-f2659e3d32e5" path="/var/lib/kubelet/pods/e10d322a-3fb7-451d-9a38-f2659e3d32e5/volumes" Dec 10 12:45:39 crc kubenswrapper[4689]: I1210 12:45:39.032304 4689 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-sync-57ffb"] Dec 10 12:45:39 crc kubenswrapper[4689]: I1210 12:45:39.041786 4689 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-sync-57ffb"] Dec 10 12:45:40 crc kubenswrapper[4689]: I1210 12:45:40.134952 4689 scope.go:117] "RemoveContainer" containerID="183716ddac517cdb6ec9ef6c651e5b365b096e474296f7200711cbe660f28b56" Dec 10 12:45:40 crc kubenswrapper[4689]: I1210 12:45:40.157384 4689 scope.go:117] "RemoveContainer" containerID="05a0f473503937b36731f0677cfdebfd2c9dcd7883a18349c913d982b605df10" Dec 10 12:45:40 crc kubenswrapper[4689]: I1210 12:45:40.213324 4689 scope.go:117] "RemoveContainer" containerID="0548c773f6e6ae05b1579f3976f9c2253e028d6eee0257a53211b444bc340ae0" Dec 10 12:45:40 crc kubenswrapper[4689]: I1210 12:45:40.269034 4689 scope.go:117] "RemoveContainer" containerID="182e5847d4d581a645c0b6bcdbb1ce522d424a83fdd9ae24ea604dedec83ffd9" Dec 10 12:45:40 crc kubenswrapper[4689]: I1210 12:45:40.349025 4689 scope.go:117] "RemoveContainer" containerID="d38bf7b66608e6c3d7f782ed0b0e1e26001a4bbea79285f0ebc5819b4503ac2d" Dec 10 12:45:40 crc kubenswrapper[4689]: I1210 12:45:40.403012 4689 scope.go:117] "RemoveContainer" containerID="9da03645378f4c973322bd49c9e58d79583b7fc7d26fc3b956f141b25b1c401b" Dec 10 12:45:40 crc kubenswrapper[4689]: I1210 12:45:40.447831 4689 scope.go:117] "RemoveContainer" containerID="8c1f6cc7183b4deb370578c6dcd58cf3d165147b49fbbff79fa67f6462275a50" Dec 10 12:45:40 crc kubenswrapper[4689]: I1210 12:45:40.479773 4689 scope.go:117] "RemoveContainer" containerID="b77fe9cd6bbda252985911173368b42469ec7d4e39ef06da3ae67b3bb7072e5d" Dec 10 12:45:40 crc kubenswrapper[4689]: I1210 12:45:40.498454 4689 scope.go:117] 
"RemoveContainer" containerID="12c9249e05008e8640dad38e14c40fbe87bf771f5c2f12128bca16301c4173e9" Dec 10 12:45:40 crc kubenswrapper[4689]: E1210 12:45:40.498783 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-db6zk_openshift-machine-config-operator(a41ebdcd-910f-4669-992d-296e1a92162b)\"" pod="openshift-machine-config-operator/machine-config-daemon-db6zk" podUID="a41ebdcd-910f-4669-992d-296e1a92162b" Dec 10 12:45:40 crc kubenswrapper[4689]: I1210 12:45:40.502140 4689 scope.go:117] "RemoveContainer" containerID="25b21492f606c454d60e2873c4318710c21760174d91f0adb977e3ce74ea3174" Dec 10 12:45:40 crc kubenswrapper[4689]: I1210 12:45:40.512320 4689 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eb62506f-d5a5-44b0-8da3-125128211e10" path="/var/lib/kubelet/pods/eb62506f-d5a5-44b0-8da3-125128211e10/volumes" Dec 10 12:45:40 crc kubenswrapper[4689]: I1210 12:45:40.533638 4689 scope.go:117] "RemoveContainer" containerID="38ee574efe9ad80ddcb9269b1b2c3ca75c2de074710a3a5f0909610f7224db31" Dec 10 12:45:40 crc kubenswrapper[4689]: I1210 12:45:40.573249 4689 scope.go:117] "RemoveContainer" containerID="8a94fd552ebe3de425adc247c9e27347dee701869c5a3835d40466b288871234" Dec 10 12:45:40 crc kubenswrapper[4689]: I1210 12:45:40.597296 4689 scope.go:117] "RemoveContainer" containerID="22dc2abd908f65340ee1be610486603b9c5c5f41097bb4b1aa066b4b65657941" Dec 10 12:45:40 crc kubenswrapper[4689]: I1210 12:45:40.631255 4689 scope.go:117] "RemoveContainer" containerID="e2654ed5d1c4887a4f741ead5fa8bf6b724ff716a69eee105059c04b4c509305" Dec 10 12:45:48 crc kubenswrapper[4689]: I1210 12:45:48.060011 4689 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ironic-inspector-db-create-zrpxl"] Dec 10 12:45:48 crc kubenswrapper[4689]: I1210 12:45:48.072696 4689 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ironic-inspector-c6a0-account-create-update-sfzxs"] Dec 10 12:45:48 crc kubenswrapper[4689]: I1210 12:45:48.082169 4689 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ironic-inspector-db-create-zrpxl"] Dec 10 12:45:48 crc kubenswrapper[4689]: I1210 12:45:48.090103 4689 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ironic-inspector-c6a0-account-create-update-sfzxs"] Dec 10 12:45:48 crc kubenswrapper[4689]: I1210 12:45:48.509092 4689 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="af27799e-026b-4c74-83f1-1336db02850f" path="/var/lib/kubelet/pods/af27799e-026b-4c74-83f1-1336db02850f/volumes" Dec 10 12:45:48 crc kubenswrapper[4689]: I1210 12:45:48.509942 4689 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d1ba82e4-cee0-4da9-a138-f860c8f1e274" path="/var/lib/kubelet/pods/d1ba82e4-cee0-4da9-a138-f860c8f1e274/volumes" Dec 10 12:45:55 crc kubenswrapper[4689]: I1210 12:45:55.498553 4689 scope.go:117] "RemoveContainer" containerID="12c9249e05008e8640dad38e14c40fbe87bf771f5c2f12128bca16301c4173e9" Dec 10 12:45:55 crc kubenswrapper[4689]: E1210 12:45:55.499230 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-db6zk_openshift-machine-config-operator(a41ebdcd-910f-4669-992d-296e1a92162b)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-db6zk" podUID="a41ebdcd-910f-4669-992d-296e1a92162b" Dec 10 12:46:06 crc kubenswrapper[4689]: I1210 12:46:06.498660 4689 scope.go:117] "RemoveContainer" containerID="12c9249e05008e8640dad38e14c40fbe87bf771f5c2f12128bca16301c4173e9" Dec 10 12:46:06 crc kubenswrapper[4689]: E1210 12:46:06.499440 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-db6zk_openshift-machine-config-operator(a41ebdcd-910f-4669-992d-296e1a92162b)\"" pod="openshift-machine-config-operator/machine-config-daemon-db6zk" podUID="a41ebdcd-910f-4669-992d-296e1a92162b" Dec 10 12:46:20 crc kubenswrapper[4689]: I1210 12:46:20.499017 4689 scope.go:117] "RemoveContainer" containerID="12c9249e05008e8640dad38e14c40fbe87bf771f5c2f12128bca16301c4173e9" Dec 10 12:46:22 crc kubenswrapper[4689]: I1210 12:46:22.161642 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-db6zk" event={"ID":"a41ebdcd-910f-4669-992d-296e1a92162b","Type":"ContainerStarted","Data":"5d1e2ae8645d18ab83e20a9f28e88007d921095b25a0a5bebc871640151a35c6"} Dec 10 12:46:40 crc kubenswrapper[4689]: I1210 12:46:40.866028 4689 scope.go:117] "RemoveContainer" containerID="c7ca18573d3037aceaba7ed840f7aed25cb41aefc67fd11389af7aa62e606c8b" Dec 10 12:46:40 crc kubenswrapper[4689]: I1210 12:46:40.913656 4689 scope.go:117] "RemoveContainer" containerID="87150d3b6da66440a345aee3c33cceae7adceb98fd9caa4d4dece11f23a91401" Dec 10 12:46:40 crc kubenswrapper[4689]: I1210 12:46:40.976438 4689 scope.go:117] "RemoveContainer" containerID="18b19fd36e5055305ff21b5ff5fefa1763ca3ef37008582f7691b33935a74f10" Dec 10 12:46:57 crc kubenswrapper[4689]: I1210 12:46:57.057411 4689 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-db-create-7fsv5"] Dec 10 12:46:57 crc kubenswrapper[4689]: I1210 12:46:57.074874 4689 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-db-create-wqhs6"] Dec 10 12:46:57 crc kubenswrapper[4689]: I1210 12:46:57.087131 4689 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-db-create-7fsv5"] Dec 10 12:46:57 crc kubenswrapper[4689]: I1210 12:46:57.097970 4689 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-db-create-wqhs6"] Dec 10 12:46:58 crc kubenswrapper[4689]: I1210 12:46:58.047386 4689 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-35fe-account-create-update-9th5x"] Dec 10 12:46:58 crc kubenswrapper[4689]: I1210 12:46:58.062433 4689 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-8ec7-account-create-update-bztj5"] Dec 10 12:46:58 crc kubenswrapper[4689]: I1210 12:46:58.070714 4689 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-8ec7-account-create-update-bztj5"] Dec 10 12:46:58 crc kubenswrapper[4689]: I1210 12:46:58.080057 4689 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-db-create-thck8"] Dec 10 12:46:58 crc kubenswrapper[4689]: I1210 12:46:58.088743 4689 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-00e7-account-create-update-bbfqf"] Dec 10 12:46:58 crc kubenswrapper[4689]: I1210 12:46:58.096764 4689 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-35fe-account-create-update-9th5x"] Dec 10 12:46:58 
crc kubenswrapper[4689]: I1210 12:46:58.107442 4689 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-00e7-account-create-update-bbfqf"] Dec 10 12:46:58 crc kubenswrapper[4689]: I1210 12:46:58.113763 4689 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-db-create-thck8"] Dec 10 12:46:58 crc kubenswrapper[4689]: I1210 12:46:58.515808 4689 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="271d3ee1-d2ae-41da-95bf-85a9c45dfad5" path="/var/lib/kubelet/pods/271d3ee1-d2ae-41da-95bf-85a9c45dfad5/volumes" Dec 10 12:46:58 crc kubenswrapper[4689]: I1210 12:46:58.516592 4689 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7c5ed288-dbbc-4c61-bd51-9c4b43375ad5" path="/var/lib/kubelet/pods/7c5ed288-dbbc-4c61-bd51-9c4b43375ad5/volumes" Dec 10 12:46:58 crc kubenswrapper[4689]: I1210 12:46:58.517244 4689 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c4d9408c-062f-45ad-a393-da20c66d7d40" path="/var/lib/kubelet/pods/c4d9408c-062f-45ad-a393-da20c66d7d40/volumes" Dec 10 12:46:58 crc kubenswrapper[4689]: I1210 12:46:58.517753 4689 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cfee700e-fe0c-4e0b-90f3-ee2741a787ba" path="/var/lib/kubelet/pods/cfee700e-fe0c-4e0b-90f3-ee2741a787ba/volumes" Dec 10 12:46:58 crc kubenswrapper[4689]: I1210 12:46:58.518788 4689 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eddd02fa-d59c-407a-80d9-6dfe1066ac88" path="/var/lib/kubelet/pods/eddd02fa-d59c-407a-80d9-6dfe1066ac88/volumes" Dec 10 12:46:58 crc kubenswrapper[4689]: I1210 12:46:58.519334 4689 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f0eca891-5410-41d4-a578-61f77c7f5978" path="/var/lib/kubelet/pods/f0eca891-5410-41d4-a578-61f77c7f5978/volumes" Dec 10 12:47:25 crc kubenswrapper[4689]: I1210 12:47:25.063246 4689 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-b6qlt"] Dec 10 12:47:25 crc kubenswrapper[4689]: I1210 12:47:25.078452 4689 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-b6qlt"] Dec 10 12:47:26 crc kubenswrapper[4689]: I1210 12:47:26.522604 4689 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e38a3466-6fce-418f-9db5-5da12b7fdf2b" path="/var/lib/kubelet/pods/e38a3466-6fce-418f-9db5-5da12b7fdf2b/volumes" Dec 10 12:47:40 crc kubenswrapper[4689]: I1210 12:47:40.762635 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-grfmr/must-gather-dckb7"] Dec 10 12:47:40 crc kubenswrapper[4689]: E1210 12:47:40.763708 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="82df9750-fefd-4ecc-aca9-059d0db14d6b" containerName="collect-profiles" Dec 10 12:47:40 crc kubenswrapper[4689]: I1210 12:47:40.763724 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="82df9750-fefd-4ecc-aca9-059d0db14d6b" containerName="collect-profiles" Dec 10 12:47:40 crc kubenswrapper[4689]: I1210 12:47:40.766155 4689 memory_manager.go:354] "RemoveStaleState removing state" podUID="82df9750-fefd-4ecc-aca9-059d0db14d6b" containerName="collect-profiles" Dec 10 12:47:40 crc kubenswrapper[4689]: I1210 12:47:40.767659 4689 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-grfmr/must-gather-dckb7"
Dec 10 12:47:40 crc kubenswrapper[4689]: I1210 12:47:40.772991 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-grfmr"/"kube-root-ca.crt"
Dec 10 12:47:40 crc kubenswrapper[4689]: I1210 12:47:40.773119 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-grfmr"/"openshift-service-ca.crt"
Dec 10 12:47:40 crc kubenswrapper[4689]: I1210 12:47:40.791609 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-grfmr/must-gather-dckb7"]
Dec 10 12:47:40 crc kubenswrapper[4689]: I1210 12:47:40.879335 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6vvmt\" (UniqueName: \"kubernetes.io/projected/c6ddd7cf-0ec4-4f4d-a599-5a8620ebada4-kube-api-access-6vvmt\") pod \"must-gather-dckb7\" (UID: \"c6ddd7cf-0ec4-4f4d-a599-5a8620ebada4\") " pod="openshift-must-gather-grfmr/must-gather-dckb7"
Dec 10 12:47:40 crc kubenswrapper[4689]: I1210 12:47:40.879442 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/c6ddd7cf-0ec4-4f4d-a599-5a8620ebada4-must-gather-output\") pod \"must-gather-dckb7\" (UID: \"c6ddd7cf-0ec4-4f4d-a599-5a8620ebada4\") " pod="openshift-must-gather-grfmr/must-gather-dckb7"
Dec 10 12:47:40 crc kubenswrapper[4689]: I1210 12:47:40.981302 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/c6ddd7cf-0ec4-4f4d-a599-5a8620ebada4-must-gather-output\") pod \"must-gather-dckb7\" (UID: \"c6ddd7cf-0ec4-4f4d-a599-5a8620ebada4\") " pod="openshift-must-gather-grfmr/must-gather-dckb7"
Dec 10 12:47:40 crc kubenswrapper[4689]: I1210 12:47:40.981448 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6vvmt\" (UniqueName: \"kubernetes.io/projected/c6ddd7cf-0ec4-4f4d-a599-5a8620ebada4-kube-api-access-6vvmt\") pod \"must-gather-dckb7\" (UID: \"c6ddd7cf-0ec4-4f4d-a599-5a8620ebada4\") " pod="openshift-must-gather-grfmr/must-gather-dckb7"
Dec 10 12:47:40 crc kubenswrapper[4689]: I1210 12:47:40.981760 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/c6ddd7cf-0ec4-4f4d-a599-5a8620ebada4-must-gather-output\") pod \"must-gather-dckb7\" (UID: \"c6ddd7cf-0ec4-4f4d-a599-5a8620ebada4\") " pod="openshift-must-gather-grfmr/must-gather-dckb7"
Dec 10 12:47:40 crc kubenswrapper[4689]: I1210 12:47:40.999500 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6vvmt\" (UniqueName: \"kubernetes.io/projected/c6ddd7cf-0ec4-4f4d-a599-5a8620ebada4-kube-api-access-6vvmt\") pod \"must-gather-dckb7\" (UID: \"c6ddd7cf-0ec4-4f4d-a599-5a8620ebada4\") " pod="openshift-must-gather-grfmr/must-gather-dckb7"
Dec 10 12:47:41 crc kubenswrapper[4689]: I1210 12:47:41.084138 4689 scope.go:117] "RemoveContainer" containerID="a1502c4cc82b2b45f333cab38f291ab541350233ba82944031ed0b75d522be6d"
Dec 10 12:47:41 crc kubenswrapper[4689]: I1210 12:47:41.092819 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-grfmr/must-gather-dckb7"
Dec 10 12:47:41 crc kubenswrapper[4689]: I1210 12:47:41.118157 4689 scope.go:117] "RemoveContainer" containerID="7f7c9c7ff56b974a505028f8d4910abdef7b1ffd336c2242a335396661d88a9f"
Dec 10 12:47:41 crc kubenswrapper[4689]: I1210 12:47:41.150639 4689 scope.go:117] "RemoveContainer" containerID="57810c6ae168d22a71f107ae566ffb55109fc6f930a8c429f1714a6405afbff2"
Dec 10 12:47:41 crc kubenswrapper[4689]: I1210 12:47:41.243517 4689 scope.go:117] "RemoveContainer" containerID="813d320b91fed1fb1d627b8792922d3200812224c046a96e9f30db6aa4ad4bac"
Dec 10 12:47:41 crc kubenswrapper[4689]: I1210 12:47:41.284128 4689 scope.go:117] "RemoveContainer" containerID="dfe87742bc67dd4d06c3db6d35f94285d0a41d12dcb2f091758ee524927c9902"
Dec 10 12:47:41 crc kubenswrapper[4689]: I1210 12:47:41.309013 4689 scope.go:117] "RemoveContainer" containerID="31f49edacda83a9151fe34977daae05f77299a944bc9cafa24da31cd6aee7f83"
Dec 10 12:47:41 crc kubenswrapper[4689]: I1210 12:47:41.353786 4689 scope.go:117] "RemoveContainer" containerID="229c35ec4bf7af7fac173475a3b264bfe2e88214af207f1545af54ebdcbc9a20"
Dec 10 12:47:41 crc kubenswrapper[4689]: I1210 12:47:41.590019 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-grfmr/must-gather-dckb7"]
Dec 10 12:47:41 crc kubenswrapper[4689]: I1210 12:47:41.609039 4689 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Dec 10 12:47:42 crc kubenswrapper[4689]: I1210 12:47:42.026638 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-grfmr/must-gather-dckb7" event={"ID":"c6ddd7cf-0ec4-4f4d-a599-5a8620ebada4","Type":"ContainerStarted","Data":"27345904fd1eb9b87ed4656631d83cfa5387a074bc8b32fae6e13bd97cd9ebe0"}
Dec 10 12:47:42 crc kubenswrapper[4689]: I1210 12:47:42.058843 4689 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-cell-mapping-p6g5l"]
Dec 10 12:47:42 crc kubenswrapper[4689]: I1210 12:47:42.066636 4689 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-cell-mapping-p6g5l"]
Dec 10 12:47:42 crc kubenswrapper[4689]: I1210 12:47:42.525755 4689 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c20f1556-95fd-4488-bb5b-5dda218a55bf" path="/var/lib/kubelet/pods/c20f1556-95fd-4488-bb5b-5dda218a55bf/volumes"
Dec 10 12:47:48 crc kubenswrapper[4689]: I1210 12:47:48.032939 4689 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-njfxw"]
Dec 10 12:47:48 crc kubenswrapper[4689]: I1210 12:47:48.043911 4689 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-njfxw"]
Dec 10 12:47:48 crc kubenswrapper[4689]: I1210 12:47:48.509107 4689 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="099d539b-5f0e-41e2-b344-d7c10c52cf16" path="/var/lib/kubelet/pods/099d539b-5f0e-41e2-b344-d7c10c52cf16/volumes"
Dec 10 12:47:49 crc kubenswrapper[4689]: I1210 12:47:49.087239 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-grfmr/must-gather-dckb7" event={"ID":"c6ddd7cf-0ec4-4f4d-a599-5a8620ebada4","Type":"ContainerStarted","Data":"1d47a6c4bcb76e065def1fa1e8e7b3d8852d84a02090144f72d9fa2c2ddd352f"}
Dec 10 12:47:50 crc kubenswrapper[4689]: I1210 12:47:50.104749 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-grfmr/must-gather-dckb7" event={"ID":"c6ddd7cf-0ec4-4f4d-a599-5a8620ebada4","Type":"ContainerStarted","Data":"55cdc8c8dc6840a362fe6315f523ae05cf3bad1a23bcf4d825c9cd288c2da2d5"}
Dec 10 12:47:50 crc kubenswrapper[4689]: I1210 12:47:50.131020 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-grfmr/must-gather-dckb7" podStartSLOduration=3.175278398 podStartE2EDuration="10.130948776s" podCreationTimestamp="2025-12-10 12:47:40 +0000 UTC" firstStartedPulling="2025-12-10 12:47:41.608985578 +0000 UTC m=+1929.397066716" lastFinishedPulling="2025-12-10 12:47:48.564655946 +0000 UTC m=+1936.352737094" observedRunningTime="2025-12-10 12:47:50.125729777 +0000 UTC m=+1937.913810935" watchObservedRunningTime="2025-12-10 12:47:50.130948776 +0000 UTC m=+1937.919029954"
Dec 10 12:47:52 crc kubenswrapper[4689]: I1210 12:47:52.851926 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-grfmr/crc-debug-w4ntt"]
Dec 10 12:47:52 crc kubenswrapper[4689]: I1210 12:47:52.853993 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-grfmr/crc-debug-w4ntt"
Dec 10 12:47:52 crc kubenswrapper[4689]: I1210 12:47:52.857991 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-grfmr"/"default-dockercfg-w6xmg"
Dec 10 12:47:53 crc kubenswrapper[4689]: I1210 12:47:53.040692 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/e3e8f7bf-19af-4aaf-bdcf-64237347b964-host\") pod \"crc-debug-w4ntt\" (UID: \"e3e8f7bf-19af-4aaf-bdcf-64237347b964\") " pod="openshift-must-gather-grfmr/crc-debug-w4ntt"
Dec 10 12:47:53 crc kubenswrapper[4689]: I1210 12:47:53.041169 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t5v7l\" (UniqueName: \"kubernetes.io/projected/e3e8f7bf-19af-4aaf-bdcf-64237347b964-kube-api-access-t5v7l\") pod \"crc-debug-w4ntt\" (UID: \"e3e8f7bf-19af-4aaf-bdcf-64237347b964\") " pod="openshift-must-gather-grfmr/crc-debug-w4ntt"
Dec 10 12:47:53 crc kubenswrapper[4689]: I1210 12:47:53.143038 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t5v7l\" (UniqueName: \"kubernetes.io/projected/e3e8f7bf-19af-4aaf-bdcf-64237347b964-kube-api-access-t5v7l\") pod \"crc-debug-w4ntt\" (UID: \"e3e8f7bf-19af-4aaf-bdcf-64237347b964\") " pod="openshift-must-gather-grfmr/crc-debug-w4ntt"
Dec 10 12:47:53 crc kubenswrapper[4689]: I1210 12:47:53.143149 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/e3e8f7bf-19af-4aaf-bdcf-64237347b964-host\") pod \"crc-debug-w4ntt\" (UID: \"e3e8f7bf-19af-4aaf-bdcf-64237347b964\") " pod="openshift-must-gather-grfmr/crc-debug-w4ntt"
Dec 10 12:47:53 crc kubenswrapper[4689]: I1210 12:47:53.143309 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/e3e8f7bf-19af-4aaf-bdcf-64237347b964-host\") pod \"crc-debug-w4ntt\" (UID: \"e3e8f7bf-19af-4aaf-bdcf-64237347b964\") " pod="openshift-must-gather-grfmr/crc-debug-w4ntt"
Dec 10 12:47:53 crc kubenswrapper[4689]: I1210 12:47:53.161832 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t5v7l\" (UniqueName: \"kubernetes.io/projected/e3e8f7bf-19af-4aaf-bdcf-64237347b964-kube-api-access-t5v7l\") pod \"crc-debug-w4ntt\" (UID: \"e3e8f7bf-19af-4aaf-bdcf-64237347b964\") " pod="openshift-must-gather-grfmr/crc-debug-w4ntt"
Dec 10 12:47:53 crc kubenswrapper[4689]: I1210 12:47:53.182222 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-grfmr/crc-debug-w4ntt"
Dec 10 12:47:53 crc kubenswrapper[4689]: W1210 12:47:53.211161 4689 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode3e8f7bf_19af_4aaf_bdcf_64237347b964.slice/crio-0c08d59e2238c3772c5d2537e878f78dbac963b222790f43535cb1dd3786d0aa WatchSource:0}: Error finding container 0c08d59e2238c3772c5d2537e878f78dbac963b222790f43535cb1dd3786d0aa: Status 404 returned error can't find the container with id 0c08d59e2238c3772c5d2537e878f78dbac963b222790f43535cb1dd3786d0aa
Dec 10 12:47:53 crc kubenswrapper[4689]: I1210 12:47:53.339591 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-grfmr/crc-debug-w4ntt" event={"ID":"e3e8f7bf-19af-4aaf-bdcf-64237347b964","Type":"ContainerStarted","Data":"0c08d59e2238c3772c5d2537e878f78dbac963b222790f43535cb1dd3786d0aa"}
Dec 10 12:48:05 crc kubenswrapper[4689]: I1210 12:48:05.451302 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-grfmr/crc-debug-w4ntt" event={"ID":"e3e8f7bf-19af-4aaf-bdcf-64237347b964","Type":"ContainerStarted","Data":"8e720474e120ed4d2a724f804c4bebd73242bdd46209e08fdc216392caddd2b7"}
Dec 10 12:48:05 crc kubenswrapper[4689]: I1210 12:48:05.468076 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-grfmr/crc-debug-w4ntt" podStartSLOduration=2.424303848 podStartE2EDuration="13.468058314s" podCreationTimestamp="2025-12-10 12:47:52 +0000 UTC" firstStartedPulling="2025-12-10 12:47:53.213312422 +0000 UTC m=+1941.001393560" lastFinishedPulling="2025-12-10 12:48:04.257066888 +0000 UTC m=+1952.045148026" observedRunningTime="2025-12-10 12:48:05.463480892 +0000 UTC m=+1953.251562030" watchObservedRunningTime="2025-12-10 12:48:05.468058314 +0000 UTC m=+1953.256139452"
Dec 10 12:48:31 crc kubenswrapper[4689]: I1210 12:48:31.059814 4689 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-cell-mapping-stmjh"]
Dec 10 12:48:31 crc kubenswrapper[4689]: I1210 12:48:31.074213 4689 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-cell-mapping-stmjh"]
Dec 10 12:48:32 crc kubenswrapper[4689]: I1210 12:48:32.509676 4689 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7c9e105c-a2da-493c-ae4a-ad81b18dea23" path="/var/lib/kubelet/pods/7c9e105c-a2da-493c-ae4a-ad81b18dea23/volumes"
Dec 10 12:48:37 crc kubenswrapper[4689]: I1210 12:48:37.166847 4689 patch_prober.go:28] interesting pod/machine-config-daemon-db6zk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 10 12:48:37 crc kubenswrapper[4689]: I1210 12:48:37.167435 4689 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-db6zk" podUID="a41ebdcd-910f-4669-992d-296e1a92162b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 10 12:48:41 crc kubenswrapper[4689]: I1210 12:48:41.620744 4689 scope.go:117] "RemoveContainer" containerID="fcfb339e401c5ebefdaa0ff873d1eb3d984c1f97a58c80b5051aa52851379d98"
Dec 10 12:48:45 crc kubenswrapper[4689]: I1210 12:48:45.687880 4689 scope.go:117] "RemoveContainer" containerID="57464ded9ffcf5d191527e5861c612c4c133238d0605182b10fa483b262f6073"
Dec 10 12:48:45 crc kubenswrapper[4689]: I1210 12:48:45.726048 4689 scope.go:117] "RemoveContainer" containerID="53c4522104a48e2b0bfb326bb109e251b4f74d748f88b1a36811480dc9728aba"
Dec 10 12:49:07 crc kubenswrapper[4689]: I1210 12:49:07.167089 4689 patch_prober.go:28] interesting pod/machine-config-daemon-db6zk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 10 12:49:07 crc kubenswrapper[4689]: I1210 12:49:07.167511 4689 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-db6zk" podUID="a41ebdcd-910f-4669-992d-296e1a92162b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 10 12:49:15 crc kubenswrapper[4689]: I1210 12:49:15.165533 4689 generic.go:334] "Generic (PLEG): container finished" podID="e3e8f7bf-19af-4aaf-bdcf-64237347b964" containerID="8e720474e120ed4d2a724f804c4bebd73242bdd46209e08fdc216392caddd2b7" exitCode=0
Dec 10 12:49:15 crc kubenswrapper[4689]: I1210 12:49:15.166130 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-grfmr/crc-debug-w4ntt" event={"ID":"e3e8f7bf-19af-4aaf-bdcf-64237347b964","Type":"ContainerDied","Data":"8e720474e120ed4d2a724f804c4bebd73242bdd46209e08fdc216392caddd2b7"}
Dec 10 12:49:16 crc kubenswrapper[4689]: I1210 12:49:16.273102 4689 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-grfmr/crc-debug-w4ntt"
Dec 10 12:49:16 crc kubenswrapper[4689]: I1210 12:49:16.307398 4689 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-grfmr/crc-debug-w4ntt"]
Dec 10 12:49:16 crc kubenswrapper[4689]: I1210 12:49:16.318931 4689 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-grfmr/crc-debug-w4ntt"]
Dec 10 12:49:16 crc kubenswrapper[4689]: I1210 12:49:16.463825 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/e3e8f7bf-19af-4aaf-bdcf-64237347b964-host\") pod \"e3e8f7bf-19af-4aaf-bdcf-64237347b964\" (UID: \"e3e8f7bf-19af-4aaf-bdcf-64237347b964\") "
Dec 10 12:49:16 crc kubenswrapper[4689]: I1210 12:49:16.464180 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t5v7l\" (UniqueName: \"kubernetes.io/projected/e3e8f7bf-19af-4aaf-bdcf-64237347b964-kube-api-access-t5v7l\") pod \"e3e8f7bf-19af-4aaf-bdcf-64237347b964\" (UID: \"e3e8f7bf-19af-4aaf-bdcf-64237347b964\") "
Dec 10 12:49:16 crc kubenswrapper[4689]: I1210 12:49:16.463907 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e3e8f7bf-19af-4aaf-bdcf-64237347b964-host" (OuterVolumeSpecName: "host") pod "e3e8f7bf-19af-4aaf-bdcf-64237347b964" (UID: "e3e8f7bf-19af-4aaf-bdcf-64237347b964"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 10 12:49:16 crc kubenswrapper[4689]: I1210 12:49:16.464614 4689 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/e3e8f7bf-19af-4aaf-bdcf-64237347b964-host\") on node \"crc\" DevicePath \"\""
Dec 10 12:49:16 crc kubenswrapper[4689]: I1210 12:49:16.470251 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e3e8f7bf-19af-4aaf-bdcf-64237347b964-kube-api-access-t5v7l" (OuterVolumeSpecName: "kube-api-access-t5v7l") pod "e3e8f7bf-19af-4aaf-bdcf-64237347b964" (UID: "e3e8f7bf-19af-4aaf-bdcf-64237347b964"). InnerVolumeSpecName "kube-api-access-t5v7l". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 10 12:49:16 crc kubenswrapper[4689]: I1210 12:49:16.507449 4689 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e3e8f7bf-19af-4aaf-bdcf-64237347b964" path="/var/lib/kubelet/pods/e3e8f7bf-19af-4aaf-bdcf-64237347b964/volumes"
Dec 10 12:49:16 crc kubenswrapper[4689]: I1210 12:49:16.567647 4689 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t5v7l\" (UniqueName: \"kubernetes.io/projected/e3e8f7bf-19af-4aaf-bdcf-64237347b964-kube-api-access-t5v7l\") on node \"crc\" DevicePath \"\""
Dec 10 12:49:17 crc kubenswrapper[4689]: I1210 12:49:17.185354 4689 scope.go:117] "RemoveContainer" containerID="8e720474e120ed4d2a724f804c4bebd73242bdd46209e08fdc216392caddd2b7"
Dec 10 12:49:17 crc kubenswrapper[4689]: I1210 12:49:17.186010 4689 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-grfmr/crc-debug-w4ntt"
Dec 10 12:49:17 crc kubenswrapper[4689]: I1210 12:49:17.487567 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-grfmr/crc-debug-k5psl"]
Dec 10 12:49:17 crc kubenswrapper[4689]: E1210 12:49:17.488384 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e3e8f7bf-19af-4aaf-bdcf-64237347b964" containerName="container-00"
Dec 10 12:49:17 crc kubenswrapper[4689]: I1210 12:49:17.488401 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="e3e8f7bf-19af-4aaf-bdcf-64237347b964" containerName="container-00"
Dec 10 12:49:17 crc kubenswrapper[4689]: I1210 12:49:17.488625 4689 memory_manager.go:354] "RemoveStaleState removing state" podUID="e3e8f7bf-19af-4aaf-bdcf-64237347b964" containerName="container-00"
Dec 10 12:49:17 crc kubenswrapper[4689]: I1210 12:49:17.489455 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-grfmr/crc-debug-k5psl"
Dec 10 12:49:17 crc kubenswrapper[4689]: I1210 12:49:17.494859 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-grfmr"/"default-dockercfg-w6xmg"
Dec 10 12:49:17 crc kubenswrapper[4689]: I1210 12:49:17.584392 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/2950d722-7faf-4952-9526-a2e68828230c-host\") pod \"crc-debug-k5psl\" (UID: \"2950d722-7faf-4952-9526-a2e68828230c\") " pod="openshift-must-gather-grfmr/crc-debug-k5psl"
Dec 10 12:49:17 crc kubenswrapper[4689]: I1210 12:49:17.584584 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5f4qc\" (UniqueName: \"kubernetes.io/projected/2950d722-7faf-4952-9526-a2e68828230c-kube-api-access-5f4qc\") pod \"crc-debug-k5psl\" (UID: \"2950d722-7faf-4952-9526-a2e68828230c\") " pod="openshift-must-gather-grfmr/crc-debug-k5psl"
Dec 10 12:49:17 crc kubenswrapper[4689]: I1210 12:49:17.686852 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/2950d722-7faf-4952-9526-a2e68828230c-host\") pod \"crc-debug-k5psl\" (UID: \"2950d722-7faf-4952-9526-a2e68828230c\") " pod="openshift-must-gather-grfmr/crc-debug-k5psl"
Dec 10 12:49:17 crc kubenswrapper[4689]: I1210 12:49:17.687006 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5f4qc\" (UniqueName: \"kubernetes.io/projected/2950d722-7faf-4952-9526-a2e68828230c-kube-api-access-5f4qc\") pod \"crc-debug-k5psl\" (UID: \"2950d722-7faf-4952-9526-a2e68828230c\") " pod="openshift-must-gather-grfmr/crc-debug-k5psl"
Dec 10 12:49:17 crc kubenswrapper[4689]: I1210 12:49:17.687228 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/2950d722-7faf-4952-9526-a2e68828230c-host\") pod \"crc-debug-k5psl\" (UID: \"2950d722-7faf-4952-9526-a2e68828230c\") " pod="openshift-must-gather-grfmr/crc-debug-k5psl"
Dec 10 12:49:17 crc kubenswrapper[4689]: I1210 12:49:17.714844 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5f4qc\" (UniqueName: \"kubernetes.io/projected/2950d722-7faf-4952-9526-a2e68828230c-kube-api-access-5f4qc\") pod \"crc-debug-k5psl\" (UID: \"2950d722-7faf-4952-9526-a2e68828230c\") " pod="openshift-must-gather-grfmr/crc-debug-k5psl"
Dec 10 12:49:17 crc kubenswrapper[4689]: I1210 12:49:17.806483 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-grfmr/crc-debug-k5psl"
Dec 10 12:49:18 crc kubenswrapper[4689]: I1210 12:49:18.206553 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-grfmr/crc-debug-k5psl" event={"ID":"2950d722-7faf-4952-9526-a2e68828230c","Type":"ContainerStarted","Data":"fa6af3bf10e9a449a42fdc81b7d1da7abc75c5fab17e1b778a7df26a079df499"}
Dec 10 12:49:18 crc kubenswrapper[4689]: I1210 12:49:18.209059 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-grfmr/crc-debug-k5psl" event={"ID":"2950d722-7faf-4952-9526-a2e68828230c","Type":"ContainerStarted","Data":"330f5664137a4d7236ae424e475589a1d1c03af4ff5539851a51843737250084"}
Dec 10 12:49:18 crc kubenswrapper[4689]: I1210 12:49:18.226953 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-grfmr/crc-debug-k5psl" podStartSLOduration=1.226936648 podStartE2EDuration="1.226936648s" podCreationTimestamp="2025-12-10 12:49:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 12:49:18.224865646 +0000 UTC m=+2026.012946804" watchObservedRunningTime="2025-12-10 12:49:18.226936648 +0000 UTC m=+2026.015017786"
Dec 10 12:49:20 crc kubenswrapper[4689]: I1210 12:49:20.228147 4689 generic.go:334] "Generic (PLEG): container finished" podID="2950d722-7faf-4952-9526-a2e68828230c" containerID="fa6af3bf10e9a449a42fdc81b7d1da7abc75c5fab17e1b778a7df26a079df499" exitCode=0
Dec 10 12:49:20 crc kubenswrapper[4689]: I1210 12:49:20.228186 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-grfmr/crc-debug-k5psl" event={"ID":"2950d722-7faf-4952-9526-a2e68828230c","Type":"ContainerDied","Data":"fa6af3bf10e9a449a42fdc81b7d1da7abc75c5fab17e1b778a7df26a079df499"}
Dec 10 12:49:21 crc kubenswrapper[4689]: I1210 12:49:21.340270 4689 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-grfmr/crc-debug-k5psl"
Dec 10 12:49:21 crc kubenswrapper[4689]: I1210 12:49:21.358454 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5f4qc\" (UniqueName: \"kubernetes.io/projected/2950d722-7faf-4952-9526-a2e68828230c-kube-api-access-5f4qc\") pod \"2950d722-7faf-4952-9526-a2e68828230c\" (UID: \"2950d722-7faf-4952-9526-a2e68828230c\") "
Dec 10 12:49:21 crc kubenswrapper[4689]: I1210 12:49:21.358618 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/2950d722-7faf-4952-9526-a2e68828230c-host\") pod \"2950d722-7faf-4952-9526-a2e68828230c\" (UID: \"2950d722-7faf-4952-9526-a2e68828230c\") "
Dec 10 12:49:21 crc kubenswrapper[4689]: I1210 12:49:21.358888 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2950d722-7faf-4952-9526-a2e68828230c-host" (OuterVolumeSpecName: "host") pod "2950d722-7faf-4952-9526-a2e68828230c" (UID: "2950d722-7faf-4952-9526-a2e68828230c"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 10 12:49:21 crc kubenswrapper[4689]: I1210 12:49:21.359145 4689 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/2950d722-7faf-4952-9526-a2e68828230c-host\") on node \"crc\" DevicePath \"\""
Dec 10 12:49:21 crc kubenswrapper[4689]: I1210 12:49:21.364751 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2950d722-7faf-4952-9526-a2e68828230c-kube-api-access-5f4qc" (OuterVolumeSpecName: "kube-api-access-5f4qc") pod "2950d722-7faf-4952-9526-a2e68828230c" (UID: "2950d722-7faf-4952-9526-a2e68828230c"). InnerVolumeSpecName "kube-api-access-5f4qc". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 10 12:49:21 crc kubenswrapper[4689]: I1210 12:49:21.385070 4689 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-grfmr/crc-debug-k5psl"]
Dec 10 12:49:21 crc kubenswrapper[4689]: I1210 12:49:21.393930 4689 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-grfmr/crc-debug-k5psl"]
Dec 10 12:49:21 crc kubenswrapper[4689]: I1210 12:49:21.460178 4689 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5f4qc\" (UniqueName: \"kubernetes.io/projected/2950d722-7faf-4952-9526-a2e68828230c-kube-api-access-5f4qc\") on node \"crc\" DevicePath \"\""
Dec 10 12:49:22 crc kubenswrapper[4689]: I1210 12:49:22.255922 4689 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="330f5664137a4d7236ae424e475589a1d1c03af4ff5539851a51843737250084"
Dec 10 12:49:22 crc kubenswrapper[4689]: I1210 12:49:22.256359 4689 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-grfmr/crc-debug-k5psl"
Dec 10 12:49:22 crc kubenswrapper[4689]: I1210 12:49:22.511687 4689 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2950d722-7faf-4952-9526-a2e68828230c" path="/var/lib/kubelet/pods/2950d722-7faf-4952-9526-a2e68828230c/volumes"
Dec 10 12:49:22 crc kubenswrapper[4689]: I1210 12:49:22.543070 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-grfmr/crc-debug-k66x6"]
Dec 10 12:49:22 crc kubenswrapper[4689]: E1210 12:49:22.543882 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2950d722-7faf-4952-9526-a2e68828230c" containerName="container-00"
Dec 10 12:49:22 crc kubenswrapper[4689]: I1210 12:49:22.544012 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="2950d722-7faf-4952-9526-a2e68828230c" containerName="container-00"
Dec 10 12:49:22 crc kubenswrapper[4689]: I1210 12:49:22.544383 4689 memory_manager.go:354] "RemoveStaleState removing state" podUID="2950d722-7faf-4952-9526-a2e68828230c" containerName="container-00"
Dec 10 12:49:22 crc kubenswrapper[4689]: I1210 12:49:22.545219 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-grfmr/crc-debug-k66x6"
Dec 10 12:49:22 crc kubenswrapper[4689]: I1210 12:49:22.549348 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-grfmr"/"default-dockercfg-w6xmg"
Dec 10 12:49:22 crc kubenswrapper[4689]: I1210 12:49:22.682323 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zjnpv\" (UniqueName: \"kubernetes.io/projected/c70a141e-ea93-4094-97db-2924059b18dd-kube-api-access-zjnpv\") pod \"crc-debug-k66x6\" (UID: \"c70a141e-ea93-4094-97db-2924059b18dd\") " pod="openshift-must-gather-grfmr/crc-debug-k66x6"
Dec 10 12:49:22 crc kubenswrapper[4689]: I1210 12:49:22.682411 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/c70a141e-ea93-4094-97db-2924059b18dd-host\") pod \"crc-debug-k66x6\" (UID: \"c70a141e-ea93-4094-97db-2924059b18dd\") " pod="openshift-must-gather-grfmr/crc-debug-k66x6"
Dec 10 12:49:22 crc kubenswrapper[4689]: I1210 12:49:22.785194 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zjnpv\" (UniqueName: \"kubernetes.io/projected/c70a141e-ea93-4094-97db-2924059b18dd-kube-api-access-zjnpv\") pod \"crc-debug-k66x6\" (UID: \"c70a141e-ea93-4094-97db-2924059b18dd\") " pod="openshift-must-gather-grfmr/crc-debug-k66x6"
Dec 10 12:49:22 crc kubenswrapper[4689]: I1210 12:49:22.785290 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/c70a141e-ea93-4094-97db-2924059b18dd-host\") pod \"crc-debug-k66x6\" (UID: \"c70a141e-ea93-4094-97db-2924059b18dd\") " pod="openshift-must-gather-grfmr/crc-debug-k66x6"
Dec 10 12:49:22 crc kubenswrapper[4689]: I1210 12:49:22.785518 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/c70a141e-ea93-4094-97db-2924059b18dd-host\") pod \"crc-debug-k66x6\" (UID: \"c70a141e-ea93-4094-97db-2924059b18dd\") " pod="openshift-must-gather-grfmr/crc-debug-k66x6"
Dec 10 12:49:22 crc kubenswrapper[4689]: I1210 12:49:22.804129 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zjnpv\" (UniqueName: \"kubernetes.io/projected/c70a141e-ea93-4094-97db-2924059b18dd-kube-api-access-zjnpv\") pod \"crc-debug-k66x6\" (UID: \"c70a141e-ea93-4094-97db-2924059b18dd\") " pod="openshift-must-gather-grfmr/crc-debug-k66x6"
Dec 10 12:49:22 crc kubenswrapper[4689]: I1210 12:49:22.874601 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-grfmr/crc-debug-k66x6"
Dec 10 12:49:23 crc kubenswrapper[4689]: I1210 12:49:23.268279 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-grfmr/crc-debug-k66x6" event={"ID":"c70a141e-ea93-4094-97db-2924059b18dd","Type":"ContainerStarted","Data":"130df6392b4a489d68bccaaf96fa1c5e68d35a91827c89c6716eaa9049bc4897"}
Dec 10 12:49:24 crc kubenswrapper[4689]: I1210 12:49:24.283452 4689 generic.go:334] "Generic (PLEG): container finished" podID="c70a141e-ea93-4094-97db-2924059b18dd" containerID="6ed3a5f7f0ba5daaa77e6e8e663036300302d04f669d89aab38ec705c05207eb" exitCode=0
Dec 10 12:49:24 crc kubenswrapper[4689]: I1210 12:49:24.283516 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-grfmr/crc-debug-k66x6" event={"ID":"c70a141e-ea93-4094-97db-2924059b18dd","Type":"ContainerDied","Data":"6ed3a5f7f0ba5daaa77e6e8e663036300302d04f669d89aab38ec705c05207eb"}
Dec 10 12:49:24 crc kubenswrapper[4689]: I1210 12:49:24.345758 4689 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-grfmr/crc-debug-k66x6"]
Dec 10 12:49:24 crc kubenswrapper[4689]: I1210 12:49:24.356910 4689 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-grfmr/crc-debug-k66x6"]
Dec 10 12:49:25 crc kubenswrapper[4689]: I1210 12:49:25.445642 4689 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-grfmr/crc-debug-k66x6"
Dec 10 12:49:25 crc kubenswrapper[4689]: I1210 12:49:25.647518 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zjnpv\" (UniqueName: \"kubernetes.io/projected/c70a141e-ea93-4094-97db-2924059b18dd-kube-api-access-zjnpv\") pod \"c70a141e-ea93-4094-97db-2924059b18dd\" (UID: \"c70a141e-ea93-4094-97db-2924059b18dd\") "
Dec 10 12:49:25 crc kubenswrapper[4689]: I1210 12:49:25.648376 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/c70a141e-ea93-4094-97db-2924059b18dd-host\") pod \"c70a141e-ea93-4094-97db-2924059b18dd\" (UID: \"c70a141e-ea93-4094-97db-2924059b18dd\") "
Dec 10 12:49:25 crc kubenswrapper[4689]: I1210 12:49:25.648619 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c70a141e-ea93-4094-97db-2924059b18dd-host" (OuterVolumeSpecName: "host") pod "c70a141e-ea93-4094-97db-2924059b18dd" (UID: "c70a141e-ea93-4094-97db-2924059b18dd"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 10 12:49:25 crc kubenswrapper[4689]: I1210 12:49:25.648864 4689 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/c70a141e-ea93-4094-97db-2924059b18dd-host\") on node \"crc\" DevicePath \"\""
Dec 10 12:49:25 crc kubenswrapper[4689]: I1210 12:49:25.675102 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c70a141e-ea93-4094-97db-2924059b18dd-kube-api-access-zjnpv" (OuterVolumeSpecName: "kube-api-access-zjnpv") pod "c70a141e-ea93-4094-97db-2924059b18dd" (UID: "c70a141e-ea93-4094-97db-2924059b18dd"). InnerVolumeSpecName "kube-api-access-zjnpv". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 10 12:49:25 crc kubenswrapper[4689]: I1210 12:49:25.750142 4689 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zjnpv\" (UniqueName: \"kubernetes.io/projected/c70a141e-ea93-4094-97db-2924059b18dd-kube-api-access-zjnpv\") on node \"crc\" DevicePath \"\""
Dec 10 12:49:26 crc kubenswrapper[4689]: I1210 12:49:26.325440 4689 scope.go:117] "RemoveContainer" containerID="6ed3a5f7f0ba5daaa77e6e8e663036300302d04f669d89aab38ec705c05207eb"
Dec 10 12:49:26 crc kubenswrapper[4689]: I1210 12:49:26.325465 4689 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-grfmr/crc-debug-k66x6"
Dec 10 12:49:26 crc kubenswrapper[4689]: I1210 12:49:26.508022 4689 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c70a141e-ea93-4094-97db-2924059b18dd" path="/var/lib/kubelet/pods/c70a141e-ea93-4094-97db-2924059b18dd/volumes"
Dec 10 12:49:37 crc kubenswrapper[4689]: I1210 12:49:37.166852 4689 patch_prober.go:28] interesting pod/machine-config-daemon-db6zk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 10 12:49:37 crc kubenswrapper[4689]: I1210 12:49:37.167539 4689 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-db6zk" podUID="a41ebdcd-910f-4669-992d-296e1a92162b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 10 12:49:37 crc kubenswrapper[4689]: I1210 12:49:37.167598 4689 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-db6zk"
Dec 10 12:49:37 crc kubenswrapper[4689]: I1210 12:49:37.168429 4689 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"5d1e2ae8645d18ab83e20a9f28e88007d921095b25a0a5bebc871640151a35c6"} pod="openshift-machine-config-operator/machine-config-daemon-db6zk" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Dec 10 12:49:37 crc kubenswrapper[4689]: I1210 12:49:37.168494 4689 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-db6zk" podUID="a41ebdcd-910f-4669-992d-296e1a92162b" containerName="machine-config-daemon" containerID="cri-o://5d1e2ae8645d18ab83e20a9f28e88007d921095b25a0a5bebc871640151a35c6" gracePeriod=600
Dec 10 12:49:37 crc kubenswrapper[4689]: I1210 12:49:37.431634 4689 generic.go:334] "Generic (PLEG): container finished" podID="a41ebdcd-910f-4669-992d-296e1a92162b" containerID="5d1e2ae8645d18ab83e20a9f28e88007d921095b25a0a5bebc871640151a35c6" exitCode=0
Dec 10 12:49:37 crc kubenswrapper[4689]: I1210 12:49:37.431767 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-db6zk" event={"ID":"a41ebdcd-910f-4669-992d-296e1a92162b","Type":"ContainerDied","Data":"5d1e2ae8645d18ab83e20a9f28e88007d921095b25a0a5bebc871640151a35c6"}
Dec 10 12:49:37 crc kubenswrapper[4689]: I1210 12:49:37.432154 4689 scope.go:117] "RemoveContainer" containerID="12c9249e05008e8640dad38e14c40fbe87bf771f5c2f12128bca16301c4173e9"
Dec 10 12:49:38 crc kubenswrapper[4689]: I1210 12:49:38.446182 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-db6zk" event={"ID":"a41ebdcd-910f-4669-992d-296e1a92162b","Type":"ContainerStarted","Data":"ce6f9dfd50adadcec59ac06d9a711336ee8675f152d7050ec5daf10842869d64"}
Dec 10 12:49:41 crc kubenswrapper[4689]: I1210 12:49:41.455928 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-74c576f5cb-ljzfq_159bb08c-9220-4d78-9b24-4b8293139a23/barbican-api/0.log"
Dec 10 12:49:41 crc kubenswrapper[4689]: I1210 12:49:41.634605 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-74c576f5cb-ljzfq_159bb08c-9220-4d78-9b24-4b8293139a23/barbican-api-log/0.log"
Dec 10 12:49:41 crc kubenswrapper[4689]: I1210 12:49:41.686647 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-7869fbbf6d-5mmw9_0ecdef57-40ee-46b4-a739-3f8fd2354018/barbican-keystone-listener/0.log"
Dec 10 12:49:41 crc kubenswrapper[4689]: I1210 12:49:41.777165 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-7869fbbf6d-5mmw9_0ecdef57-40ee-46b4-a739-3f8fd2354018/barbican-keystone-listener-log/0.log"
Dec 10 12:49:41 crc kubenswrapper[4689]: I1210 12:49:41.874131 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-6b8cb45d75-89dpw_5a9faa39-da67-436d-884a-06d93286633e/barbican-worker/0.log"
Dec 10 12:49:41 crc kubenswrapper[4689]: I1210 12:49:41.963308 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-6b8cb45d75-89dpw_5a9faa39-da67-436d-884a-06d93286633e/barbican-worker-log/0.log"
Dec 10 12:49:42 crc kubenswrapper[4689]: I1210 12:49:42.176808 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_b9c7f257-7dbf-4a84-8b3f-060db6f93454/proxy-httpd/0.log"
Dec 10 12:49:42 crc kubenswrapper[4689]: I1210 12:49:42.486083 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_b9c7f257-7dbf-4a84-8b3f-060db6f93454/ceilometer-notification-agent/0.log"
Dec 10 12:49:42 crc kubenswrapper[4689]: I1210 12:49:42.486856 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_b9c7f257-7dbf-4a84-8b3f-060db6f93454/ceilometer-central-agent/0.log"
Dec 10 12:49:42 crc kubenswrapper[4689]: I1210 12:49:42.521527 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_b9c7f257-7dbf-4a84-8b3f-060db6f93454/sg-core/0.log"
Dec 10 12:49:42 crc kubenswrapper[4689]: I1210 12:49:42.889912 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_3033a7b9-9374-47c4-89a2-188204ccd941/cinder-api-log/0.log"
Dec 10 12:49:42 crc kubenswrapper[4689]: I1210 12:49:42.900257 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_3033a7b9-9374-47c4-89a2-188204ccd941/cinder-api/0.log"
Dec 10 12:49:42 crc kubenswrapper[4689]: I1210 12:49:42.929718 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_8e8f74b8-b74a-42eb-97ec-28680f9999a4/cinder-scheduler/0.log"
Dec 10 12:49:43 crc kubenswrapper[4689]: I1210 12:49:43.104960 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_8e8f74b8-b74a-42eb-97ec-28680f9999a4/probe/0.log"
Dec 10 12:49:43 crc kubenswrapper[4689]: I1210 12:49:43.124877 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-59cf4bdb65-m75sl_e1683950-c036-44c9-9ad3-5e91fee6c3ba/init/0.log"
Dec 10 12:49:43 crc kubenswrapper[4689]: I1210 12:49:43.246297 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-59cf4bdb65-m75sl_e1683950-c036-44c9-9ad3-5e91fee6c3ba/dnsmasq-dns/0.log"
Dec 10 12:49:43 crc kubenswrapper[4689]: I1210 12:49:43.306725 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_ce240e41-0473-47c3-8349-854caa2baad2/glance-httpd/0.log"
Dec 10 12:49:43 crc kubenswrapper[4689]: I1210 12:49:43.317253 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-59cf4bdb65-m75sl_e1683950-c036-44c9-9ad3-5e91fee6c3ba/init/0.log"
Dec 10 12:49:43 crc kubenswrapper[4689]: I1210 12:49:43.435918 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_ce240e41-0473-47c3-8349-854caa2baad2/glance-log/0.log"
Dec 10 12:49:43 crc kubenswrapper[4689]: I1210 12:49:43.513961 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_3783c348-e04a-4246-ae21-d47d6bae3467/glance-httpd/0.log"
Dec 10 12:49:43 crc kubenswrapper[4689]: I1210 12:49:43.571366 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_3783c348-e04a-4246-ae21-d47d6bae3467/glance-log/0.log"
Dec 10 12:49:43 crc kubenswrapper[4689]: I1210 12:49:43.711727 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ironic-conductor-0_23e46f1d-5919-4baa-aeef-1364104b63fb/init/0.log"
Dec 10 12:49:43 crc kubenswrapper[4689]: I1210 12:49:43.903532 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ironic-conductor-0_23e46f1d-5919-4baa-aeef-1364104b63fb/ironic-python-agent-init/0.log"
Dec 10 12:49:43 crc kubenswrapper[4689]: I1210 12:49:43.912199 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ironic-conductor-0_23e46f1d-5919-4baa-aeef-1364104b63fb/init/0.log"
Dec 10 12:49:44 crc kubenswrapper[4689]: I1210 12:49:44.010084 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ironic-conductor-0_23e46f1d-5919-4baa-aeef-1364104b63fb/ironic-python-agent-init/0.log"
Dec 10 12:49:44 crc kubenswrapper[4689]: I1210 12:49:44.188330 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ironic-conductor-0_23e46f1d-5919-4baa-aeef-1364104b63fb/init/0.log"
Dec 10 12:49:44 crc kubenswrapper[4689]: I1210 12:49:44.202365 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ironic-conductor-0_23e46f1d-5919-4baa-aeef-1364104b63fb/ironic-python-agent-init/0.log"
Dec 10 12:49:44 crc kubenswrapper[4689]: I1210 12:49:44.596391 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ironic-conductor-0_23e46f1d-5919-4baa-aeef-1364104b63fb/init/0.log"
Dec 10 12:49:44 crc kubenswrapper[4689]: I1210 12:49:44.811087 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ironic-conductor-0_23e46f1d-5919-4baa-aeef-1364104b63fb/ironic-python-agent-init/0.log"
Dec 10 12:49:44 crc kubenswrapper[4689]: I1210 12:49:44.865892 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ironic-conductor-0_23e46f1d-5919-4baa-aeef-1364104b63fb/pxe-init/0.log"
Dec 10 12:49:45 crc kubenswrapper[4689]: I1210 12:49:45.102386 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ironic-conductor-0_23e46f1d-5919-4baa-aeef-1364104b63fb/httpboot/0.log"
Dec 10 12:49:45 crc kubenswrapper[4689]: I1210 12:49:45.400948 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ironic-conductor-0_23e46f1d-5919-4baa-aeef-1364104b63fb/ironic-conductor/0.log"
Dec 10 12:49:45 crc kubenswrapper[4689]: I1210 12:49:45.642353 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ironic-conductor-0_23e46f1d-5919-4baa-aeef-1364104b63fb/ramdisk-logs/0.log"
Dec 10 12:49:45 crc kubenswrapper[4689]: I1210 12:49:45.649788 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ironic-conductor-0_23e46f1d-5919-4baa-aeef-1364104b63fb/pxe-init/0.log"
Dec 10 12:49:45 crc kubenswrapper[4689]: I1210 12:49:45.829128 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ironic-conductor-0_23e46f1d-5919-4baa-aeef-1364104b63fb/pxe-init/0.log"
Dec 10 12:49:45 crc kubenswrapper[4689]: I1210 12:49:45.879429 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ironic-db-sync-jz2g4_574d5244-06f3-49f9-b8b8-93bd57d4fc35/init/0.log"
Dec 10 12:49:46 crc kubenswrapper[4689]: I1210 12:49:46.066316 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ironic-conductor-0_23e46f1d-5919-4baa-aeef-1364104b63fb/pxe-init/0.log"
Dec 10 12:49:46 crc kubenswrapper[4689]: I1210 12:49:46.085132 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ironic-db-sync-jz2g4_574d5244-06f3-49f9-b8b8-93bd57d4fc35/init/0.log"
Dec 10 12:49:46 crc kubenswrapper[4689]: I1210 12:49:46.121691 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ironic-db-sync-jz2g4_574d5244-06f3-49f9-b8b8-93bd57d4fc35/ironic-db-sync/0.log"
Dec 10 12:49:46 crc kubenswrapper[4689]: I1210 12:49:46.169997 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ironic-dbcd8ff8b-j52fs_8c984e40-d3ee-426e-ab51-c576bc699e11/init/0.log"
Dec 10 12:49:46 crc kubenswrapper[4689]: I1210 12:49:46.363952 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ironic-dbcd8ff8b-j52fs_8c984e40-d3ee-426e-ab51-c576bc699e11/init/0.log"
Dec 10 12:49:46 crc kubenswrapper[4689]: I1210 12:49:46.405866 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ironic-dbcd8ff8b-j52fs_8c984e40-d3ee-426e-ab51-c576bc699e11/ironic-api/0.log"
Dec 10 12:49:46 crc kubenswrapper[4689]: I1210 12:49:46.424895 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ironic-dbcd8ff8b-j52fs_8c984e40-d3ee-426e-ab51-c576bc699e11/ironic-api-log/0.log"
Dec 10 12:49:46 crc kubenswrapper[4689]: I1210 12:49:46.434200 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ironic-inspector-0_5a770426-b384-4fc7-acc0-fa42ff536a9b/ironic-python-agent-init/0.log"
Dec 10 12:49:46 crc kubenswrapper[4689]: I1210 12:49:46.621783 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ironic-inspector-0_5a770426-b384-4fc7-acc0-fa42ff536a9b/ironic-python-agent-init/0.log"
Dec 10 12:49:46 crc kubenswrapper[4689]: I1210 12:49:46.725539 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ironic-inspector-0_5a770426-b384-4fc7-acc0-fa42ff536a9b/inspector-pxe-init/0.log"
Dec 10 12:49:46 crc kubenswrapper[4689]: I1210 12:49:46.829852 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ironic-inspector-0_5a770426-b384-4fc7-acc0-fa42ff536a9b/inspector-pxe-init/0.log"
Dec 10 12:49:46 crc kubenswrapper[4689]: I1210 12:49:46.961938 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ironic-inspector-0_5a770426-b384-4fc7-acc0-fa42ff536a9b/ironic-python-agent-init/0.log"
Dec 10 12:49:46 crc kubenswrapper[4689]: I1210 12:49:46.980872 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ironic-inspector-0_5a770426-b384-4fc7-acc0-fa42ff536a9b/inspector-pxe-init/0.log"
Dec 10 12:49:47 crc kubenswrapper[4689]: I1210 12:49:47.011761 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ironic-inspector-0_5a770426-b384-4fc7-acc0-fa42ff536a9b/inspector-httpboot/0.log"
Dec 10 12:49:47 crc kubenswrapper[4689]: I1210 12:49:47.052660 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ironic-inspector-0_5a770426-b384-4fc7-acc0-fa42ff536a9b/ironic-inspector/1.log"
Dec 10 12:49:47 crc kubenswrapper[4689]: I1210 12:49:47.053946 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ironic-inspector-0_5a770426-b384-4fc7-acc0-fa42ff536a9b/ironic-inspector/2.log"
Dec 10 12:49:47 crc kubenswrapper[4689]: I1210 12:49:47.241127 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ironic-inspector-0_5a770426-b384-4fc7-acc0-fa42ff536a9b/ironic-inspector-httpd/0.log"
Dec 10 12:49:47 crc kubenswrapper[4689]: I1210 12:49:47.335489 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ironic-inspector-0_5a770426-b384-4fc7-acc0-fa42ff536a9b/ramdisk-logs/0.log"
Dec 10 12:49:47 crc kubenswrapper[4689]: I1210 12:49:47.342207 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ironic-inspector-db-sync-9zldg_0ebb276c-ffb4-490e-bf4b-c55c0c49aa43/ironic-inspector-db-sync/0.log"
Dec 10 12:49:47 crc kubenswrapper[4689]: I1210 12:49:47.501028 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ironic-neutron-agent-6f4566d7bf-hkx2g_234f8267-1974-4f9e-9d13-8a239ff2660c/ironic-neutron-agent/2.log"
Dec 10 12:49:47 crc kubenswrapper[4689]: I1210 12:49:47.511166 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ironic-neutron-agent-6f4566d7bf-hkx2g_234f8267-1974-4f9e-9d13-8a239ff2660c/ironic-neutron-agent/1.log"
Dec 10 12:49:47 crc kubenswrapper[4689]: I1210 12:49:47.721929 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-85bc68c5bb-jxqf7_5ab8f9bd-1d66-4142-afe8-1cfce8e5f736/keystone-api/0.log"
Dec 10 12:49:47 crc kubenswrapper[4689]: I1210 12:49:47.753988 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_kube-state-metrics-0_9e13ec23-1267-498e-9d74-fcfc8aee69b6/kube-state-metrics/0.log"
Dec 10 12:49:48 crc kubenswrapper[4689]: I1210 12:49:48.022129 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-67dc569cfc-xmxfj_84537f57-e77b-4147-99e4-d22fa43780cb/neutron-httpd/0.log"
Dec 10 12:49:48 crc kubenswrapper[4689]: I1210 12:49:48.058376 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-67dc569cfc-xmxfj_84537f57-e77b-4147-99e4-d22fa43780cb/neutron-api/0.log"
Dec 10 12:49:48 crc kubenswrapper[4689]: I1210 12:49:48.293821 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_f9ec0888-b017-49bb-b11e-feb543a1db7e/nova-api-api/0.log"
Dec 10 12:49:48 crc kubenswrapper[4689]: I1210 12:49:48.381201 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_f9ec0888-b017-49bb-b11e-feb543a1db7e/nova-api-log/0.log"
Dec 10 12:49:48 crc kubenswrapper[4689]: I1210 12:49:48.417174 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell0-conductor-0_3e65d1ea-f92e-4a74-944d-f9fcc1f1ae34/nova-cell0-conductor-conductor/0.log"
Dec 10 12:49:48 crc kubenswrapper[4689]: I1210 12:49:48.697192 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-conductor-0_ab709567-2151-49a7-b199-aeca8ee0ae19/nova-cell1-conductor-conductor/0.log"
Dec 10 12:49:48 crc kubenswrapper[4689]: I1210 12:49:48.780350 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-novncproxy-0_c9d9a204-d8fb-4bb8-b864-14178e550382/nova-cell1-novncproxy-novncproxy/0.log"
Dec 10 12:49:49 crc kubenswrapper[4689]: I1210 12:49:49.004862 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_11d4f9aa-dff6-4df0-9f6e-ead4097006a0/nova-metadata-log/0.log"
Dec 10 12:49:49 crc kubenswrapper[4689]: I1210 12:49:49.230123 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-scheduler-0_11566d27-371a-412a-a9f0-b147b642f173/nova-scheduler-scheduler/0.log"
Dec 10 12:49:49 crc kubenswrapper[4689]: I1210 12:49:49.305540 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_11d4f9aa-dff6-4df0-9f6e-ead4097006a0/nova-metadata-metadata/0.log"
Dec 10 12:49:49 crc kubenswrapper[4689]: I1210 12:49:49.487830 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_49777225-4829-4cb0-bdd3-3e29ee4f0518/mysql-bootstrap/0.log"
Dec 10 12:49:49 crc kubenswrapper[4689]: I1210 12:49:49.660610 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_49777225-4829-4cb0-bdd3-3e29ee4f0518/galera/0.log"
Dec 10 12:49:49 crc kubenswrapper[4689]: I1210 12:49:49.772535 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_5de970c2-b559-4b8a-86f2-85b07c2292b1/mysql-bootstrap/0.log"
Dec 10 12:49:49 crc kubenswrapper[4689]: I1210 12:49:49.772947 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_49777225-4829-4cb0-bdd3-3e29ee4f0518/mysql-bootstrap/0.log"
Dec 10 12:49:49 crc kubenswrapper[4689]: I1210 12:49:49.974670 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_5de970c2-b559-4b8a-86f2-85b07c2292b1/mysql-bootstrap/0.log"
Dec 10 12:49:50 crc kubenswrapper[4689]: I1210 12:49:50.050181 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_5de970c2-b559-4b8a-86f2-85b07c2292b1/galera/0.log"
Dec 10 12:49:50 crc kubenswrapper[4689]: I1210 12:49:50.224224 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstackclient_9ba6aa57-fcd5-4e81-aeec-18115df06abb/openstackclient/0.log"
Dec 10 12:49:50 crc kubenswrapper[4689]: I1210 12:49:50.479127 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-jpztp_58cb894b-f745-4d93-8925-193c6ff871a6/ovn-controller/0.log"
Dec 10 12:49:50 crc kubenswrapper[4689]: I1210 12:49:50.514361 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-q4rdb_ed1dfb51-596d-4779-9d07-37b566b30adf/openstack-network-exporter/0.log"
Dec 10 12:49:50 crc kubenswrapper[4689]: I1210 12:49:50.713840 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-29j5j_a1966558-be4f-4607-a746-739845ac6c46/ovsdb-server-init/0.log"
Dec 10 12:49:50 crc kubenswrapper[4689]: I1210 12:49:50.913726 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-29j5j_a1966558-be4f-4607-a746-739845ac6c46/ovsdb-server/0.log"
Dec 10 12:49:50 crc kubenswrapper[4689]: I1210 12:49:50.952844 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-29j5j_a1966558-be4f-4607-a746-739845ac6c46/ovsdb-server-init/0.log"
Dec 10 12:49:50 crc kubenswrapper[4689]: I1210 12:49:50.974745 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-29j5j_a1966558-be4f-4607-a746-739845ac6c46/ovs-vswitchd/0.log"
Dec 10 12:49:51 crc kubenswrapper[4689]: I1210 12:49:51.132520 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_1d70ed4a-58eb-456e-bd2a-9b3c199a94bf/openstack-network-exporter/0.log"
Dec 10 12:49:51 crc kubenswrapper[4689]: I1210 12:49:51.184760 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_1d70ed4a-58eb-456e-bd2a-9b3c199a94bf/ovn-northd/0.log"
Dec 10 12:49:51 crc kubenswrapper[4689]: I1210 12:49:51.231318 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_9c534934-f8a1-4029-8135-b190d180128a/openstack-network-exporter/0.log"
Dec 10 12:49:51 crc kubenswrapper[4689]: I1210 12:49:51.346815 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_9c534934-f8a1-4029-8135-b190d180128a/ovsdbserver-nb/0.log"
Dec 10 12:49:51 crc kubenswrapper[4689]: I1210 12:49:51.451485 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_c18b8eee-7b2e-494c-b4e4-7896aa80a1ed/openstack-network-exporter/0.log"
Dec 10 12:49:51 crc kubenswrapper[4689]: I1210 12:49:51.521953 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_c18b8eee-7b2e-494c-b4e4-7896aa80a1ed/ovsdbserver-sb/0.log"
Dec 10 12:49:51 crc kubenswrapper[4689]: I1210 12:49:51.709102 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-b85ffb7d4-zq54p_6708e3ec-a080-4f8d-a9c5-2821ea678717/placement-api/0.log"
Dec 10 12:49:51 crc kubenswrapper[4689]: I1210 12:49:51.739697 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-b85ffb7d4-zq54p_6708e3ec-a080-4f8d-a9c5-2821ea678717/placement-log/0.log"
Dec 10 12:49:51 crc kubenswrapper[4689]: I1210 12:49:51.847606 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_400622a6-9529-4712-85a9-03f48e2b1819/setup-container/0.log"
Dec 10 12:49:52 crc kubenswrapper[4689]: I1210 12:49:52.080747 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_400622a6-9529-4712-85a9-03f48e2b1819/setup-container/0.log"
Dec 10 12:49:52 crc kubenswrapper[4689]: I1210 12:49:52.096771 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_148aec72-272a-46b6-a75f-46dc2b680101/setup-container/0.log"
Dec 10 12:49:52 crc kubenswrapper[4689]: I1210 12:49:52.103253 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_400622a6-9529-4712-85a9-03f48e2b1819/rabbitmq/0.log"
Dec 10 12:49:52 crc kubenswrapper[4689]: I1210 12:49:52.402883 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_148aec72-272a-46b6-a75f-46dc2b680101/rabbitmq/0.log"
Dec 10 12:49:52 crc kubenswrapper[4689]: I1210 12:49:52.444078 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_148aec72-272a-46b6-a75f-46dc2b680101/setup-container/0.log"
Dec 10 12:49:52 crc kubenswrapper[4689]: I1210 12:49:52.498049 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-799574f579-slqvw_7a378a34-ca1b-41cb-85d9-97d124f4f6dc/proxy-httpd/0.log"
Dec 10 12:49:52 crc kubenswrapper[4689]: I1210 12:49:52.643452 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-799574f579-slqvw_7a378a34-ca1b-41cb-85d9-97d124f4f6dc/proxy-server/0.log"
Dec 10 12:49:52 crc kubenswrapper[4689]: I1210 12:49:52.705061 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-ring-rebalance-2mlkq_a4b24dc3-c04f-46a0-a0c0-0240d3d767cd/swift-ring-rebalance/0.log"
Dec 10 12:49:52 crc kubenswrapper[4689]: I1210 12:49:52.875052 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_c4b54476-e438-46d8-b234-c8f661f5c26f/account-reaper/0.log"
Dec 10 12:49:53 crc kubenswrapper[4689]: I1210 12:49:53.106107 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_c4b54476-e438-46d8-b234-c8f661f5c26f/account-auditor/0.log"
Dec 10 12:49:53 crc kubenswrapper[4689]: I1210 12:49:53.117373 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_c4b54476-e438-46d8-b234-c8f661f5c26f/account-replicator/0.log"
Dec 10 12:49:53 crc kubenswrapper[4689]: I1210 12:49:53.166211 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_c4b54476-e438-46d8-b234-c8f661f5c26f/account-server/0.log"
Dec 10 12:49:53 crc kubenswrapper[4689]: I1210 12:49:53.276942 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_c4b54476-e438-46d8-b234-c8f661f5c26f/container-auditor/0.log"
Dec 10 12:49:53 crc kubenswrapper[4689]: I1210 12:49:53.280751 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_c4b54476-e438-46d8-b234-c8f661f5c26f/container-replicator/0.log"
Dec 10 12:49:53 crc kubenswrapper[4689]: I1210 12:49:53.312489 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_c4b54476-e438-46d8-b234-c8f661f5c26f/container-server/0.log"
Dec 10 12:49:53 crc kubenswrapper[4689]: I1210 12:49:53.377239 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_c4b54476-e438-46d8-b234-c8f661f5c26f/container-updater/0.log"
Dec 10 12:49:53 crc kubenswrapper[4689]: I1210 12:49:53.501979 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_c4b54476-e438-46d8-b234-c8f661f5c26f/object-auditor/0.log"
Dec 10 12:49:53 crc kubenswrapper[4689]: I1210 12:49:53.515256 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_c4b54476-e438-46d8-b234-c8f661f5c26f/object-replicator/0.log"
Dec 10 12:49:53 crc kubenswrapper[4689]: I1210 12:49:53.540494 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_c4b54476-e438-46d8-b234-c8f661f5c26f/object-expirer/0.log"
Dec 10 12:49:53 crc kubenswrapper[4689]: I1210 12:49:53.636004 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_c4b54476-e438-46d8-b234-c8f661f5c26f/object-server/0.log"
Dec 10 12:49:53 crc kubenswrapper[4689]: I1210 12:49:53.720268 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_c4b54476-e438-46d8-b234-c8f661f5c26f/object-updater/0.log"
Dec 10 12:49:53 crc kubenswrapper[4689]: I1210 12:49:53.731682 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_c4b54476-e438-46d8-b234-c8f661f5c26f/rsync/0.log"
Dec 10 12:49:53 crc kubenswrapper[4689]: I1210 12:49:53.782806 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_c4b54476-e438-46d8-b234-c8f661f5c26f/swift-recon-cron/0.log"
Dec 10 12:49:57 crc kubenswrapper[4689]: I1210 12:49:57.240028 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_memcached-0_e5d9caa2-f209-4a7f-a0d8-353aa111c264/memcached/0.log"
Dec 10 12:50:16 crc kubenswrapper[4689]: I1210 12:50:16.761778 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_941ec9b01b7a72127ba64306553d6766ab71ba59f32f9ca01c76932a62hvf45_c5985c32-65e5-453a-a56e-a95411e80db0/util/0.log"
Dec 10 12:50:16 crc kubenswrapper[4689]: I1210 12:50:16.936892 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_941ec9b01b7a72127ba64306553d6766ab71ba59f32f9ca01c76932a62hvf45_c5985c32-65e5-453a-a56e-a95411e80db0/pull/0.log"
Dec 10 12:50:16 crc kubenswrapper[4689]: I1210 12:50:16.966193 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_941ec9b01b7a72127ba64306553d6766ab71ba59f32f9ca01c76932a62hvf45_c5985c32-65e5-453a-a56e-a95411e80db0/util/0.log"
Dec 10 12:50:17 crc kubenswrapper[4689]: I1210 12:50:17.003160 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_941ec9b01b7a72127ba64306553d6766ab71ba59f32f9ca01c76932a62hvf45_c5985c32-65e5-453a-a56e-a95411e80db0/pull/0.log"
Dec 10 12:50:17 crc kubenswrapper[4689]: I1210 12:50:17.184608 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_941ec9b01b7a72127ba64306553d6766ab71ba59f32f9ca01c76932a62hvf45_c5985c32-65e5-453a-a56e-a95411e80db0/pull/0.log"
Dec 10 12:50:17 crc kubenswrapper[4689]: I1210 12:50:17.191140 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_941ec9b01b7a72127ba64306553d6766ab71ba59f32f9ca01c76932a62hvf45_c5985c32-65e5-453a-a56e-a95411e80db0/util/0.log"
Dec 10 12:50:17 crc kubenswrapper[4689]: I1210 12:50:17.246300 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_941ec9b01b7a72127ba64306553d6766ab71ba59f32f9ca01c76932a62hvf45_c5985c32-65e5-453a-a56e-a95411e80db0/extract/0.log"
Dec 10 12:50:17 crc kubenswrapper[4689]: I1210 12:50:17.374802 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-7d9dfd778-46w9f_ec9c74bb-c8dc-409b-817c-74963a395df8/kube-rbac-proxy/0.log"
Dec 10 12:50:17 crc kubenswrapper[4689]: I1210 12:50:17.466888 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-7d9dfd778-46w9f_ec9c74bb-c8dc-409b-817c-74963a395df8/manager/0.log"
Dec 10 12:50:17 crc kubenswrapper[4689]: I1210 12:50:17.555756 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-6c677c69b-4hdrw_e6db2b03-cb28-4161-bae6-6eecce28c871/kube-rbac-proxy/0.log"
Dec 10 12:50:17 crc kubenswrapper[4689]: I1210 12:50:17.609790 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-6c677c69b-4hdrw_e6db2b03-cb28-4161-bae6-6eecce28c871/manager/0.log"
Dec 10 12:50:17 crc kubenswrapper[4689]: I1210 12:50:17.732216 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-697fb699cf-dtclt_0b413153-162d-46ad-9b9a-b44869127ee7/manager/0.log"
Dec 10 12:50:17 crc kubenswrapper[4689]: I1210 12:50:17.757928 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-697fb699cf-dtclt_0b413153-162d-46ad-9b9a-b44869127ee7/kube-rbac-proxy/0.log"
Dec 10 12:50:17 crc kubenswrapper[4689]: I1210 12:50:17.910392 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-5697bb5779-tzbn5_670989a1-8b21-473a-8624-862930a7d70b/kube-rbac-proxy/0.log"
Dec 10 12:50:18 crc kubenswrapper[4689]: I1210 12:50:18.049940 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-5697bb5779-tzbn5_670989a1-8b21-473a-8624-862930a7d70b/manager/0.log"
Dec 10 12:50:18 crc kubenswrapper[4689]: I1210 12:50:18.097587 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-5f64f6f8bb-6fmnw_e9f4ae72-b49e-4144-a49b-72c2bbd1b77c/kube-rbac-proxy/0.log"
Dec 10 12:50:18 crc kubenswrapper[4689]: I1210 12:50:18.118555 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-5f64f6f8bb-6fmnw_e9f4ae72-b49e-4144-a49b-72c2bbd1b77c/manager/0.log"
Dec 10 12:50:18 crc kubenswrapper[4689]: I1210 12:50:18.245514 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-68c6d99b8f-gk2zb_c70e0866-b017-4945-9e4b-c69eec327948/kube-rbac-proxy/0.log"
Dec 10 12:50:18 crc kubenswrapper[4689]: I1210 12:50:18.314662 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-68c6d99b8f-gk2zb_c70e0866-b017-4945-9e4b-c69eec327948/manager/0.log"
Dec 10 12:50:18 crc kubenswrapper[4689]: I1210 12:50:18.418581 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-78d48bff9d-lkvwb_13ec50ac-3e46-4615-88f9-070c7a647158/kube-rbac-proxy/0.log"
Dec 10 12:50:18 crc kubenswrapper[4689]: I1210 12:50:18.545365 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-69f4484999-lb8n4_e1c471e3-8ecc-4db9-95c0-a4a13e287aba/kube-rbac-proxy/0.log"
Dec 10 12:50:18 crc kubenswrapper[4689]: I1210 12:50:18.656510 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-78d48bff9d-lkvwb_13ec50ac-3e46-4615-88f9-070c7a647158/manager/0.log"
Dec 10 12:50:18 crc kubenswrapper[4689]: I1210 12:50:18.866308 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-69f4484999-lb8n4_e1c471e3-8ecc-4db9-95c0-a4a13e287aba/manager/0.log"
Dec 10 12:50:18 crc kubenswrapper[4689]: I1210 12:50:18.878053 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-59fd99cc6f-gdgcj_c346bbde-239c-4f76-91da-c4116ad0a487/kube-rbac-proxy/0.log"
Dec 10 12:50:19 crc kubenswrapper[4689]: I1210 12:50:19.011605 4689 log.go:25] "Finished
parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-59fd99cc6f-gdgcj_c346bbde-239c-4f76-91da-c4116ad0a487/manager/0.log" Dec 10 12:50:19 crc kubenswrapper[4689]: I1210 12:50:19.057671 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-5b5fd79c9c-n69ks_4dc52122-5456-453a-9d5f-d2fce910bb61/kube-rbac-proxy/0.log" Dec 10 12:50:19 crc kubenswrapper[4689]: I1210 12:50:19.101618 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-5b5fd79c9c-n69ks_4dc52122-5456-453a-9d5f-d2fce910bb61/manager/0.log" Dec 10 12:50:19 crc kubenswrapper[4689]: I1210 12:50:19.258293 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-79c8c4686c-rwg47_8209377b-970c-4faf-ac5b-1e429d2bdccd/manager/0.log" Dec 10 12:50:19 crc kubenswrapper[4689]: I1210 12:50:19.265763 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-79c8c4686c-rwg47_8209377b-970c-4faf-ac5b-1e429d2bdccd/kube-rbac-proxy/0.log" Dec 10 12:50:19 crc kubenswrapper[4689]: I1210 12:50:19.462732 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-5fdfd5b6b5-66xh9_7ca7fc40-0bb8-402f-9e73-d1d267340b28/kube-rbac-proxy/0.log" Dec 10 12:50:19 crc kubenswrapper[4689]: I1210 12:50:19.489955 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-697bc559fc-qlxtq_dd93e8ba-afd7-4d03-917f-873352cfefc8/kube-rbac-proxy/0.log" Dec 10 12:50:19 crc kubenswrapper[4689]: I1210 12:50:19.552400 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-5fdfd5b6b5-66xh9_7ca7fc40-0bb8-402f-9e73-d1d267340b28/manager/0.log" Dec 10 12:50:19 crc kubenswrapper[4689]: I1210 12:50:19.705549 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-697bc559fc-qlxtq_dd93e8ba-afd7-4d03-917f-873352cfefc8/manager/0.log" Dec 10 12:50:19 crc kubenswrapper[4689]: I1210 12:50:19.773825 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-998648c74-pwvc2_04286780-356c-4f76-9168-5a80c36d2aa3/kube-rbac-proxy/0.log" Dec 10 12:50:19 crc kubenswrapper[4689]: I1210 12:50:19.825283 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-998648c74-pwvc2_04286780-356c-4f76-9168-5a80c36d2aa3/manager/0.log" Dec 10 12:50:19 crc kubenswrapper[4689]: I1210 12:50:19.968706 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-84b575879fwx7br_3ab4495a-ff87-4ae7-b343-17ff4dcfeb5a/manager/0.log" Dec 10 12:50:19 crc kubenswrapper[4689]: I1210 12:50:19.980101 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-84b575879fwx7br_3ab4495a-ff87-4ae7-b343-17ff4dcfeb5a/kube-rbac-proxy/0.log" Dec 10 12:50:20 crc kubenswrapper[4689]: I1210 12:50:20.444388 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-operator-6476f95cff-ktc7c_a138024c-6885-4a4b-abc6-e4cec00348d6/operator/0.log" Dec 10 12:50:20 crc kubenswrapper[4689]: I1210 
12:50:20.447525 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-4crnk_baa246ec-275c-44ad-9e71-8aace1bf29b0/registry-server/0.log" Dec 10 12:50:20 crc kubenswrapper[4689]: I1210 12:50:20.623887 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-b6456fdb6-v7wvl_8c68b5ee-e36f-428f-8b70-480581f7e120/kube-rbac-proxy/0.log" Dec 10 12:50:20 crc kubenswrapper[4689]: I1210 12:50:20.755915 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-b6456fdb6-v7wvl_8c68b5ee-e36f-428f-8b70-480581f7e120/manager/0.log" Dec 10 12:50:20 crc kubenswrapper[4689]: I1210 12:50:20.817814 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-78f8948974-44v2s_e616a259-dbf9-469a-987e-b3a6f36044a4/kube-rbac-proxy/0.log" Dec 10 12:50:20 crc kubenswrapper[4689]: I1210 12:50:20.848337 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-78f8948974-44v2s_e616a259-dbf9-469a-987e-b3a6f36044a4/manager/0.log" Dec 10 12:50:20 crc kubenswrapper[4689]: I1210 12:50:20.998781 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-czdwk_38a45de6-7988-4cb1-86b8-0164c52f2dc5/operator/0.log" Dec 10 12:50:21 crc kubenswrapper[4689]: I1210 12:50:21.044737 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-5997c5ddf6-h459p_af7cc69a-a411-43ec-b32e-41e6a343388b/manager/0.log" Dec 10 12:50:21 crc kubenswrapper[4689]: I1210 12:50:21.055206 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-9d58d64bc-d995k_08c0b74f-cdff-4fc6-b2cf-4d61fdd4177d/kube-rbac-proxy/0.log" Dec 10 12:50:21 crc kubenswrapper[4689]: I1210 12:50:21.223053 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-9d58d64bc-d995k_08c0b74f-cdff-4fc6-b2cf-4d61fdd4177d/manager/0.log" Dec 10 12:50:21 crc kubenswrapper[4689]: I1210 12:50:21.249099 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-58d5ff84df-8xrb5_9e2487e5-677b-4344-9ab2-d419e03876f2/kube-rbac-proxy/0.log" Dec 10 12:50:21 crc kubenswrapper[4689]: I1210 12:50:21.338363 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-58d5ff84df-8xrb5_9e2487e5-677b-4344-9ab2-d419e03876f2/manager/0.log" Dec 10 12:50:21 crc kubenswrapper[4689]: I1210 12:50:21.421421 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-5854674fcc-fghxw_20ec28ba-f929-4e94-833b-24a213da89a6/kube-rbac-proxy/0.log" Dec 10 12:50:21 crc kubenswrapper[4689]: I1210 12:50:21.439766 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-5854674fcc-fghxw_20ec28ba-f929-4e94-833b-24a213da89a6/manager/0.log" Dec 10 12:50:21 crc kubenswrapper[4689]: I1210 12:50:21.546603 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-75944c9b7-rq8x6_88c6fb38-fa6d-497e-87c7-32833f1b5a04/manager/0.log" Dec 10 12:50:21 crc kubenswrapper[4689]: I1210 
12:50:21.564856 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-75944c9b7-rq8x6_88c6fb38-fa6d-497e-87c7-32833f1b5a04/kube-rbac-proxy/0.log" Dec 10 12:50:40 crc kubenswrapper[4689]: I1210 12:50:40.807297 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-rn6lb_38480cb6-15a9-450a-9efd-71a7d346ef7c/control-plane-machine-set-operator/0.log" Dec 10 12:50:40 crc kubenswrapper[4689]: I1210 12:50:40.980879 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-s8lmw_3eece420-e32c-4ea1-91f8-cc96bf144467/kube-rbac-proxy/0.log" Dec 10 12:50:41 crc kubenswrapper[4689]: I1210 12:50:41.009328 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-s8lmw_3eece420-e32c-4ea1-91f8-cc96bf144467/machine-api-operator/0.log" Dec 10 12:50:49 crc kubenswrapper[4689]: I1210 12:50:49.740033 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-dtww7"] Dec 10 12:50:49 crc kubenswrapper[4689]: E1210 12:50:49.741062 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c70a141e-ea93-4094-97db-2924059b18dd" containerName="container-00" Dec 10 12:50:49 crc kubenswrapper[4689]: I1210 12:50:49.741079 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="c70a141e-ea93-4094-97db-2924059b18dd" containerName="container-00" Dec 10 12:50:49 crc kubenswrapper[4689]: I1210 12:50:49.741352 4689 memory_manager.go:354] "RemoveStaleState removing state" podUID="c70a141e-ea93-4094-97db-2924059b18dd" containerName="container-00" Dec 10 12:50:49 crc kubenswrapper[4689]: I1210 12:50:49.743177 4689 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-dtww7" Dec 10 12:50:49 crc kubenswrapper[4689]: I1210 12:50:49.750826 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-dtww7"] Dec 10 12:50:49 crc kubenswrapper[4689]: I1210 12:50:49.840774 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d9fb148a-68c0-465f-a71c-ccf40bca7940-utilities\") pod \"community-operators-dtww7\" (UID: \"d9fb148a-68c0-465f-a71c-ccf40bca7940\") " pod="openshift-marketplace/community-operators-dtww7" Dec 10 12:50:49 crc kubenswrapper[4689]: I1210 12:50:49.841117 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d9fb148a-68c0-465f-a71c-ccf40bca7940-catalog-content\") pod \"community-operators-dtww7\" (UID: \"d9fb148a-68c0-465f-a71c-ccf40bca7940\") " pod="openshift-marketplace/community-operators-dtww7" Dec 10 12:50:49 crc kubenswrapper[4689]: I1210 12:50:49.841148 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k769l\" (UniqueName: \"kubernetes.io/projected/d9fb148a-68c0-465f-a71c-ccf40bca7940-kube-api-access-k769l\") pod \"community-operators-dtww7\" (UID: \"d9fb148a-68c0-465f-a71c-ccf40bca7940\") " pod="openshift-marketplace/community-operators-dtww7" Dec 10 12:50:49 crc kubenswrapper[4689]: I1210 12:50:49.943333 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d9fb148a-68c0-465f-a71c-ccf40bca7940-utilities\") pod \"community-operators-dtww7\" (UID: \"d9fb148a-68c0-465f-a71c-ccf40bca7940\") " pod="openshift-marketplace/community-operators-dtww7" Dec 10 12:50:49 crc kubenswrapper[4689]: I1210 12:50:49.943791 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d9fb148a-68c0-465f-a71c-ccf40bca7940-utilities\") pod \"community-operators-dtww7\" (UID: \"d9fb148a-68c0-465f-a71c-ccf40bca7940\") " pod="openshift-marketplace/community-operators-dtww7" Dec 10 12:50:49 crc kubenswrapper[4689]: I1210 12:50:49.943859 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d9fb148a-68c0-465f-a71c-ccf40bca7940-catalog-content\") pod \"community-operators-dtww7\" (UID: \"d9fb148a-68c0-465f-a71c-ccf40bca7940\") " pod="openshift-marketplace/community-operators-dtww7" Dec 10 12:50:49 crc kubenswrapper[4689]: I1210 12:50:49.943941 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k769l\" (UniqueName: \"kubernetes.io/projected/d9fb148a-68c0-465f-a71c-ccf40bca7940-kube-api-access-k769l\") pod \"community-operators-dtww7\" (UID: \"d9fb148a-68c0-465f-a71c-ccf40bca7940\") " pod="openshift-marketplace/community-operators-dtww7" Dec 10 12:50:49 crc kubenswrapper[4689]: I1210 12:50:49.944309 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d9fb148a-68c0-465f-a71c-ccf40bca7940-catalog-content\") pod \"community-operators-dtww7\" (UID: \"d9fb148a-68c0-465f-a71c-ccf40bca7940\") " pod="openshift-marketplace/community-operators-dtww7" Dec 10 12:50:49 crc kubenswrapper[4689]: I1210 12:50:49.973169 4689 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-k769l\" (UniqueName: \"kubernetes.io/projected/d9fb148a-68c0-465f-a71c-ccf40bca7940-kube-api-access-k769l\") pod \"community-operators-dtww7\" (UID: \"d9fb148a-68c0-465f-a71c-ccf40bca7940\") " pod="openshift-marketplace/community-operators-dtww7" Dec 10 12:50:50 crc kubenswrapper[4689]: I1210 12:50:50.077808 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-dtww7" Dec 10 12:50:50 crc kubenswrapper[4689]: I1210 12:50:50.624692 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-dtww7"] Dec 10 12:50:51 crc kubenswrapper[4689]: I1210 12:50:51.081022 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dtww7" event={"ID":"d9fb148a-68c0-465f-a71c-ccf40bca7940","Type":"ContainerStarted","Data":"6b2cad86efbb2f076264c9b5f999762d1cf92938643cfef027bfa3a2059ca5b2"} Dec 10 12:50:51 crc kubenswrapper[4689]: I1210 12:50:51.081360 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dtww7" event={"ID":"d9fb148a-68c0-465f-a71c-ccf40bca7940","Type":"ContainerStarted","Data":"b69abb3fd279e99c3b5d6b0cc21f6dd6d759233b8498eded8ce53380b8a6d1f0"} Dec 10 12:50:51 crc kubenswrapper[4689]: I1210 12:50:51.540242 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-9ldlg"] Dec 10 12:50:51 crc kubenswrapper[4689]: I1210 12:50:51.543114 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9ldlg" Dec 10 12:50:51 crc kubenswrapper[4689]: I1210 12:50:51.556093 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-9ldlg"] Dec 10 12:50:51 crc kubenswrapper[4689]: I1210 12:50:51.680215 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nxvlt\" (UniqueName: \"kubernetes.io/projected/5900124f-e59d-4579-a1a9-09fb9f5c31a2-kube-api-access-nxvlt\") pod \"redhat-marketplace-9ldlg\" (UID: \"5900124f-e59d-4579-a1a9-09fb9f5c31a2\") " pod="openshift-marketplace/redhat-marketplace-9ldlg" Dec 10 12:50:51 crc kubenswrapper[4689]: I1210 12:50:51.680440 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5900124f-e59d-4579-a1a9-09fb9f5c31a2-catalog-content\") pod \"redhat-marketplace-9ldlg\" (UID: \"5900124f-e59d-4579-a1a9-09fb9f5c31a2\") " pod="openshift-marketplace/redhat-marketplace-9ldlg" Dec 10 12:50:51 crc kubenswrapper[4689]: I1210 12:50:51.680527 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5900124f-e59d-4579-a1a9-09fb9f5c31a2-utilities\") pod \"redhat-marketplace-9ldlg\" (UID: \"5900124f-e59d-4579-a1a9-09fb9f5c31a2\") " pod="openshift-marketplace/redhat-marketplace-9ldlg" Dec 10 12:50:51 crc kubenswrapper[4689]: I1210 12:50:51.782428 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5900124f-e59d-4579-a1a9-09fb9f5c31a2-catalog-content\") pod \"redhat-marketplace-9ldlg\" (UID: \"5900124f-e59d-4579-a1a9-09fb9f5c31a2\") " pod="openshift-marketplace/redhat-marketplace-9ldlg" Dec 10 12:50:51 crc kubenswrapper[4689]: I1210 12:50:51.782504 
4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5900124f-e59d-4579-a1a9-09fb9f5c31a2-utilities\") pod \"redhat-marketplace-9ldlg\" (UID: \"5900124f-e59d-4579-a1a9-09fb9f5c31a2\") " pod="openshift-marketplace/redhat-marketplace-9ldlg" Dec 10 12:50:51 crc kubenswrapper[4689]: I1210 12:50:51.782595 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nxvlt\" (UniqueName: \"kubernetes.io/projected/5900124f-e59d-4579-a1a9-09fb9f5c31a2-kube-api-access-nxvlt\") pod \"redhat-marketplace-9ldlg\" (UID: \"5900124f-e59d-4579-a1a9-09fb9f5c31a2\") " pod="openshift-marketplace/redhat-marketplace-9ldlg" Dec 10 12:50:51 crc kubenswrapper[4689]: I1210 12:50:51.782982 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5900124f-e59d-4579-a1a9-09fb9f5c31a2-catalog-content\") pod \"redhat-marketplace-9ldlg\" (UID: \"5900124f-e59d-4579-a1a9-09fb9f5c31a2\") " pod="openshift-marketplace/redhat-marketplace-9ldlg" Dec 10 12:50:51 crc kubenswrapper[4689]: I1210 12:50:51.783001 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5900124f-e59d-4579-a1a9-09fb9f5c31a2-utilities\") pod \"redhat-marketplace-9ldlg\" (UID: \"5900124f-e59d-4579-a1a9-09fb9f5c31a2\") " pod="openshift-marketplace/redhat-marketplace-9ldlg" Dec 10 12:50:51 crc kubenswrapper[4689]: I1210 12:50:51.803867 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nxvlt\" (UniqueName: \"kubernetes.io/projected/5900124f-e59d-4579-a1a9-09fb9f5c31a2-kube-api-access-nxvlt\") pod \"redhat-marketplace-9ldlg\" (UID: \"5900124f-e59d-4579-a1a9-09fb9f5c31a2\") " pod="openshift-marketplace/redhat-marketplace-9ldlg" Dec 10 12:50:51 crc kubenswrapper[4689]: I1210 12:50:51.902270 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9ldlg" Dec 10 12:50:52 crc kubenswrapper[4689]: I1210 12:50:52.092544 4689 generic.go:334] "Generic (PLEG): container finished" podID="d9fb148a-68c0-465f-a71c-ccf40bca7940" containerID="6b2cad86efbb2f076264c9b5f999762d1cf92938643cfef027bfa3a2059ca5b2" exitCode=0 Dec 10 12:50:52 crc kubenswrapper[4689]: I1210 12:50:52.092631 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dtww7" event={"ID":"d9fb148a-68c0-465f-a71c-ccf40bca7940","Type":"ContainerDied","Data":"6b2cad86efbb2f076264c9b5f999762d1cf92938643cfef027bfa3a2059ca5b2"} Dec 10 12:50:52 crc kubenswrapper[4689]: I1210 12:50:52.146945 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-9v9k6"] Dec 10 12:50:52 crc kubenswrapper[4689]: I1210 12:50:52.148912 4689 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-9v9k6" Dec 10 12:50:52 crc kubenswrapper[4689]: I1210 12:50:52.158328 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-9v9k6"] Dec 10 12:50:52 crc kubenswrapper[4689]: I1210 12:50:52.201346 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c069c181-462b-4b74-8a81-a395cdff6d66-utilities\") pod \"redhat-operators-9v9k6\" (UID: \"c069c181-462b-4b74-8a81-a395cdff6d66\") " pod="openshift-marketplace/redhat-operators-9v9k6" Dec 10 12:50:52 crc kubenswrapper[4689]: I1210 12:50:52.201421 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9l6gk\" (UniqueName: \"kubernetes.io/projected/c069c181-462b-4b74-8a81-a395cdff6d66-kube-api-access-9l6gk\") pod \"redhat-operators-9v9k6\" (UID: \"c069c181-462b-4b74-8a81-a395cdff6d66\") " pod="openshift-marketplace/redhat-operators-9v9k6" Dec 10 12:50:52 crc kubenswrapper[4689]: I1210 12:50:52.201451 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c069c181-462b-4b74-8a81-a395cdff6d66-catalog-content\") pod \"redhat-operators-9v9k6\" (UID: \"c069c181-462b-4b74-8a81-a395cdff6d66\") " pod="openshift-marketplace/redhat-operators-9v9k6" Dec 10 12:50:52 crc kubenswrapper[4689]: I1210 12:50:52.303641 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c069c181-462b-4b74-8a81-a395cdff6d66-utilities\") pod \"redhat-operators-9v9k6\" (UID: \"c069c181-462b-4b74-8a81-a395cdff6d66\") " pod="openshift-marketplace/redhat-operators-9v9k6" Dec 10 12:50:52 crc kubenswrapper[4689]: I1210 12:50:52.304020 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9l6gk\" (UniqueName: \"kubernetes.io/projected/c069c181-462b-4b74-8a81-a395cdff6d66-kube-api-access-9l6gk\") pod \"redhat-operators-9v9k6\" (UID: \"c069c181-462b-4b74-8a81-a395cdff6d66\") " pod="openshift-marketplace/redhat-operators-9v9k6" Dec 10 12:50:52 crc kubenswrapper[4689]: I1210 12:50:52.304046 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c069c181-462b-4b74-8a81-a395cdff6d66-catalog-content\") pod \"redhat-operators-9v9k6\" (UID: \"c069c181-462b-4b74-8a81-a395cdff6d66\") " pod="openshift-marketplace/redhat-operators-9v9k6" Dec 10 12:50:52 crc kubenswrapper[4689]: I1210 12:50:52.304666 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c069c181-462b-4b74-8a81-a395cdff6d66-catalog-content\") pod \"redhat-operators-9v9k6\" (UID: \"c069c181-462b-4b74-8a81-a395cdff6d66\") " pod="openshift-marketplace/redhat-operators-9v9k6" Dec 10 12:50:52 crc kubenswrapper[4689]: I1210 12:50:52.304900 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c069c181-462b-4b74-8a81-a395cdff6d66-utilities\") pod \"redhat-operators-9v9k6\" (UID: \"c069c181-462b-4b74-8a81-a395cdff6d66\") " pod="openshift-marketplace/redhat-operators-9v9k6" Dec 10 12:50:52 crc kubenswrapper[4689]: I1210 12:50:52.344183 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-9l6gk\" (UniqueName: \"kubernetes.io/projected/c069c181-462b-4b74-8a81-a395cdff6d66-kube-api-access-9l6gk\") pod \"redhat-operators-9v9k6\" (UID: \"c069c181-462b-4b74-8a81-a395cdff6d66\") " pod="openshift-marketplace/redhat-operators-9v9k6" Dec 10 12:50:52 crc kubenswrapper[4689]: I1210 12:50:52.369837 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-9ldlg"] Dec 10 12:50:52 crc kubenswrapper[4689]: I1210 12:50:52.473956 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-9v9k6" Dec 10 12:50:53 crc kubenswrapper[4689]: I1210 12:50:53.103932 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dtww7" event={"ID":"d9fb148a-68c0-465f-a71c-ccf40bca7940","Type":"ContainerStarted","Data":"08e9c6fb4aaa024cdf26454a07af58752a1ece88c336d4e02eff07c610ae9d1a"} Dec 10 12:50:53 crc kubenswrapper[4689]: I1210 12:50:53.109275 4689 generic.go:334] "Generic (PLEG): container finished" podID="5900124f-e59d-4579-a1a9-09fb9f5c31a2" containerID="cdb742f206d5b7affb9d09d11372cbccd17fa8e3f08ac99423c2d9c8ab4a89fb" exitCode=0 Dec 10 12:50:53 crc kubenswrapper[4689]: I1210 12:50:53.109317 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9ldlg" event={"ID":"5900124f-e59d-4579-a1a9-09fb9f5c31a2","Type":"ContainerDied","Data":"cdb742f206d5b7affb9d09d11372cbccd17fa8e3f08ac99423c2d9c8ab4a89fb"} Dec 10 12:50:53 crc kubenswrapper[4689]: I1210 12:50:53.109343 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9ldlg" event={"ID":"5900124f-e59d-4579-a1a9-09fb9f5c31a2","Type":"ContainerStarted","Data":"ae6b00f110096e7017f03e546204f88a5ee8a9e9c0d649a0dfd15bb73cc067a3"} Dec 10 12:50:53 crc kubenswrapper[4689]: I1210 12:50:53.120332 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-9v9k6"] Dec 10 12:50:54 crc kubenswrapper[4689]: I1210 12:50:54.119552 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9ldlg" event={"ID":"5900124f-e59d-4579-a1a9-09fb9f5c31a2","Type":"ContainerStarted","Data":"449db5d92ba5bdbf1e5679c56c09476b85aeeb1a0252b274e763f42695c72a45"} Dec 10 12:50:54 crc kubenswrapper[4689]: I1210 12:50:54.123265 4689 generic.go:334] "Generic (PLEG): container finished" podID="c069c181-462b-4b74-8a81-a395cdff6d66" containerID="f9dcf1cf0201f7b4441827bd0bbb80100dd938efcd6031957e3545e37e698d38" exitCode=0 Dec 10 12:50:54 crc kubenswrapper[4689]: I1210 12:50:54.123341 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9v9k6" event={"ID":"c069c181-462b-4b74-8a81-a395cdff6d66","Type":"ContainerDied","Data":"f9dcf1cf0201f7b4441827bd0bbb80100dd938efcd6031957e3545e37e698d38"} Dec 10 12:50:54 crc kubenswrapper[4689]: I1210 12:50:54.123372 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9v9k6" event={"ID":"c069c181-462b-4b74-8a81-a395cdff6d66","Type":"ContainerStarted","Data":"829d64243ca278ffaa5880b365ac6d8feb66c7dd3e031ffba1544e178df8db69"} Dec 10 12:50:54 crc kubenswrapper[4689]: I1210 12:50:54.127193 4689 generic.go:334] "Generic (PLEG): container finished" podID="d9fb148a-68c0-465f-a71c-ccf40bca7940" containerID="08e9c6fb4aaa024cdf26454a07af58752a1ece88c336d4e02eff07c610ae9d1a" exitCode=0 Dec 10 12:50:54 crc kubenswrapper[4689]: I1210 12:50:54.127223 
4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dtww7" event={"ID":"d9fb148a-68c0-465f-a71c-ccf40bca7940","Type":"ContainerDied","Data":"08e9c6fb4aaa024cdf26454a07af58752a1ece88c336d4e02eff07c610ae9d1a"} Dec 10 12:50:55 crc kubenswrapper[4689]: I1210 12:50:55.137851 4689 generic.go:334] "Generic (PLEG): container finished" podID="5900124f-e59d-4579-a1a9-09fb9f5c31a2" containerID="449db5d92ba5bdbf1e5679c56c09476b85aeeb1a0252b274e763f42695c72a45" exitCode=0 Dec 10 12:50:55 crc kubenswrapper[4689]: I1210 12:50:55.137889 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9ldlg" event={"ID":"5900124f-e59d-4579-a1a9-09fb9f5c31a2","Type":"ContainerDied","Data":"449db5d92ba5bdbf1e5679c56c09476b85aeeb1a0252b274e763f42695c72a45"} Dec 10 12:50:55 crc kubenswrapper[4689]: I1210 12:50:55.150332 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dtww7" event={"ID":"d9fb148a-68c0-465f-a71c-ccf40bca7940","Type":"ContainerStarted","Data":"1eb7af08945cc9b4ef1d618991f058c9d5a0a3fad3a894ecb490196822152bcb"} Dec 10 12:50:55 crc kubenswrapper[4689]: I1210 12:50:55.175023 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-dtww7" podStartSLOduration=3.564148379 podStartE2EDuration="6.175005019s" podCreationTimestamp="2025-12-10 12:50:49 +0000 UTC" firstStartedPulling="2025-12-10 12:50:52.094968139 +0000 UTC m=+2119.883049277" lastFinishedPulling="2025-12-10 12:50:54.705824789 +0000 UTC m=+2122.493905917" observedRunningTime="2025-12-10 12:50:55.174004914 +0000 UTC m=+2122.962086072" watchObservedRunningTime="2025-12-10 12:50:55.175005019 +0000 UTC m=+2122.963086157" Dec 10 12:50:55 crc kubenswrapper[4689]: I1210 12:50:55.907043 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-5b446d88c5-w9qvl_d40a3a14-dc06-4eb8-91fb-3d624202b9bb/cert-manager-controller/0.log" Dec 10 12:50:56 crc kubenswrapper[4689]: I1210 12:50:56.161073 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9v9k6" event={"ID":"c069c181-462b-4b74-8a81-a395cdff6d66","Type":"ContainerStarted","Data":"424d21c7c5672b66d9597c7f39b33ee19042cd7c1269b362013f3648bdaf9a57"} Dec 10 12:50:56 crc kubenswrapper[4689]: I1210 12:50:56.313936 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-5655c58dd6-v7z9w_66bcc49d-7116-4b58-b150-65dc5f00678a/cert-manager-webhook/0.log" Dec 10 12:50:56 crc kubenswrapper[4689]: I1210 12:50:56.332623 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-7f985d654d-bwjp8_9eb2376e-1c8b-4a82-b570-f2f7d8faa957/cert-manager-cainjector/0.log" Dec 10 12:50:59 crc kubenswrapper[4689]: I1210 12:50:59.187160 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9ldlg" event={"ID":"5900124f-e59d-4579-a1a9-09fb9f5c31a2","Type":"ContainerStarted","Data":"e55a2fbcf79bc269c578b0f81951e8fb7fa0a0665bd743785c7f04c91a3af0bb"} Dec 10 12:50:59 crc kubenswrapper[4689]: I1210 12:50:59.205550 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-9ldlg" podStartSLOduration=4.677157509 podStartE2EDuration="8.205532118s" podCreationTimestamp="2025-12-10 12:50:51 +0000 UTC" firstStartedPulling="2025-12-10 12:50:53.110920016 +0000 UTC 
m=+2120.899001154" lastFinishedPulling="2025-12-10 12:50:56.639294625 +0000 UTC m=+2124.427375763" observedRunningTime="2025-12-10 12:50:59.204361789 +0000 UTC m=+2126.992442927" watchObservedRunningTime="2025-12-10 12:50:59.205532118 +0000 UTC m=+2126.993613256" Dec 10 12:51:00 crc kubenswrapper[4689]: I1210 12:51:00.078998 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-dtww7" Dec 10 12:51:00 crc kubenswrapper[4689]: I1210 12:51:00.079317 4689 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-dtww7" Dec 10 12:51:00 crc kubenswrapper[4689]: I1210 12:51:00.137514 4689 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-dtww7" Dec 10 12:51:00 crc kubenswrapper[4689]: I1210 12:51:00.198845 4689 generic.go:334] "Generic (PLEG): container finished" podID="c069c181-462b-4b74-8a81-a395cdff6d66" containerID="424d21c7c5672b66d9597c7f39b33ee19042cd7c1269b362013f3648bdaf9a57" exitCode=0 Dec 10 12:51:00 crc kubenswrapper[4689]: I1210 12:51:00.198883 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9v9k6" event={"ID":"c069c181-462b-4b74-8a81-a395cdff6d66","Type":"ContainerDied","Data":"424d21c7c5672b66d9597c7f39b33ee19042cd7c1269b362013f3648bdaf9a57"} Dec 10 12:51:00 crc kubenswrapper[4689]: I1210 12:51:00.338446 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-dtww7" Dec 10 12:51:01 crc kubenswrapper[4689]: I1210 12:51:01.903373 4689 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-9ldlg" Dec 10 12:51:01 crc kubenswrapper[4689]: I1210 12:51:01.903666 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-9ldlg" Dec 10 12:51:01 crc kubenswrapper[4689]: I1210 12:51:01.955045 4689 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-9ldlg" Dec 10 12:51:02 crc kubenswrapper[4689]: I1210 12:51:02.529822 4689 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-dtww7"] Dec 10 12:51:02 crc kubenswrapper[4689]: I1210 12:51:02.530849 4689 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-dtww7" podUID="d9fb148a-68c0-465f-a71c-ccf40bca7940" containerName="registry-server" containerID="cri-o://1eb7af08945cc9b4ef1d618991f058c9d5a0a3fad3a894ecb490196822152bcb" gracePeriod=2 Dec 10 12:51:03 crc kubenswrapper[4689]: I1210 12:51:03.042266 4689 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-dtww7" Dec 10 12:51:03 crc kubenswrapper[4689]: I1210 12:51:03.225304 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9v9k6" event={"ID":"c069c181-462b-4b74-8a81-a395cdff6d66","Type":"ContainerStarted","Data":"da5b94d19ba260b791a1392165eac75ee4849bad8c873b3745b127fea7f8a08d"} Dec 10 12:51:03 crc kubenswrapper[4689]: I1210 12:51:03.227521 4689 generic.go:334] "Generic (PLEG): container finished" podID="d9fb148a-68c0-465f-a71c-ccf40bca7940" containerID="1eb7af08945cc9b4ef1d618991f058c9d5a0a3fad3a894ecb490196822152bcb" exitCode=0 Dec 10 12:51:03 crc kubenswrapper[4689]: I1210 12:51:03.227548 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dtww7" event={"ID":"d9fb148a-68c0-465f-a71c-ccf40bca7940","Type":"ContainerDied","Data":"1eb7af08945cc9b4ef1d618991f058c9d5a0a3fad3a894ecb490196822152bcb"} Dec 10 12:51:03 crc kubenswrapper[4689]: I1210 12:51:03.227569 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dtww7" event={"ID":"d9fb148a-68c0-465f-a71c-ccf40bca7940","Type":"ContainerDied","Data":"b69abb3fd279e99c3b5d6b0cc21f6dd6d759233b8498eded8ce53380b8a6d1f0"} Dec 10 12:51:03 crc kubenswrapper[4689]: I1210 12:51:03.227590 4689 scope.go:117] "RemoveContainer" containerID="1eb7af08945cc9b4ef1d618991f058c9d5a0a3fad3a894ecb490196822152bcb" Dec 10 12:51:03 crc kubenswrapper[4689]: I1210 12:51:03.227619 4689 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-dtww7" Dec 10 12:51:03 crc kubenswrapper[4689]: I1210 12:51:03.229941 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k769l\" (UniqueName: \"kubernetes.io/projected/d9fb148a-68c0-465f-a71c-ccf40bca7940-kube-api-access-k769l\") pod \"d9fb148a-68c0-465f-a71c-ccf40bca7940\" (UID: \"d9fb148a-68c0-465f-a71c-ccf40bca7940\") " Dec 10 12:51:03 crc kubenswrapper[4689]: I1210 12:51:03.230152 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d9fb148a-68c0-465f-a71c-ccf40bca7940-utilities\") pod \"d9fb148a-68c0-465f-a71c-ccf40bca7940\" (UID: \"d9fb148a-68c0-465f-a71c-ccf40bca7940\") " Dec 10 12:51:03 crc kubenswrapper[4689]: I1210 12:51:03.230240 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d9fb148a-68c0-465f-a71c-ccf40bca7940-catalog-content\") pod \"d9fb148a-68c0-465f-a71c-ccf40bca7940\" (UID: \"d9fb148a-68c0-465f-a71c-ccf40bca7940\") " Dec 10 12:51:03 crc kubenswrapper[4689]: I1210 12:51:03.230912 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d9fb148a-68c0-465f-a71c-ccf40bca7940-utilities" (OuterVolumeSpecName: "utilities") pod "d9fb148a-68c0-465f-a71c-ccf40bca7940" (UID: "d9fb148a-68c0-465f-a71c-ccf40bca7940"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 10 12:51:03 crc kubenswrapper[4689]: I1210 12:51:03.231483 4689 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d9fb148a-68c0-465f-a71c-ccf40bca7940-utilities\") on node \"crc\" DevicePath \"\"" Dec 10 12:51:03 crc kubenswrapper[4689]: I1210 12:51:03.236112 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d9fb148a-68c0-465f-a71c-ccf40bca7940-kube-api-access-k769l" (OuterVolumeSpecName: "kube-api-access-k769l") pod "d9fb148a-68c0-465f-a71c-ccf40bca7940" (UID: "d9fb148a-68c0-465f-a71c-ccf40bca7940"). InnerVolumeSpecName "kube-api-access-k769l". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 12:51:03 crc kubenswrapper[4689]: I1210 12:51:03.253699 4689 scope.go:117] "RemoveContainer" containerID="08e9c6fb4aaa024cdf26454a07af58752a1ece88c336d4e02eff07c610ae9d1a" Dec 10 12:51:03 crc kubenswrapper[4689]: I1210 12:51:03.262619 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-9v9k6" podStartSLOduration=2.890018521 podStartE2EDuration="11.262597488s" podCreationTimestamp="2025-12-10 12:50:52 +0000 UTC" firstStartedPulling="2025-12-10 12:50:54.124799292 +0000 UTC m=+2121.912880430" lastFinishedPulling="2025-12-10 12:51:02.497378259 +0000 UTC m=+2130.285459397" observedRunningTime="2025-12-10 12:51:03.250121373 +0000 UTC m=+2131.038202521" watchObservedRunningTime="2025-12-10 12:51:03.262597488 +0000 UTC m=+2131.050678626" Dec 10 12:51:03 crc kubenswrapper[4689]: I1210 12:51:03.303796 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d9fb148a-68c0-465f-a71c-ccf40bca7940-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d9fb148a-68c0-465f-a71c-ccf40bca7940" (UID: "d9fb148a-68c0-465f-a71c-ccf40bca7940"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 10 12:51:03 crc kubenswrapper[4689]: I1210 12:51:03.304546 4689 scope.go:117] "RemoveContainer" containerID="6b2cad86efbb2f076264c9b5f999762d1cf92938643cfef027bfa3a2059ca5b2" Dec 10 12:51:03 crc kubenswrapper[4689]: I1210 12:51:03.323228 4689 scope.go:117] "RemoveContainer" containerID="1eb7af08945cc9b4ef1d618991f058c9d5a0a3fad3a894ecb490196822152bcb" Dec 10 12:51:03 crc kubenswrapper[4689]: E1210 12:51:03.323823 4689 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1eb7af08945cc9b4ef1d618991f058c9d5a0a3fad3a894ecb490196822152bcb\": container with ID starting with 1eb7af08945cc9b4ef1d618991f058c9d5a0a3fad3a894ecb490196822152bcb not found: ID does not exist" containerID="1eb7af08945cc9b4ef1d618991f058c9d5a0a3fad3a894ecb490196822152bcb" Dec 10 12:51:03 crc kubenswrapper[4689]: I1210 12:51:03.323907 4689 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1eb7af08945cc9b4ef1d618991f058c9d5a0a3fad3a894ecb490196822152bcb"} err="failed to get container status \"1eb7af08945cc9b4ef1d618991f058c9d5a0a3fad3a894ecb490196822152bcb\": rpc error: code = NotFound desc = could not find container \"1eb7af08945cc9b4ef1d618991f058c9d5a0a3fad3a894ecb490196822152bcb\": container with ID starting with 1eb7af08945cc9b4ef1d618991f058c9d5a0a3fad3a894ecb490196822152bcb not found: ID does not exist" Dec 10 12:51:03 crc kubenswrapper[4689]: I1210 12:51:03.324001 4689 scope.go:117] "RemoveContainer" containerID="08e9c6fb4aaa024cdf26454a07af58752a1ece88c336d4e02eff07c610ae9d1a" Dec 10 12:51:03 crc kubenswrapper[4689]: E1210 12:51:03.324572 4689 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"08e9c6fb4aaa024cdf26454a07af58752a1ece88c336d4e02eff07c610ae9d1a\": container with ID starting with 08e9c6fb4aaa024cdf26454a07af58752a1ece88c336d4e02eff07c610ae9d1a not found: ID does not exist" containerID="08e9c6fb4aaa024cdf26454a07af58752a1ece88c336d4e02eff07c610ae9d1a" Dec 10 12:51:03 crc kubenswrapper[4689]: I1210 12:51:03.324683 4689 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"08e9c6fb4aaa024cdf26454a07af58752a1ece88c336d4e02eff07c610ae9d1a"} err="failed to get container status \"08e9c6fb4aaa024cdf26454a07af58752a1ece88c336d4e02eff07c610ae9d1a\": rpc error: code = NotFound desc = could not find container \"08e9c6fb4aaa024cdf26454a07af58752a1ece88c336d4e02eff07c610ae9d1a\": container with ID starting with 08e9c6fb4aaa024cdf26454a07af58752a1ece88c336d4e02eff07c610ae9d1a not found: ID does not exist" Dec 10 12:51:03 crc kubenswrapper[4689]: I1210 12:51:03.324751 4689 scope.go:117] "RemoveContainer" containerID="6b2cad86efbb2f076264c9b5f999762d1cf92938643cfef027bfa3a2059ca5b2" Dec 10 12:51:03 crc kubenswrapper[4689]: E1210 12:51:03.325012 4689 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6b2cad86efbb2f076264c9b5f999762d1cf92938643cfef027bfa3a2059ca5b2\": container with ID starting with 6b2cad86efbb2f076264c9b5f999762d1cf92938643cfef027bfa3a2059ca5b2 not found: ID does not exist" containerID="6b2cad86efbb2f076264c9b5f999762d1cf92938643cfef027bfa3a2059ca5b2" Dec 10 12:51:03 crc kubenswrapper[4689]: I1210 12:51:03.325099 4689 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"6b2cad86efbb2f076264c9b5f999762d1cf92938643cfef027bfa3a2059ca5b2"} err="failed to get container status \"6b2cad86efbb2f076264c9b5f999762d1cf92938643cfef027bfa3a2059ca5b2\": rpc error: code = NotFound desc = could not find container \"6b2cad86efbb2f076264c9b5f999762d1cf92938643cfef027bfa3a2059ca5b2\": container with ID starting with 6b2cad86efbb2f076264c9b5f999762d1cf92938643cfef027bfa3a2059ca5b2 not found: ID does not exist" Dec 10 12:51:03 crc kubenswrapper[4689]: I1210 12:51:03.333491 4689 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k769l\" (UniqueName: \"kubernetes.io/projected/d9fb148a-68c0-465f-a71c-ccf40bca7940-kube-api-access-k769l\") on node \"crc\" DevicePath \"\"" Dec 10 12:51:03 crc kubenswrapper[4689]: I1210 12:51:03.333542 4689 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d9fb148a-68c0-465f-a71c-ccf40bca7940-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 10 12:51:03 crc kubenswrapper[4689]: I1210 12:51:03.556713 4689 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-dtww7"] Dec 10 12:51:03 crc kubenswrapper[4689]: I1210 12:51:03.588064 4689 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-dtww7"] Dec 10 12:51:04 crc kubenswrapper[4689]: I1210 12:51:04.507893 4689 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d9fb148a-68c0-465f-a71c-ccf40bca7940" path="/var/lib/kubelet/pods/d9fb148a-68c0-465f-a71c-ccf40bca7940/volumes" Dec 10 12:51:10 crc kubenswrapper[4689]: I1210 12:51:10.772157 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-7fbb5f6569-97b9n_b3a78d11-7f23-4c35-aee3-0b2a7a19a041/nmstate-console-plugin/0.log" Dec 10 12:51:10 crc kubenswrapper[4689]: I1210 12:51:10.904711 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-mvj78_7f919f52-0709-4db5-8158-8be0da507d54/nmstate-handler/0.log" Dec 10 12:51:10 crc kubenswrapper[4689]: I1210 12:51:10.967466 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-7f946cbc9-sf9vn_dfc59f59-cc82-4d61-ac2b-e9dbf2b0f5fd/kube-rbac-proxy/0.log" Dec 10 12:51:11 crc kubenswrapper[4689]: I1210 12:51:11.008791 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-7f946cbc9-sf9vn_dfc59f59-cc82-4d61-ac2b-e9dbf2b0f5fd/nmstate-metrics/0.log" Dec 10 12:51:11 crc kubenswrapper[4689]: I1210 12:51:11.170206 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-5b5b58f5c8-5ffhx_2c83e0d9-b9ea-4b8a-9f11-7921eff53640/nmstate-operator/0.log" Dec 10 12:51:11 crc kubenswrapper[4689]: I1210 12:51:11.233375 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-5f6d4c5ccb-wghvl_6b4e153d-87c2-4ebb-b47c-77f12331ab68/nmstate-webhook/0.log" Dec 10 12:51:11 crc kubenswrapper[4689]: I1210 12:51:11.962758 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-9ldlg" Dec 10 12:51:12 crc kubenswrapper[4689]: I1210 12:51:12.021250 4689 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-9ldlg"] Dec 10 12:51:12 crc kubenswrapper[4689]: I1210 12:51:12.300941 4689 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-marketplace/redhat-marketplace-9ldlg" podUID="5900124f-e59d-4579-a1a9-09fb9f5c31a2" containerName="registry-server" containerID="cri-o://e55a2fbcf79bc269c578b0f81951e8fb7fa0a0665bd743785c7f04c91a3af0bb" gracePeriod=2 Dec 10 12:51:12 crc kubenswrapper[4689]: I1210 12:51:12.474507 4689 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-9v9k6" Dec 10 12:51:12 crc kubenswrapper[4689]: I1210 12:51:12.474738 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-9v9k6" Dec 10 12:51:12 crc kubenswrapper[4689]: I1210 12:51:12.551224 4689 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-9v9k6" Dec 10 12:51:12 crc kubenswrapper[4689]: I1210 12:51:12.837893 4689 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9ldlg" Dec 10 12:51:13 crc kubenswrapper[4689]: I1210 12:51:13.009941 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5900124f-e59d-4579-a1a9-09fb9f5c31a2-utilities\") pod \"5900124f-e59d-4579-a1a9-09fb9f5c31a2\" (UID: \"5900124f-e59d-4579-a1a9-09fb9f5c31a2\") " Dec 10 12:51:13 crc kubenswrapper[4689]: I1210 12:51:13.010045 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5900124f-e59d-4579-a1a9-09fb9f5c31a2-catalog-content\") pod \"5900124f-e59d-4579-a1a9-09fb9f5c31a2\" (UID: \"5900124f-e59d-4579-a1a9-09fb9f5c31a2\") " Dec 10 12:51:13 crc kubenswrapper[4689]: I1210 12:51:13.010076 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nxvlt\" (UniqueName: \"kubernetes.io/projected/5900124f-e59d-4579-a1a9-09fb9f5c31a2-kube-api-access-nxvlt\") pod \"5900124f-e59d-4579-a1a9-09fb9f5c31a2\" (UID: \"5900124f-e59d-4579-a1a9-09fb9f5c31a2\") " Dec 10 12:51:13 crc kubenswrapper[4689]: I1210 12:51:13.011962 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5900124f-e59d-4579-a1a9-09fb9f5c31a2-utilities" (OuterVolumeSpecName: "utilities") pod "5900124f-e59d-4579-a1a9-09fb9f5c31a2" (UID: "5900124f-e59d-4579-a1a9-09fb9f5c31a2"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 10 12:51:13 crc kubenswrapper[4689]: I1210 12:51:13.019775 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5900124f-e59d-4579-a1a9-09fb9f5c31a2-kube-api-access-nxvlt" (OuterVolumeSpecName: "kube-api-access-nxvlt") pod "5900124f-e59d-4579-a1a9-09fb9f5c31a2" (UID: "5900124f-e59d-4579-a1a9-09fb9f5c31a2"). InnerVolumeSpecName "kube-api-access-nxvlt". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 12:51:13 crc kubenswrapper[4689]: I1210 12:51:13.036291 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5900124f-e59d-4579-a1a9-09fb9f5c31a2-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5900124f-e59d-4579-a1a9-09fb9f5c31a2" (UID: "5900124f-e59d-4579-a1a9-09fb9f5c31a2"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 10 12:51:13 crc kubenswrapper[4689]: I1210 12:51:13.112712 4689 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5900124f-e59d-4579-a1a9-09fb9f5c31a2-utilities\") on node \"crc\" DevicePath \"\"" Dec 10 12:51:13 crc kubenswrapper[4689]: I1210 12:51:13.112761 4689 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5900124f-e59d-4579-a1a9-09fb9f5c31a2-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 10 12:51:13 crc kubenswrapper[4689]: I1210 12:51:13.112776 4689 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nxvlt\" (UniqueName: \"kubernetes.io/projected/5900124f-e59d-4579-a1a9-09fb9f5c31a2-kube-api-access-nxvlt\") on node \"crc\" DevicePath \"\"" Dec 10 12:51:13 crc kubenswrapper[4689]: I1210 12:51:13.309874 4689 generic.go:334] "Generic (PLEG): container finished" podID="5900124f-e59d-4579-a1a9-09fb9f5c31a2" containerID="e55a2fbcf79bc269c578b0f81951e8fb7fa0a0665bd743785c7f04c91a3af0bb" exitCode=0 Dec 10 12:51:13 crc kubenswrapper[4689]: I1210 12:51:13.309929 4689 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9ldlg" Dec 10 12:51:13 crc kubenswrapper[4689]: I1210 12:51:13.309965 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9ldlg" event={"ID":"5900124f-e59d-4579-a1a9-09fb9f5c31a2","Type":"ContainerDied","Data":"e55a2fbcf79bc269c578b0f81951e8fb7fa0a0665bd743785c7f04c91a3af0bb"} Dec 10 12:51:13 crc kubenswrapper[4689]: I1210 12:51:13.310046 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9ldlg" event={"ID":"5900124f-e59d-4579-a1a9-09fb9f5c31a2","Type":"ContainerDied","Data":"ae6b00f110096e7017f03e546204f88a5ee8a9e9c0d649a0dfd15bb73cc067a3"} Dec 10 12:51:13 crc kubenswrapper[4689]: I1210 12:51:13.310071 4689 scope.go:117] "RemoveContainer" containerID="e55a2fbcf79bc269c578b0f81951e8fb7fa0a0665bd743785c7f04c91a3af0bb" Dec 10 12:51:13 crc kubenswrapper[4689]: I1210 12:51:13.331607 4689 scope.go:117] "RemoveContainer" containerID="449db5d92ba5bdbf1e5679c56c09476b85aeeb1a0252b274e763f42695c72a45" Dec 10 12:51:13 crc kubenswrapper[4689]: I1210 12:51:13.356172 4689 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-9ldlg"] Dec 10 12:51:13 crc kubenswrapper[4689]: I1210 12:51:13.370988 4689 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-9ldlg"] Dec 10 12:51:13 crc kubenswrapper[4689]: I1210 12:51:13.371427 4689 scope.go:117] "RemoveContainer" containerID="cdb742f206d5b7affb9d09d11372cbccd17fa8e3f08ac99423c2d9c8ab4a89fb" Dec 10 12:51:13 crc kubenswrapper[4689]: I1210 12:51:13.373801 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-9v9k6" Dec 10 12:51:13 crc kubenswrapper[4689]: I1210 12:51:13.424242 4689 scope.go:117] "RemoveContainer" containerID="e55a2fbcf79bc269c578b0f81951e8fb7fa0a0665bd743785c7f04c91a3af0bb" Dec 10 12:51:13 crc kubenswrapper[4689]: E1210 12:51:13.425372 4689 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e55a2fbcf79bc269c578b0f81951e8fb7fa0a0665bd743785c7f04c91a3af0bb\": container with ID starting with 
e55a2fbcf79bc269c578b0f81951e8fb7fa0a0665bd743785c7f04c91a3af0bb not found: ID does not exist" containerID="e55a2fbcf79bc269c578b0f81951e8fb7fa0a0665bd743785c7f04c91a3af0bb" Dec 10 12:51:13 crc kubenswrapper[4689]: I1210 12:51:13.425409 4689 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e55a2fbcf79bc269c578b0f81951e8fb7fa0a0665bd743785c7f04c91a3af0bb"} err="failed to get container status \"e55a2fbcf79bc269c578b0f81951e8fb7fa0a0665bd743785c7f04c91a3af0bb\": rpc error: code = NotFound desc = could not find container \"e55a2fbcf79bc269c578b0f81951e8fb7fa0a0665bd743785c7f04c91a3af0bb\": container with ID starting with e55a2fbcf79bc269c578b0f81951e8fb7fa0a0665bd743785c7f04c91a3af0bb not found: ID does not exist" Dec 10 12:51:13 crc kubenswrapper[4689]: I1210 12:51:13.425431 4689 scope.go:117] "RemoveContainer" containerID="449db5d92ba5bdbf1e5679c56c09476b85aeeb1a0252b274e763f42695c72a45" Dec 10 12:51:13 crc kubenswrapper[4689]: E1210 12:51:13.425689 4689 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"449db5d92ba5bdbf1e5679c56c09476b85aeeb1a0252b274e763f42695c72a45\": container with ID starting with 449db5d92ba5bdbf1e5679c56c09476b85aeeb1a0252b274e763f42695c72a45 not found: ID does not exist" containerID="449db5d92ba5bdbf1e5679c56c09476b85aeeb1a0252b274e763f42695c72a45" Dec 10 12:51:13 crc kubenswrapper[4689]: I1210 12:51:13.425841 4689 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"449db5d92ba5bdbf1e5679c56c09476b85aeeb1a0252b274e763f42695c72a45"} err="failed to get container status \"449db5d92ba5bdbf1e5679c56c09476b85aeeb1a0252b274e763f42695c72a45\": rpc error: code = NotFound desc = could not find container \"449db5d92ba5bdbf1e5679c56c09476b85aeeb1a0252b274e763f42695c72a45\": container with ID starting with 449db5d92ba5bdbf1e5679c56c09476b85aeeb1a0252b274e763f42695c72a45 not found: ID does not exist" Dec 10 12:51:13 crc kubenswrapper[4689]: I1210 12:51:13.425875 4689 scope.go:117] "RemoveContainer" containerID="cdb742f206d5b7affb9d09d11372cbccd17fa8e3f08ac99423c2d9c8ab4a89fb" Dec 10 12:51:13 crc kubenswrapper[4689]: E1210 12:51:13.426235 4689 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cdb742f206d5b7affb9d09d11372cbccd17fa8e3f08ac99423c2d9c8ab4a89fb\": container with ID starting with cdb742f206d5b7affb9d09d11372cbccd17fa8e3f08ac99423c2d9c8ab4a89fb not found: ID does not exist" containerID="cdb742f206d5b7affb9d09d11372cbccd17fa8e3f08ac99423c2d9c8ab4a89fb" Dec 10 12:51:13 crc kubenswrapper[4689]: I1210 12:51:13.426256 4689 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cdb742f206d5b7affb9d09d11372cbccd17fa8e3f08ac99423c2d9c8ab4a89fb"} err="failed to get container status \"cdb742f206d5b7affb9d09d11372cbccd17fa8e3f08ac99423c2d9c8ab4a89fb\": rpc error: code = NotFound desc = could not find container \"cdb742f206d5b7affb9d09d11372cbccd17fa8e3f08ac99423c2d9c8ab4a89fb\": container with ID starting with cdb742f206d5b7affb9d09d11372cbccd17fa8e3f08ac99423c2d9c8ab4a89fb not found: ID does not exist" Dec 10 12:51:14 crc kubenswrapper[4689]: I1210 12:51:14.200148 4689 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-9v9k6"] Dec 10 12:51:14 crc kubenswrapper[4689]: I1210 12:51:14.507707 4689 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="5900124f-e59d-4579-a1a9-09fb9f5c31a2" path="/var/lib/kubelet/pods/5900124f-e59d-4579-a1a9-09fb9f5c31a2/volumes" Dec 10 12:51:15 crc kubenswrapper[4689]: I1210 12:51:15.327385 4689 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-9v9k6" podUID="c069c181-462b-4b74-8a81-a395cdff6d66" containerName="registry-server" containerID="cri-o://da5b94d19ba260b791a1392165eac75ee4849bad8c873b3745b127fea7f8a08d" gracePeriod=2 Dec 10 12:51:16 crc kubenswrapper[4689]: I1210 12:51:16.343508 4689 generic.go:334] "Generic (PLEG): container finished" podID="c069c181-462b-4b74-8a81-a395cdff6d66" containerID="da5b94d19ba260b791a1392165eac75ee4849bad8c873b3745b127fea7f8a08d" exitCode=0 Dec 10 12:51:16 crc kubenswrapper[4689]: I1210 12:51:16.343575 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9v9k6" event={"ID":"c069c181-462b-4b74-8a81-a395cdff6d66","Type":"ContainerDied","Data":"da5b94d19ba260b791a1392165eac75ee4849bad8c873b3745b127fea7f8a08d"} Dec 10 12:51:16 crc kubenswrapper[4689]: I1210 12:51:16.344314 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9v9k6" event={"ID":"c069c181-462b-4b74-8a81-a395cdff6d66","Type":"ContainerDied","Data":"829d64243ca278ffaa5880b365ac6d8feb66c7dd3e031ffba1544e178df8db69"} Dec 10 12:51:16 crc kubenswrapper[4689]: I1210 12:51:16.344333 4689 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="829d64243ca278ffaa5880b365ac6d8feb66c7dd3e031ffba1544e178df8db69" Dec 10 12:51:16 crc kubenswrapper[4689]: I1210 12:51:16.346762 4689 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-9v9k6" Dec 10 12:51:16 crc kubenswrapper[4689]: I1210 12:51:16.476682 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9l6gk\" (UniqueName: \"kubernetes.io/projected/c069c181-462b-4b74-8a81-a395cdff6d66-kube-api-access-9l6gk\") pod \"c069c181-462b-4b74-8a81-a395cdff6d66\" (UID: \"c069c181-462b-4b74-8a81-a395cdff6d66\") " Dec 10 12:51:16 crc kubenswrapper[4689]: I1210 12:51:16.476879 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c069c181-462b-4b74-8a81-a395cdff6d66-utilities\") pod \"c069c181-462b-4b74-8a81-a395cdff6d66\" (UID: \"c069c181-462b-4b74-8a81-a395cdff6d66\") " Dec 10 12:51:16 crc kubenswrapper[4689]: I1210 12:51:16.477041 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c069c181-462b-4b74-8a81-a395cdff6d66-catalog-content\") pod \"c069c181-462b-4b74-8a81-a395cdff6d66\" (UID: \"c069c181-462b-4b74-8a81-a395cdff6d66\") " Dec 10 12:51:16 crc kubenswrapper[4689]: I1210 12:51:16.481049 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c069c181-462b-4b74-8a81-a395cdff6d66-utilities" (OuterVolumeSpecName: "utilities") pod "c069c181-462b-4b74-8a81-a395cdff6d66" (UID: "c069c181-462b-4b74-8a81-a395cdff6d66"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 10 12:51:16 crc kubenswrapper[4689]: I1210 12:51:16.501210 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c069c181-462b-4b74-8a81-a395cdff6d66-kube-api-access-9l6gk" (OuterVolumeSpecName: "kube-api-access-9l6gk") pod "c069c181-462b-4b74-8a81-a395cdff6d66" (UID: "c069c181-462b-4b74-8a81-a395cdff6d66"). InnerVolumeSpecName "kube-api-access-9l6gk". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 12:51:16 crc kubenswrapper[4689]: I1210 12:51:16.580167 4689 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9l6gk\" (UniqueName: \"kubernetes.io/projected/c069c181-462b-4b74-8a81-a395cdff6d66-kube-api-access-9l6gk\") on node \"crc\" DevicePath \"\"" Dec 10 12:51:16 crc kubenswrapper[4689]: I1210 12:51:16.580202 4689 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c069c181-462b-4b74-8a81-a395cdff6d66-utilities\") on node \"crc\" DevicePath \"\"" Dec 10 12:51:16 crc kubenswrapper[4689]: I1210 12:51:16.611673 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c069c181-462b-4b74-8a81-a395cdff6d66-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c069c181-462b-4b74-8a81-a395cdff6d66" (UID: "c069c181-462b-4b74-8a81-a395cdff6d66"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 10 12:51:16 crc kubenswrapper[4689]: I1210 12:51:16.683083 4689 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c069c181-462b-4b74-8a81-a395cdff6d66-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 10 12:51:17 crc kubenswrapper[4689]: I1210 12:51:17.351777 4689 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-9v9k6" Dec 10 12:51:17 crc kubenswrapper[4689]: I1210 12:51:17.385614 4689 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-9v9k6"] Dec 10 12:51:17 crc kubenswrapper[4689]: I1210 12:51:17.393444 4689 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-9v9k6"] Dec 10 12:51:18 crc kubenswrapper[4689]: I1210 12:51:18.512747 4689 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c069c181-462b-4b74-8a81-a395cdff6d66" path="/var/lib/kubelet/pods/c069c181-462b-4b74-8a81-a395cdff6d66/volumes" Dec 10 12:51:25 crc kubenswrapper[4689]: I1210 12:51:25.488826 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-f8648f98b-fw4x8_1f684039-0afb-44b1-a206-83d818ab3f9b/kube-rbac-proxy/0.log" Dec 10 12:51:25 crc kubenswrapper[4689]: I1210 12:51:25.589345 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-f8648f98b-fw4x8_1f684039-0afb-44b1-a206-83d818ab3f9b/controller/0.log" Dec 10 12:51:25 crc kubenswrapper[4689]: I1210 12:51:25.665381 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9t496_3b6f82d6-ef74-4c0e-99ab-426f3b10334f/cp-frr-files/0.log" Dec 10 12:51:25 crc kubenswrapper[4689]: I1210 12:51:25.793284 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9t496_3b6f82d6-ef74-4c0e-99ab-426f3b10334f/cp-frr-files/0.log" Dec 10 12:51:25 crc kubenswrapper[4689]: I1210 12:51:25.801272 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9t496_3b6f82d6-ef74-4c0e-99ab-426f3b10334f/cp-reloader/0.log" Dec 10 12:51:25 crc kubenswrapper[4689]: I1210 12:51:25.847677 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9t496_3b6f82d6-ef74-4c0e-99ab-426f3b10334f/cp-metrics/0.log" Dec 10 12:51:25 crc kubenswrapper[4689]: I1210 12:51:25.860615 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9t496_3b6f82d6-ef74-4c0e-99ab-426f3b10334f/cp-reloader/0.log" Dec 10 12:51:26 crc kubenswrapper[4689]: I1210 12:51:26.072952 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9t496_3b6f82d6-ef74-4c0e-99ab-426f3b10334f/cp-reloader/0.log" Dec 10 12:51:26 crc kubenswrapper[4689]: I1210 12:51:26.086302 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9t496_3b6f82d6-ef74-4c0e-99ab-426f3b10334f/cp-metrics/0.log" Dec 10 12:51:26 crc kubenswrapper[4689]: I1210 12:51:26.092040 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9t496_3b6f82d6-ef74-4c0e-99ab-426f3b10334f/cp-metrics/0.log" Dec 10 12:51:26 crc kubenswrapper[4689]: I1210 12:51:26.092447 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9t496_3b6f82d6-ef74-4c0e-99ab-426f3b10334f/cp-frr-files/0.log" Dec 10 12:51:26 crc kubenswrapper[4689]: I1210 12:51:26.270354 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9t496_3b6f82d6-ef74-4c0e-99ab-426f3b10334f/cp-metrics/0.log" Dec 10 12:51:26 crc kubenswrapper[4689]: I1210 12:51:26.285799 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9t496_3b6f82d6-ef74-4c0e-99ab-426f3b10334f/controller/0.log" Dec 10 12:51:26 crc kubenswrapper[4689]: I1210 12:51:26.290863 4689 log.go:25] "Finished parsing log 
file" path="/var/log/pods/metallb-system_frr-k8s-9t496_3b6f82d6-ef74-4c0e-99ab-426f3b10334f/cp-frr-files/0.log" Dec 10 12:51:26 crc kubenswrapper[4689]: I1210 12:51:26.292460 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9t496_3b6f82d6-ef74-4c0e-99ab-426f3b10334f/cp-reloader/0.log" Dec 10 12:51:26 crc kubenswrapper[4689]: I1210 12:51:26.495856 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9t496_3b6f82d6-ef74-4c0e-99ab-426f3b10334f/frr-metrics/0.log" Dec 10 12:51:26 crc kubenswrapper[4689]: I1210 12:51:26.500073 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9t496_3b6f82d6-ef74-4c0e-99ab-426f3b10334f/kube-rbac-proxy/0.log" Dec 10 12:51:26 crc kubenswrapper[4689]: I1210 12:51:26.560542 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9t496_3b6f82d6-ef74-4c0e-99ab-426f3b10334f/kube-rbac-proxy-frr/0.log" Dec 10 12:51:26 crc kubenswrapper[4689]: I1210 12:51:26.697434 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9t496_3b6f82d6-ef74-4c0e-99ab-426f3b10334f/reloader/0.log" Dec 10 12:51:26 crc kubenswrapper[4689]: I1210 12:51:26.838916 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-7fcb986d4-9pghn_da40a320-7203-4d3b-bc0e-c9eb09a07898/frr-k8s-webhook-server/0.log" Dec 10 12:51:26 crc kubenswrapper[4689]: I1210 12:51:26.998340 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-69d5767795-46q9h_5c50fea7-15a9-4027-9f2f-14c3744c7533/manager/0.log" Dec 10 12:51:27 crc kubenswrapper[4689]: I1210 12:51:27.130125 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-5d9dc5cf54-24c7k_9d6f0612-90c7-4c71-a751-850ab873444a/webhook-server/0.log" Dec 10 12:51:27 crc kubenswrapper[4689]: I1210 12:51:27.226824 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-xx5ts_b8a0c114-08fb-4583-a9eb-09333023b0ed/kube-rbac-proxy/0.log" Dec 10 12:51:27 crc kubenswrapper[4689]: I1210 12:51:27.318806 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9t496_3b6f82d6-ef74-4c0e-99ab-426f3b10334f/frr/0.log" Dec 10 12:51:27 crc kubenswrapper[4689]: I1210 12:51:27.657552 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-xx5ts_b8a0c114-08fb-4583-a9eb-09333023b0ed/speaker/0.log" Dec 10 12:51:37 crc kubenswrapper[4689]: I1210 12:51:37.167417 4689 patch_prober.go:28] interesting pod/machine-config-daemon-db6zk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 10 12:51:37 crc kubenswrapper[4689]: I1210 12:51:37.168088 4689 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-db6zk" podUID="a41ebdcd-910f-4669-992d-296e1a92162b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 10 12:51:40 crc kubenswrapper[4689]: I1210 12:51:40.293077 4689 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212flfxww_0d799972-d396-4dd5-8186-87f0a51ea145/util/0.log" Dec 10 12:51:40 crc kubenswrapper[4689]: I1210 12:51:40.664845 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212flfxww_0d799972-d396-4dd5-8186-87f0a51ea145/util/0.log" Dec 10 12:51:40 crc kubenswrapper[4689]: I1210 12:51:40.751881 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212flfxww_0d799972-d396-4dd5-8186-87f0a51ea145/pull/0.log" Dec 10 12:51:40 crc kubenswrapper[4689]: I1210 12:51:40.752071 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212flfxww_0d799972-d396-4dd5-8186-87f0a51ea145/pull/0.log" Dec 10 12:51:40 crc kubenswrapper[4689]: I1210 12:51:40.828204 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212flfxww_0d799972-d396-4dd5-8186-87f0a51ea145/util/0.log" Dec 10 12:51:40 crc kubenswrapper[4689]: I1210 12:51:40.892961 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212flfxww_0d799972-d396-4dd5-8186-87f0a51ea145/pull/0.log" Dec 10 12:51:40 crc kubenswrapper[4689]: I1210 12:51:40.960720 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212flfxww_0d799972-d396-4dd5-8186-87f0a51ea145/extract/0.log" Dec 10 12:51:41 crc kubenswrapper[4689]: I1210 12:51:41.043941 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83frvcd_c79ff6a1-8c4b-4d40-af83-b6aaa022f1fb/util/0.log" Dec 10 12:51:41 crc kubenswrapper[4689]: I1210 12:51:41.233959 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83frvcd_c79ff6a1-8c4b-4d40-af83-b6aaa022f1fb/util/0.log" Dec 10 12:51:41 crc kubenswrapper[4689]: I1210 12:51:41.244708 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83frvcd_c79ff6a1-8c4b-4d40-af83-b6aaa022f1fb/pull/0.log" Dec 10 12:51:41 crc kubenswrapper[4689]: I1210 12:51:41.251837 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83frvcd_c79ff6a1-8c4b-4d40-af83-b6aaa022f1fb/pull/0.log" Dec 10 12:51:41 crc kubenswrapper[4689]: I1210 12:51:41.450868 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83frvcd_c79ff6a1-8c4b-4d40-af83-b6aaa022f1fb/pull/0.log" Dec 10 12:51:41 crc kubenswrapper[4689]: I1210 12:51:41.452774 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83frvcd_c79ff6a1-8c4b-4d40-af83-b6aaa022f1fb/extract/0.log" Dec 10 12:51:41 crc kubenswrapper[4689]: I1210 12:51:41.464425 4689 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83frvcd_c79ff6a1-8c4b-4d40-af83-b6aaa022f1fb/util/0.log" Dec 10 12:51:41 crc kubenswrapper[4689]: I1210 12:51:41.605032 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-jqvbb_4cace366-d916-470c-9cb6-090b7ed04bcb/extract-utilities/0.log" Dec 10 12:51:41 crc kubenswrapper[4689]: I1210 12:51:41.826274 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-jqvbb_4cace366-d916-470c-9cb6-090b7ed04bcb/extract-content/0.log" Dec 10 12:51:41 crc kubenswrapper[4689]: I1210 12:51:41.835281 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-jqvbb_4cace366-d916-470c-9cb6-090b7ed04bcb/extract-utilities/0.log" Dec 10 12:51:41 crc kubenswrapper[4689]: I1210 12:51:41.836542 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-jqvbb_4cace366-d916-470c-9cb6-090b7ed04bcb/extract-content/0.log" Dec 10 12:51:42 crc kubenswrapper[4689]: I1210 12:51:42.057702 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-jqvbb_4cace366-d916-470c-9cb6-090b7ed04bcb/extract-content/0.log" Dec 10 12:51:42 crc kubenswrapper[4689]: I1210 12:51:42.066316 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-jqvbb_4cace366-d916-470c-9cb6-090b7ed04bcb/extract-utilities/0.log" Dec 10 12:51:42 crc kubenswrapper[4689]: I1210 12:51:42.244291 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-vj842_f8b91596-d292-4ef0-bb0d-92cb3224c20c/extract-utilities/0.log" Dec 10 12:51:42 crc kubenswrapper[4689]: I1210 12:51:42.397178 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-jqvbb_4cace366-d916-470c-9cb6-090b7ed04bcb/registry-server/0.log" Dec 10 12:51:42 crc kubenswrapper[4689]: I1210 12:51:42.459454 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-vj842_f8b91596-d292-4ef0-bb0d-92cb3224c20c/extract-content/0.log" Dec 10 12:51:42 crc kubenswrapper[4689]: I1210 12:51:42.475285 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-vj842_f8b91596-d292-4ef0-bb0d-92cb3224c20c/extract-content/0.log" Dec 10 12:51:42 crc kubenswrapper[4689]: I1210 12:51:42.491087 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-vj842_f8b91596-d292-4ef0-bb0d-92cb3224c20c/extract-utilities/0.log" Dec 10 12:51:42 crc kubenswrapper[4689]: I1210 12:51:42.643589 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-vj842_f8b91596-d292-4ef0-bb0d-92cb3224c20c/extract-content/0.log" Dec 10 12:51:42 crc kubenswrapper[4689]: I1210 12:51:42.651727 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-vj842_f8b91596-d292-4ef0-bb0d-92cb3224c20c/extract-utilities/0.log" Dec 10 12:51:42 crc kubenswrapper[4689]: I1210 12:51:42.873443 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-ckfqs_2fe57bc1-cf21-44d7-b6ca-54319a03a415/marketplace-operator/0.log" Dec 10 12:51:42 crc kubenswrapper[4689]: I1210 12:51:42.988396 4689 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-vj842_f8b91596-d292-4ef0-bb0d-92cb3224c20c/registry-server/0.log" Dec 10 12:51:43 crc kubenswrapper[4689]: I1210 12:51:43.060172 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-9w8ct_2063bbc8-0509-4956-8dc9-84e8469be8f9/extract-utilities/0.log" Dec 10 12:51:43 crc kubenswrapper[4689]: I1210 12:51:43.185452 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-9w8ct_2063bbc8-0509-4956-8dc9-84e8469be8f9/extract-utilities/0.log" Dec 10 12:51:43 crc kubenswrapper[4689]: I1210 12:51:43.191399 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-9w8ct_2063bbc8-0509-4956-8dc9-84e8469be8f9/extract-content/0.log" Dec 10 12:51:43 crc kubenswrapper[4689]: I1210 12:51:43.196832 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-9w8ct_2063bbc8-0509-4956-8dc9-84e8469be8f9/extract-content/0.log" Dec 10 12:51:43 crc kubenswrapper[4689]: I1210 12:51:43.356400 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-9w8ct_2063bbc8-0509-4956-8dc9-84e8469be8f9/extract-content/0.log" Dec 10 12:51:43 crc kubenswrapper[4689]: I1210 12:51:43.428048 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-9w8ct_2063bbc8-0509-4956-8dc9-84e8469be8f9/extract-utilities/0.log" Dec 10 12:51:43 crc kubenswrapper[4689]: I1210 12:51:43.460640 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-9w8ct_2063bbc8-0509-4956-8dc9-84e8469be8f9/registry-server/0.log" Dec 10 12:51:43 crc kubenswrapper[4689]: I1210 12:51:43.577077 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-8844c_6c80fa47-76de-4730-aa6b-85bab40be273/extract-utilities/0.log" Dec 10 12:51:43 crc kubenswrapper[4689]: I1210 12:51:43.753803 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-8844c_6c80fa47-76de-4730-aa6b-85bab40be273/extract-content/0.log" Dec 10 12:51:43 crc kubenswrapper[4689]: I1210 12:51:43.772421 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-8844c_6c80fa47-76de-4730-aa6b-85bab40be273/extract-utilities/0.log" Dec 10 12:51:43 crc kubenswrapper[4689]: I1210 12:51:43.802919 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-8844c_6c80fa47-76de-4730-aa6b-85bab40be273/extract-content/0.log" Dec 10 12:51:43 crc kubenswrapper[4689]: I1210 12:51:43.963305 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-8844c_6c80fa47-76de-4730-aa6b-85bab40be273/extract-utilities/0.log" Dec 10 12:51:43 crc kubenswrapper[4689]: I1210 12:51:43.981758 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-8844c_6c80fa47-76de-4730-aa6b-85bab40be273/extract-content/0.log" Dec 10 12:51:44 crc kubenswrapper[4689]: I1210 12:51:44.312017 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-8844c_6c80fa47-76de-4730-aa6b-85bab40be273/registry-server/0.log" Dec 10 12:52:07 crc kubenswrapper[4689]: I1210 12:52:07.166611 4689 patch_prober.go:28] interesting 
pod/machine-config-daemon-db6zk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 10 12:52:07 crc kubenswrapper[4689]: I1210 12:52:07.167292 4689 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-db6zk" podUID="a41ebdcd-910f-4669-992d-296e1a92162b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 10 12:52:31 crc kubenswrapper[4689]: I1210 12:52:31.022136 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-9wjtp"] Dec 10 12:52:31 crc kubenswrapper[4689]: E1210 12:52:31.023328 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c069c181-462b-4b74-8a81-a395cdff6d66" containerName="extract-utilities" Dec 10 12:52:31 crc kubenswrapper[4689]: I1210 12:52:31.023352 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="c069c181-462b-4b74-8a81-a395cdff6d66" containerName="extract-utilities" Dec 10 12:52:31 crc kubenswrapper[4689]: E1210 12:52:31.023386 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c069c181-462b-4b74-8a81-a395cdff6d66" containerName="extract-content" Dec 10 12:52:31 crc kubenswrapper[4689]: I1210 12:52:31.023399 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="c069c181-462b-4b74-8a81-a395cdff6d66" containerName="extract-content" Dec 10 12:52:31 crc kubenswrapper[4689]: E1210 12:52:31.023424 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5900124f-e59d-4579-a1a9-09fb9f5c31a2" containerName="extract-utilities" Dec 10 12:52:31 crc kubenswrapper[4689]: I1210 12:52:31.023437 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="5900124f-e59d-4579-a1a9-09fb9f5c31a2" containerName="extract-utilities" Dec 10 12:52:31 crc kubenswrapper[4689]: E1210 12:52:31.023487 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d9fb148a-68c0-465f-a71c-ccf40bca7940" containerName="extract-utilities" Dec 10 12:52:31 crc kubenswrapper[4689]: I1210 12:52:31.023500 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="d9fb148a-68c0-465f-a71c-ccf40bca7940" containerName="extract-utilities" Dec 10 12:52:31 crc kubenswrapper[4689]: E1210 12:52:31.023541 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5900124f-e59d-4579-a1a9-09fb9f5c31a2" containerName="registry-server" Dec 10 12:52:31 crc kubenswrapper[4689]: I1210 12:52:31.023554 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="5900124f-e59d-4579-a1a9-09fb9f5c31a2" containerName="registry-server" Dec 10 12:52:31 crc kubenswrapper[4689]: E1210 12:52:31.023584 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d9fb148a-68c0-465f-a71c-ccf40bca7940" containerName="extract-content" Dec 10 12:52:31 crc kubenswrapper[4689]: I1210 12:52:31.023597 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="d9fb148a-68c0-465f-a71c-ccf40bca7940" containerName="extract-content" Dec 10 12:52:31 crc kubenswrapper[4689]: E1210 12:52:31.023619 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c069c181-462b-4b74-8a81-a395cdff6d66" containerName="registry-server" Dec 10 12:52:31 crc kubenswrapper[4689]: I1210 12:52:31.023631 4689 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="c069c181-462b-4b74-8a81-a395cdff6d66" containerName="registry-server" Dec 10 12:52:31 crc kubenswrapper[4689]: E1210 12:52:31.023646 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d9fb148a-68c0-465f-a71c-ccf40bca7940" containerName="registry-server" Dec 10 12:52:31 crc kubenswrapper[4689]: I1210 12:52:31.023658 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="d9fb148a-68c0-465f-a71c-ccf40bca7940" containerName="registry-server" Dec 10 12:52:31 crc kubenswrapper[4689]: E1210 12:52:31.023686 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5900124f-e59d-4579-a1a9-09fb9f5c31a2" containerName="extract-content" Dec 10 12:52:31 crc kubenswrapper[4689]: I1210 12:52:31.023698 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="5900124f-e59d-4579-a1a9-09fb9f5c31a2" containerName="extract-content" Dec 10 12:52:31 crc kubenswrapper[4689]: I1210 12:52:31.024059 4689 memory_manager.go:354] "RemoveStaleState removing state" podUID="5900124f-e59d-4579-a1a9-09fb9f5c31a2" containerName="registry-server" Dec 10 12:52:31 crc kubenswrapper[4689]: I1210 12:52:31.024087 4689 memory_manager.go:354] "RemoveStaleState removing state" podUID="d9fb148a-68c0-465f-a71c-ccf40bca7940" containerName="registry-server" Dec 10 12:52:31 crc kubenswrapper[4689]: I1210 12:52:31.024109 4689 memory_manager.go:354] "RemoveStaleState removing state" podUID="c069c181-462b-4b74-8a81-a395cdff6d66" containerName="registry-server" Dec 10 12:52:31 crc kubenswrapper[4689]: I1210 12:52:31.026503 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-9wjtp" Dec 10 12:52:31 crc kubenswrapper[4689]: I1210 12:52:31.067060 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-9wjtp"] Dec 10 12:52:31 crc kubenswrapper[4689]: I1210 12:52:31.096403 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ghb8c\" (UniqueName: \"kubernetes.io/projected/557f01fb-8032-4d30-b987-9c514445498f-kube-api-access-ghb8c\") pod \"certified-operators-9wjtp\" (UID: \"557f01fb-8032-4d30-b987-9c514445498f\") " pod="openshift-marketplace/certified-operators-9wjtp" Dec 10 12:52:31 crc kubenswrapper[4689]: I1210 12:52:31.096470 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/557f01fb-8032-4d30-b987-9c514445498f-utilities\") pod \"certified-operators-9wjtp\" (UID: \"557f01fb-8032-4d30-b987-9c514445498f\") " pod="openshift-marketplace/certified-operators-9wjtp" Dec 10 12:52:31 crc kubenswrapper[4689]: I1210 12:52:31.096645 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/557f01fb-8032-4d30-b987-9c514445498f-catalog-content\") pod \"certified-operators-9wjtp\" (UID: \"557f01fb-8032-4d30-b987-9c514445498f\") " pod="openshift-marketplace/certified-operators-9wjtp" Dec 10 12:52:31 crc kubenswrapper[4689]: I1210 12:52:31.198162 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ghb8c\" (UniqueName: \"kubernetes.io/projected/557f01fb-8032-4d30-b987-9c514445498f-kube-api-access-ghb8c\") pod \"certified-operators-9wjtp\" (UID: \"557f01fb-8032-4d30-b987-9c514445498f\") " pod="openshift-marketplace/certified-operators-9wjtp" Dec 10 12:52:31 crc kubenswrapper[4689]: I1210 
12:52:31.198390 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/557f01fb-8032-4d30-b987-9c514445498f-utilities\") pod \"certified-operators-9wjtp\" (UID: \"557f01fb-8032-4d30-b987-9c514445498f\") " pod="openshift-marketplace/certified-operators-9wjtp" Dec 10 12:52:31 crc kubenswrapper[4689]: I1210 12:52:31.198578 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/557f01fb-8032-4d30-b987-9c514445498f-catalog-content\") pod \"certified-operators-9wjtp\" (UID: \"557f01fb-8032-4d30-b987-9c514445498f\") " pod="openshift-marketplace/certified-operators-9wjtp" Dec 10 12:52:31 crc kubenswrapper[4689]: I1210 12:52:31.199204 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/557f01fb-8032-4d30-b987-9c514445498f-utilities\") pod \"certified-operators-9wjtp\" (UID: \"557f01fb-8032-4d30-b987-9c514445498f\") " pod="openshift-marketplace/certified-operators-9wjtp" Dec 10 12:52:31 crc kubenswrapper[4689]: I1210 12:52:31.199310 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/557f01fb-8032-4d30-b987-9c514445498f-catalog-content\") pod \"certified-operators-9wjtp\" (UID: \"557f01fb-8032-4d30-b987-9c514445498f\") " pod="openshift-marketplace/certified-operators-9wjtp" Dec 10 12:52:31 crc kubenswrapper[4689]: I1210 12:52:31.225325 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ghb8c\" (UniqueName: \"kubernetes.io/projected/557f01fb-8032-4d30-b987-9c514445498f-kube-api-access-ghb8c\") pod \"certified-operators-9wjtp\" (UID: \"557f01fb-8032-4d30-b987-9c514445498f\") " pod="openshift-marketplace/certified-operators-9wjtp" Dec 10 12:52:31 crc kubenswrapper[4689]: I1210 12:52:31.360895 4689 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-9wjtp" Dec 10 12:52:31 crc kubenswrapper[4689]: I1210 12:52:31.852982 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-9wjtp"] Dec 10 12:52:32 crc kubenswrapper[4689]: I1210 12:52:32.044469 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9wjtp" event={"ID":"557f01fb-8032-4d30-b987-9c514445498f","Type":"ContainerStarted","Data":"c51c9261a2e8c3e971cf8a6bef1171264ee1df957b78db6bd004cbba534d41ef"} Dec 10 12:52:33 crc kubenswrapper[4689]: I1210 12:52:33.072634 4689 generic.go:334] "Generic (PLEG): container finished" podID="557f01fb-8032-4d30-b987-9c514445498f" containerID="04a7fe76feb87dbe00802a69c6e666110679531233d743263f1c2edfcdd57675" exitCode=0 Dec 10 12:52:33 crc kubenswrapper[4689]: I1210 12:52:33.072770 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9wjtp" event={"ID":"557f01fb-8032-4d30-b987-9c514445498f","Type":"ContainerDied","Data":"04a7fe76feb87dbe00802a69c6e666110679531233d743263f1c2edfcdd57675"} Dec 10 12:52:37 crc kubenswrapper[4689]: I1210 12:52:37.139614 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9wjtp" event={"ID":"557f01fb-8032-4d30-b987-9c514445498f","Type":"ContainerStarted","Data":"44de6e74eaba575e4109666c7d91fabd7563214b4d306e40affdb9fe13fa55f3"} Dec 10 12:52:37 crc kubenswrapper[4689]: I1210 12:52:37.171037 4689 patch_prober.go:28] interesting pod/machine-config-daemon-db6zk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 10 12:52:37 crc kubenswrapper[4689]: I1210 12:52:37.171133 4689 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-db6zk" podUID="a41ebdcd-910f-4669-992d-296e1a92162b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 10 12:52:37 crc kubenswrapper[4689]: I1210 12:52:37.171222 4689 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-db6zk" Dec 10 12:52:37 crc kubenswrapper[4689]: I1210 12:52:37.173945 4689 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"ce6f9dfd50adadcec59ac06d9a711336ee8675f152d7050ec5daf10842869d64"} pod="openshift-machine-config-operator/machine-config-daemon-db6zk" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 10 12:52:37 crc kubenswrapper[4689]: I1210 12:52:37.174227 4689 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-db6zk" podUID="a41ebdcd-910f-4669-992d-296e1a92162b" containerName="machine-config-daemon" containerID="cri-o://ce6f9dfd50adadcec59ac06d9a711336ee8675f152d7050ec5daf10842869d64" gracePeriod=600 Dec 10 12:52:37 crc kubenswrapper[4689]: E1210 12:52:37.312877 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-db6zk_openshift-machine-config-operator(a41ebdcd-910f-4669-992d-296e1a92162b)\"" pod="openshift-machine-config-operator/machine-config-daemon-db6zk" podUID="a41ebdcd-910f-4669-992d-296e1a92162b" Dec 10 12:52:38 crc kubenswrapper[4689]: I1210 12:52:38.152264 4689 generic.go:334] "Generic (PLEG): container finished" podID="557f01fb-8032-4d30-b987-9c514445498f" containerID="44de6e74eaba575e4109666c7d91fabd7563214b4d306e40affdb9fe13fa55f3" exitCode=0 Dec 10 12:52:38 crc kubenswrapper[4689]: I1210 12:52:38.152572 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9wjtp" event={"ID":"557f01fb-8032-4d30-b987-9c514445498f","Type":"ContainerDied","Data":"44de6e74eaba575e4109666c7d91fabd7563214b4d306e40affdb9fe13fa55f3"} Dec 10 12:52:38 crc kubenswrapper[4689]: I1210 12:52:38.156668 4689 generic.go:334] "Generic (PLEG): container finished" podID="a41ebdcd-910f-4669-992d-296e1a92162b" containerID="ce6f9dfd50adadcec59ac06d9a711336ee8675f152d7050ec5daf10842869d64" exitCode=0 Dec 10 12:52:38 crc kubenswrapper[4689]: I1210 12:52:38.156708 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-db6zk" event={"ID":"a41ebdcd-910f-4669-992d-296e1a92162b","Type":"ContainerDied","Data":"ce6f9dfd50adadcec59ac06d9a711336ee8675f152d7050ec5daf10842869d64"} Dec 10 12:52:38 crc kubenswrapper[4689]: I1210 12:52:38.156740 4689 scope.go:117] "RemoveContainer" containerID="5d1e2ae8645d18ab83e20a9f28e88007d921095b25a0a5bebc871640151a35c6" Dec 10 12:52:38 crc kubenswrapper[4689]: I1210 12:52:38.157491 4689 scope.go:117] "RemoveContainer" containerID="ce6f9dfd50adadcec59ac06d9a711336ee8675f152d7050ec5daf10842869d64" Dec 10 12:52:38 crc kubenswrapper[4689]: E1210 12:52:38.157966 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-db6zk_openshift-machine-config-operator(a41ebdcd-910f-4669-992d-296e1a92162b)\"" pod="openshift-machine-config-operator/machine-config-daemon-db6zk" podUID="a41ebdcd-910f-4669-992d-296e1a92162b" Dec 10 12:52:39 crc kubenswrapper[4689]: I1210 12:52:39.170961 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9wjtp" event={"ID":"557f01fb-8032-4d30-b987-9c514445498f","Type":"ContainerStarted","Data":"62c0d888400440cd960aaadc37def124a6de54adb5172e5bcd4864487fef4d38"} Dec 10 12:52:39 crc kubenswrapper[4689]: I1210 12:52:39.205508 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-9wjtp" podStartSLOduration=3.546673817 podStartE2EDuration="9.205487009s" podCreationTimestamp="2025-12-10 12:52:30 +0000 UTC" firstStartedPulling="2025-12-10 12:52:33.075343674 +0000 UTC m=+2220.863424822" lastFinishedPulling="2025-12-10 12:52:38.734156876 +0000 UTC m=+2226.522238014" observedRunningTime="2025-12-10 12:52:39.195261998 +0000 UTC m=+2226.983343146" watchObservedRunningTime="2025-12-10 12:52:39.205487009 +0000 UTC m=+2226.993568157" Dec 10 12:52:41 crc kubenswrapper[4689]: I1210 12:52:41.361612 4689 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-9wjtp" Dec 10 12:52:41 crc kubenswrapper[4689]: I1210 12:52:41.363835 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-marketplace/certified-operators-9wjtp" Dec 10 12:52:41 crc kubenswrapper[4689]: I1210 12:52:41.448715 4689 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-9wjtp" Dec 10 12:52:43 crc kubenswrapper[4689]: I1210 12:52:43.258941 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-9wjtp" Dec 10 12:52:43 crc kubenswrapper[4689]: I1210 12:52:43.466304 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-9wjtp"] Dec 10 12:52:43 crc kubenswrapper[4689]: I1210 12:52:43.489616 4689 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-jqvbb"] Dec 10 12:52:43 crc kubenswrapper[4689]: I1210 12:52:43.489911 4689 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-jqvbb" podUID="4cace366-d916-470c-9cb6-090b7ed04bcb" containerName="registry-server" containerID="cri-o://b638ce37035f162fa5247f0ab3b11723ac902f0eeb4af097c04096fec936801a" gracePeriod=2 Dec 10 12:52:45 crc kubenswrapper[4689]: I1210 12:52:45.228616 4689 generic.go:334] "Generic (PLEG): container finished" podID="4cace366-d916-470c-9cb6-090b7ed04bcb" containerID="b638ce37035f162fa5247f0ab3b11723ac902f0eeb4af097c04096fec936801a" exitCode=0 Dec 10 12:52:45 crc kubenswrapper[4689]: I1210 12:52:45.228688 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jqvbb" event={"ID":"4cace366-d916-470c-9cb6-090b7ed04bcb","Type":"ContainerDied","Data":"b638ce37035f162fa5247f0ab3b11723ac902f0eeb4af097c04096fec936801a"} Dec 10 12:52:45 crc kubenswrapper[4689]: I1210 12:52:45.793241 4689 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-jqvbb" Dec 10 12:52:45 crc kubenswrapper[4689]: I1210 12:52:45.923366 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4cace366-d916-470c-9cb6-090b7ed04bcb-catalog-content\") pod \"4cace366-d916-470c-9cb6-090b7ed04bcb\" (UID: \"4cace366-d916-470c-9cb6-090b7ed04bcb\") " Dec 10 12:52:45 crc kubenswrapper[4689]: I1210 12:52:45.923452 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4cace366-d916-470c-9cb6-090b7ed04bcb-utilities\") pod \"4cace366-d916-470c-9cb6-090b7ed04bcb\" (UID: \"4cace366-d916-470c-9cb6-090b7ed04bcb\") " Dec 10 12:52:45 crc kubenswrapper[4689]: I1210 12:52:45.923734 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-497jk\" (UniqueName: \"kubernetes.io/projected/4cace366-d916-470c-9cb6-090b7ed04bcb-kube-api-access-497jk\") pod \"4cace366-d916-470c-9cb6-090b7ed04bcb\" (UID: \"4cace366-d916-470c-9cb6-090b7ed04bcb\") " Dec 10 12:52:45 crc kubenswrapper[4689]: I1210 12:52:45.924417 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4cace366-d916-470c-9cb6-090b7ed04bcb-utilities" (OuterVolumeSpecName: "utilities") pod "4cace366-d916-470c-9cb6-090b7ed04bcb" (UID: "4cace366-d916-470c-9cb6-090b7ed04bcb"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 10 12:52:45 crc kubenswrapper[4689]: I1210 12:52:45.942134 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4cace366-d916-470c-9cb6-090b7ed04bcb-kube-api-access-497jk" (OuterVolumeSpecName: "kube-api-access-497jk") pod "4cace366-d916-470c-9cb6-090b7ed04bcb" (UID: "4cace366-d916-470c-9cb6-090b7ed04bcb"). InnerVolumeSpecName "kube-api-access-497jk". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 12:52:45 crc kubenswrapper[4689]: I1210 12:52:45.989447 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4cace366-d916-470c-9cb6-090b7ed04bcb-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4cace366-d916-470c-9cb6-090b7ed04bcb" (UID: "4cace366-d916-470c-9cb6-090b7ed04bcb"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 10 12:52:45 crc kubenswrapper[4689]: I1210 12:52:45.993459 4689 scope.go:117] "RemoveContainer" containerID="4da0853146910c44f836c80492bc5322baa545fb65c3d3f67d859877e3bc9cc4" Dec 10 12:52:46 crc kubenswrapper[4689]: I1210 12:52:46.027989 4689 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-497jk\" (UniqueName: \"kubernetes.io/projected/4cace366-d916-470c-9cb6-090b7ed04bcb-kube-api-access-497jk\") on node \"crc\" DevicePath \"\"" Dec 10 12:52:46 crc kubenswrapper[4689]: I1210 12:52:46.028032 4689 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4cace366-d916-470c-9cb6-090b7ed04bcb-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 10 12:52:46 crc kubenswrapper[4689]: I1210 12:52:46.028042 4689 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4cace366-d916-470c-9cb6-090b7ed04bcb-utilities\") on node \"crc\" DevicePath \"\"" Dec 10 12:52:46 crc kubenswrapper[4689]: I1210 12:52:46.028901 4689 scope.go:117] "RemoveContainer" containerID="9c6ffe09a52b9615b577d2c63de91920909dfc492833e4b6dcdb82761508dc82" Dec 10 12:52:46 crc kubenswrapper[4689]: I1210 12:52:46.063645 4689 scope.go:117] "RemoveContainer" containerID="b638ce37035f162fa5247f0ab3b11723ac902f0eeb4af097c04096fec936801a" Dec 10 12:52:46 crc kubenswrapper[4689]: I1210 12:52:46.241418 4689 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-jqvbb" Dec 10 12:52:46 crc kubenswrapper[4689]: I1210 12:52:46.242106 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jqvbb" event={"ID":"4cace366-d916-470c-9cb6-090b7ed04bcb","Type":"ContainerDied","Data":"4110cfef44e51074ee72471a99e80ec6cd59441b4ffff7f0bd73a403dc056bff"} Dec 10 12:52:46 crc kubenswrapper[4689]: I1210 12:52:46.289254 4689 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-jqvbb"] Dec 10 12:52:46 crc kubenswrapper[4689]: I1210 12:52:46.302226 4689 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-jqvbb"] Dec 10 12:52:46 crc kubenswrapper[4689]: I1210 12:52:46.507689 4689 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4cace366-d916-470c-9cb6-090b7ed04bcb" path="/var/lib/kubelet/pods/4cace366-d916-470c-9cb6-090b7ed04bcb/volumes" Dec 10 12:52:49 crc kubenswrapper[4689]: I1210 12:52:49.498706 4689 scope.go:117] "RemoveContainer" containerID="ce6f9dfd50adadcec59ac06d9a711336ee8675f152d7050ec5daf10842869d64" Dec 10 12:52:49 crc kubenswrapper[4689]: E1210 12:52:49.499232 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-db6zk_openshift-machine-config-operator(a41ebdcd-910f-4669-992d-296e1a92162b)\"" pod="openshift-machine-config-operator/machine-config-daemon-db6zk" podUID="a41ebdcd-910f-4669-992d-296e1a92162b" Dec 10 12:53:01 crc kubenswrapper[4689]: I1210 12:53:01.499275 4689 scope.go:117] "RemoveContainer" containerID="ce6f9dfd50adadcec59ac06d9a711336ee8675f152d7050ec5daf10842869d64" Dec 10 12:53:01 crc kubenswrapper[4689]: E1210 12:53:01.500231 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-db6zk_openshift-machine-config-operator(a41ebdcd-910f-4669-992d-296e1a92162b)\"" pod="openshift-machine-config-operator/machine-config-daemon-db6zk" podUID="a41ebdcd-910f-4669-992d-296e1a92162b" Dec 10 12:53:13 crc kubenswrapper[4689]: I1210 12:53:13.499448 4689 scope.go:117] "RemoveContainer" containerID="ce6f9dfd50adadcec59ac06d9a711336ee8675f152d7050ec5daf10842869d64" Dec 10 12:53:13 crc kubenswrapper[4689]: E1210 12:53:13.500218 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-db6zk_openshift-machine-config-operator(a41ebdcd-910f-4669-992d-296e1a92162b)\"" pod="openshift-machine-config-operator/machine-config-daemon-db6zk" podUID="a41ebdcd-910f-4669-992d-296e1a92162b" Dec 10 12:53:15 crc kubenswrapper[4689]: I1210 12:53:15.537857 4689 generic.go:334] "Generic (PLEG): container finished" podID="c6ddd7cf-0ec4-4f4d-a599-5a8620ebada4" containerID="1d47a6c4bcb76e065def1fa1e8e7b3d8852d84a02090144f72d9fa2c2ddd352f" exitCode=0 Dec 10 12:53:15 crc kubenswrapper[4689]: I1210 12:53:15.537955 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-grfmr/must-gather-dckb7" 
event={"ID":"c6ddd7cf-0ec4-4f4d-a599-5a8620ebada4","Type":"ContainerDied","Data":"1d47a6c4bcb76e065def1fa1e8e7b3d8852d84a02090144f72d9fa2c2ddd352f"} Dec 10 12:53:15 crc kubenswrapper[4689]: I1210 12:53:15.538773 4689 scope.go:117] "RemoveContainer" containerID="1d47a6c4bcb76e065def1fa1e8e7b3d8852d84a02090144f72d9fa2c2ddd352f" Dec 10 12:53:16 crc kubenswrapper[4689]: I1210 12:53:16.352899 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-grfmr_must-gather-dckb7_c6ddd7cf-0ec4-4f4d-a599-5a8620ebada4/gather/0.log" Dec 10 12:53:23 crc kubenswrapper[4689]: I1210 12:53:23.933035 4689 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-grfmr/must-gather-dckb7"] Dec 10 12:53:23 crc kubenswrapper[4689]: I1210 12:53:23.935470 4689 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-grfmr/must-gather-dckb7" podUID="c6ddd7cf-0ec4-4f4d-a599-5a8620ebada4" containerName="copy" containerID="cri-o://55cdc8c8dc6840a362fe6315f523ae05cf3bad1a23bcf4d825c9cd288c2da2d5" gracePeriod=2 Dec 10 12:53:23 crc kubenswrapper[4689]: I1210 12:53:23.943130 4689 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-grfmr/must-gather-dckb7"] Dec 10 12:53:24 crc kubenswrapper[4689]: I1210 12:53:24.402715 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-grfmr_must-gather-dckb7_c6ddd7cf-0ec4-4f4d-a599-5a8620ebada4/copy/0.log" Dec 10 12:53:24 crc kubenswrapper[4689]: I1210 12:53:24.403556 4689 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-grfmr/must-gather-dckb7" Dec 10 12:53:24 crc kubenswrapper[4689]: I1210 12:53:24.561913 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/c6ddd7cf-0ec4-4f4d-a599-5a8620ebada4-must-gather-output\") pod \"c6ddd7cf-0ec4-4f4d-a599-5a8620ebada4\" (UID: \"c6ddd7cf-0ec4-4f4d-a599-5a8620ebada4\") " Dec 10 12:53:24 crc kubenswrapper[4689]: I1210 12:53:24.562008 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6vvmt\" (UniqueName: \"kubernetes.io/projected/c6ddd7cf-0ec4-4f4d-a599-5a8620ebada4-kube-api-access-6vvmt\") pod \"c6ddd7cf-0ec4-4f4d-a599-5a8620ebada4\" (UID: \"c6ddd7cf-0ec4-4f4d-a599-5a8620ebada4\") " Dec 10 12:53:24 crc kubenswrapper[4689]: I1210 12:53:24.568677 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c6ddd7cf-0ec4-4f4d-a599-5a8620ebada4-kube-api-access-6vvmt" (OuterVolumeSpecName: "kube-api-access-6vvmt") pod "c6ddd7cf-0ec4-4f4d-a599-5a8620ebada4" (UID: "c6ddd7cf-0ec4-4f4d-a599-5a8620ebada4"). InnerVolumeSpecName "kube-api-access-6vvmt". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 12:53:24 crc kubenswrapper[4689]: I1210 12:53:24.618528 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-grfmr_must-gather-dckb7_c6ddd7cf-0ec4-4f4d-a599-5a8620ebada4/copy/0.log" Dec 10 12:53:24 crc kubenswrapper[4689]: I1210 12:53:24.618891 4689 generic.go:334] "Generic (PLEG): container finished" podID="c6ddd7cf-0ec4-4f4d-a599-5a8620ebada4" containerID="55cdc8c8dc6840a362fe6315f523ae05cf3bad1a23bcf4d825c9cd288c2da2d5" exitCode=143 Dec 10 12:53:24 crc kubenswrapper[4689]: I1210 12:53:24.618945 4689 scope.go:117] "RemoveContainer" containerID="55cdc8c8dc6840a362fe6315f523ae05cf3bad1a23bcf4d825c9cd288c2da2d5" Dec 10 12:53:24 crc kubenswrapper[4689]: I1210 12:53:24.619007 4689 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-grfmr/must-gather-dckb7" Dec 10 12:53:24 crc kubenswrapper[4689]: I1210 12:53:24.640076 4689 scope.go:117] "RemoveContainer" containerID="1d47a6c4bcb76e065def1fa1e8e7b3d8852d84a02090144f72d9fa2c2ddd352f" Dec 10 12:53:24 crc kubenswrapper[4689]: I1210 12:53:24.665471 4689 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6vvmt\" (UniqueName: \"kubernetes.io/projected/c6ddd7cf-0ec4-4f4d-a599-5a8620ebada4-kube-api-access-6vvmt\") on node \"crc\" DevicePath \"\"" Dec 10 12:53:24 crc kubenswrapper[4689]: I1210 12:53:24.705036 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c6ddd7cf-0ec4-4f4d-a599-5a8620ebada4-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "c6ddd7cf-0ec4-4f4d-a599-5a8620ebada4" (UID: "c6ddd7cf-0ec4-4f4d-a599-5a8620ebada4"). InnerVolumeSpecName "must-gather-output". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 10 12:53:24 crc kubenswrapper[4689]: I1210 12:53:24.711739 4689 scope.go:117] "RemoveContainer" containerID="55cdc8c8dc6840a362fe6315f523ae05cf3bad1a23bcf4d825c9cd288c2da2d5" Dec 10 12:53:24 crc kubenswrapper[4689]: E1210 12:53:24.712456 4689 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"55cdc8c8dc6840a362fe6315f523ae05cf3bad1a23bcf4d825c9cd288c2da2d5\": container with ID starting with 55cdc8c8dc6840a362fe6315f523ae05cf3bad1a23bcf4d825c9cd288c2da2d5 not found: ID does not exist" containerID="55cdc8c8dc6840a362fe6315f523ae05cf3bad1a23bcf4d825c9cd288c2da2d5" Dec 10 12:53:24 crc kubenswrapper[4689]: I1210 12:53:24.712504 4689 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"55cdc8c8dc6840a362fe6315f523ae05cf3bad1a23bcf4d825c9cd288c2da2d5"} err="failed to get container status \"55cdc8c8dc6840a362fe6315f523ae05cf3bad1a23bcf4d825c9cd288c2da2d5\": rpc error: code = NotFound desc = could not find container \"55cdc8c8dc6840a362fe6315f523ae05cf3bad1a23bcf4d825c9cd288c2da2d5\": container with ID starting with 55cdc8c8dc6840a362fe6315f523ae05cf3bad1a23bcf4d825c9cd288c2da2d5 not found: ID does not exist" Dec 10 12:53:24 crc kubenswrapper[4689]: I1210 12:53:24.712533 4689 scope.go:117] "RemoveContainer" containerID="1d47a6c4bcb76e065def1fa1e8e7b3d8852d84a02090144f72d9fa2c2ddd352f" Dec 10 12:53:24 crc kubenswrapper[4689]: E1210 12:53:24.712872 4689 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1d47a6c4bcb76e065def1fa1e8e7b3d8852d84a02090144f72d9fa2c2ddd352f\": container with ID starting with 
1d47a6c4bcb76e065def1fa1e8e7b3d8852d84a02090144f72d9fa2c2ddd352f not found: ID does not exist" containerID="1d47a6c4bcb76e065def1fa1e8e7b3d8852d84a02090144f72d9fa2c2ddd352f" Dec 10 12:53:24 crc kubenswrapper[4689]: I1210 12:53:24.712903 4689 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1d47a6c4bcb76e065def1fa1e8e7b3d8852d84a02090144f72d9fa2c2ddd352f"} err="failed to get container status \"1d47a6c4bcb76e065def1fa1e8e7b3d8852d84a02090144f72d9fa2c2ddd352f\": rpc error: code = NotFound desc = could not find container \"1d47a6c4bcb76e065def1fa1e8e7b3d8852d84a02090144f72d9fa2c2ddd352f\": container with ID starting with 1d47a6c4bcb76e065def1fa1e8e7b3d8852d84a02090144f72d9fa2c2ddd352f not found: ID does not exist" Dec 10 12:53:24 crc kubenswrapper[4689]: I1210 12:53:24.767711 4689 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/c6ddd7cf-0ec4-4f4d-a599-5a8620ebada4-must-gather-output\") on node \"crc\" DevicePath \"\"" Dec 10 12:53:26 crc kubenswrapper[4689]: I1210 12:53:26.502468 4689 scope.go:117] "RemoveContainer" containerID="ce6f9dfd50adadcec59ac06d9a711336ee8675f152d7050ec5daf10842869d64" Dec 10 12:53:26 crc kubenswrapper[4689]: E1210 12:53:26.502885 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-db6zk_openshift-machine-config-operator(a41ebdcd-910f-4669-992d-296e1a92162b)\"" pod="openshift-machine-config-operator/machine-config-daemon-db6zk" podUID="a41ebdcd-910f-4669-992d-296e1a92162b" Dec 10 12:53:26 crc kubenswrapper[4689]: I1210 12:53:26.510487 4689 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c6ddd7cf-0ec4-4f4d-a599-5a8620ebada4" path="/var/lib/kubelet/pods/c6ddd7cf-0ec4-4f4d-a599-5a8620ebada4/volumes" Dec 10 12:53:39 crc kubenswrapper[4689]: I1210 12:53:39.500149 4689 scope.go:117] "RemoveContainer" containerID="ce6f9dfd50adadcec59ac06d9a711336ee8675f152d7050ec5daf10842869d64" Dec 10 12:53:39 crc kubenswrapper[4689]: E1210 12:53:39.501356 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-db6zk_openshift-machine-config-operator(a41ebdcd-910f-4669-992d-296e1a92162b)\"" pod="openshift-machine-config-operator/machine-config-daemon-db6zk" podUID="a41ebdcd-910f-4669-992d-296e1a92162b" Dec 10 12:53:51 crc kubenswrapper[4689]: I1210 12:53:51.498553 4689 scope.go:117] "RemoveContainer" containerID="ce6f9dfd50adadcec59ac06d9a711336ee8675f152d7050ec5daf10842869d64" Dec 10 12:53:51 crc kubenswrapper[4689]: E1210 12:53:51.499429 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-db6zk_openshift-machine-config-operator(a41ebdcd-910f-4669-992d-296e1a92162b)\"" pod="openshift-machine-config-operator/machine-config-daemon-db6zk" podUID="a41ebdcd-910f-4669-992d-296e1a92162b" Dec 10 12:54:03 crc kubenswrapper[4689]: I1210 12:54:03.498482 4689 scope.go:117] "RemoveContainer" containerID="ce6f9dfd50adadcec59ac06d9a711336ee8675f152d7050ec5daf10842869d64" Dec 10 12:54:03 crc kubenswrapper[4689]: E1210 12:54:03.500273 
4689 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-db6zk_openshift-machine-config-operator(a41ebdcd-910f-4669-992d-296e1a92162b)\"" pod="openshift-machine-config-operator/machine-config-daemon-db6zk" podUID="a41ebdcd-910f-4669-992d-296e1a92162b" Dec 10 12:54:16 crc kubenswrapper[4689]: I1210 12:54:16.498752 4689 scope.go:117] "RemoveContainer" containerID="ce6f9dfd50adadcec59ac06d9a711336ee8675f152d7050ec5daf10842869d64" Dec 10 12:54:16 crc kubenswrapper[4689]: E1210 12:54:16.499527 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-db6zk_openshift-machine-config-operator(a41ebdcd-910f-4669-992d-296e1a92162b)\"" pod="openshift-machine-config-operator/machine-config-daemon-db6zk" podUID="a41ebdcd-910f-4669-992d-296e1a92162b" Dec 10 12:54:30 crc kubenswrapper[4689]: I1210 12:54:30.498743 4689 scope.go:117] "RemoveContainer" containerID="ce6f9dfd50adadcec59ac06d9a711336ee8675f152d7050ec5daf10842869d64" Dec 10 12:54:30 crc kubenswrapper[4689]: E1210 12:54:30.499501 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-db6zk_openshift-machine-config-operator(a41ebdcd-910f-4669-992d-296e1a92162b)\"" pod="openshift-machine-config-operator/machine-config-daemon-db6zk" podUID="a41ebdcd-910f-4669-992d-296e1a92162b" Dec 10 12:54:42 crc kubenswrapper[4689]: I1210 12:54:42.506269 4689 scope.go:117] "RemoveContainer" containerID="ce6f9dfd50adadcec59ac06d9a711336ee8675f152d7050ec5daf10842869d64" Dec 10 12:54:42 crc kubenswrapper[4689]: E1210 12:54:42.507337 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-db6zk_openshift-machine-config-operator(a41ebdcd-910f-4669-992d-296e1a92162b)\"" pod="openshift-machine-config-operator/machine-config-daemon-db6zk" podUID="a41ebdcd-910f-4669-992d-296e1a92162b" Dec 10 12:54:57 crc kubenswrapper[4689]: I1210 12:54:57.498850 4689 scope.go:117] "RemoveContainer" containerID="ce6f9dfd50adadcec59ac06d9a711336ee8675f152d7050ec5daf10842869d64" Dec 10 12:54:57 crc kubenswrapper[4689]: E1210 12:54:57.499713 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-db6zk_openshift-machine-config-operator(a41ebdcd-910f-4669-992d-296e1a92162b)\"" pod="openshift-machine-config-operator/machine-config-daemon-db6zk" podUID="a41ebdcd-910f-4669-992d-296e1a92162b" Dec 10 12:55:10 crc kubenswrapper[4689]: I1210 12:55:10.497680 4689 scope.go:117] "RemoveContainer" containerID="ce6f9dfd50adadcec59ac06d9a711336ee8675f152d7050ec5daf10842869d64" Dec 10 12:55:10 crc kubenswrapper[4689]: E1210 12:55:10.498604 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-db6zk_openshift-machine-config-operator(a41ebdcd-910f-4669-992d-296e1a92162b)\"" pod="openshift-machine-config-operator/machine-config-daemon-db6zk" podUID="a41ebdcd-910f-4669-992d-296e1a92162b" Dec 10 12:55:22 crc kubenswrapper[4689]: I1210 12:55:22.505364 4689 scope.go:117] "RemoveContainer" containerID="ce6f9dfd50adadcec59ac06d9a711336ee8675f152d7050ec5daf10842869d64" Dec 10 12:55:22 crc kubenswrapper[4689]: E1210 12:55:22.506130 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-db6zk_openshift-machine-config-operator(a41ebdcd-910f-4669-992d-296e1a92162b)\"" pod="openshift-machine-config-operator/machine-config-daemon-db6zk" podUID="a41ebdcd-910f-4669-992d-296e1a92162b" Dec 10 12:55:35 crc kubenswrapper[4689]: I1210 12:55:35.499246 4689 scope.go:117] "RemoveContainer" containerID="ce6f9dfd50adadcec59ac06d9a711336ee8675f152d7050ec5daf10842869d64" Dec 10 12:55:35 crc kubenswrapper[4689]: E1210 12:55:35.499909 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-db6zk_openshift-machine-config-operator(a41ebdcd-910f-4669-992d-296e1a92162b)\"" pod="openshift-machine-config-operator/machine-config-daemon-db6zk" podUID="a41ebdcd-910f-4669-992d-296e1a92162b" Dec 10 12:55:46 crc kubenswrapper[4689]: I1210 12:55:46.193011 4689 scope.go:117] "RemoveContainer" containerID="fa6af3bf10e9a449a42fdc81b7d1da7abc75c5fab17e1b778a7df26a079df499" Dec 10 12:55:48 crc kubenswrapper[4689]: I1210 12:55:48.499245 4689 scope.go:117] "RemoveContainer" containerID="ce6f9dfd50adadcec59ac06d9a711336ee8675f152d7050ec5daf10842869d64" Dec 10 12:55:48 crc kubenswrapper[4689]: E1210 12:55:48.499730 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-db6zk_openshift-machine-config-operator(a41ebdcd-910f-4669-992d-296e1a92162b)\"" pod="openshift-machine-config-operator/machine-config-daemon-db6zk" podUID="a41ebdcd-910f-4669-992d-296e1a92162b" Dec 10 12:56:02 crc kubenswrapper[4689]: I1210 12:56:02.505414 4689 scope.go:117] "RemoveContainer" containerID="ce6f9dfd50adadcec59ac06d9a711336ee8675f152d7050ec5daf10842869d64" Dec 10 12:56:02 crc kubenswrapper[4689]: E1210 12:56:02.507387 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-db6zk_openshift-machine-config-operator(a41ebdcd-910f-4669-992d-296e1a92162b)\"" pod="openshift-machine-config-operator/machine-config-daemon-db6zk" podUID="a41ebdcd-910f-4669-992d-296e1a92162b" Dec 10 12:56:13 crc kubenswrapper[4689]: I1210 12:56:13.498008 4689 scope.go:117] "RemoveContainer" containerID="ce6f9dfd50adadcec59ac06d9a711336ee8675f152d7050ec5daf10842869d64" Dec 10 12:56:13 crc kubenswrapper[4689]: E1210 12:56:13.499029 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-db6zk_openshift-machine-config-operator(a41ebdcd-910f-4669-992d-296e1a92162b)\"" pod="openshift-machine-config-operator/machine-config-daemon-db6zk" podUID="a41ebdcd-910f-4669-992d-296e1a92162b" Dec 10 12:56:21 crc kubenswrapper[4689]: I1210 12:56:21.461229 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-w2nzt/must-gather-dd24r"] Dec 10 12:56:21 crc kubenswrapper[4689]: E1210 12:56:21.462249 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c6ddd7cf-0ec4-4f4d-a599-5a8620ebada4" containerName="copy" Dec 10 12:56:21 crc kubenswrapper[4689]: I1210 12:56:21.462264 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="c6ddd7cf-0ec4-4f4d-a599-5a8620ebada4" containerName="copy" Dec 10 12:56:21 crc kubenswrapper[4689]: E1210 12:56:21.462281 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4cace366-d916-470c-9cb6-090b7ed04bcb" containerName="registry-server" Dec 10 12:56:21 crc kubenswrapper[4689]: I1210 12:56:21.462291 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="4cace366-d916-470c-9cb6-090b7ed04bcb" containerName="registry-server" Dec 10 12:56:21 crc kubenswrapper[4689]: E1210 12:56:21.462312 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4cace366-d916-470c-9cb6-090b7ed04bcb" containerName="extract-content" Dec 10 12:56:21 crc kubenswrapper[4689]: I1210 12:56:21.462320 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="4cace366-d916-470c-9cb6-090b7ed04bcb" containerName="extract-content" Dec 10 12:56:21 crc kubenswrapper[4689]: E1210 12:56:21.462335 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c6ddd7cf-0ec4-4f4d-a599-5a8620ebada4" containerName="gather" Dec 10 12:56:21 crc kubenswrapper[4689]: I1210 12:56:21.462342 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="c6ddd7cf-0ec4-4f4d-a599-5a8620ebada4" containerName="gather" Dec 10 12:56:21 crc kubenswrapper[4689]: E1210 12:56:21.462410 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4cace366-d916-470c-9cb6-090b7ed04bcb" containerName="extract-utilities" Dec 10 12:56:21 crc kubenswrapper[4689]: I1210 12:56:21.462418 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="4cace366-d916-470c-9cb6-090b7ed04bcb" containerName="extract-utilities" Dec 10 12:56:21 crc kubenswrapper[4689]: I1210 12:56:21.462662 4689 memory_manager.go:354] "RemoveStaleState removing state" podUID="c6ddd7cf-0ec4-4f4d-a599-5a8620ebada4" containerName="copy" Dec 10 12:56:21 crc kubenswrapper[4689]: I1210 12:56:21.462686 4689 memory_manager.go:354] "RemoveStaleState removing state" podUID="c6ddd7cf-0ec4-4f4d-a599-5a8620ebada4" containerName="gather" Dec 10 12:56:21 crc kubenswrapper[4689]: I1210 12:56:21.462696 4689 memory_manager.go:354] "RemoveStaleState removing state" podUID="4cace366-d916-470c-9cb6-090b7ed04bcb" containerName="registry-server" Dec 10 12:56:21 crc kubenswrapper[4689]: I1210 12:56:21.463941 4689 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-w2nzt/must-gather-dd24r" Dec 10 12:56:21 crc kubenswrapper[4689]: I1210 12:56:21.466411 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-w2nzt"/"default-dockercfg-ntq57" Dec 10 12:56:21 crc kubenswrapper[4689]: I1210 12:56:21.466427 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-w2nzt"/"openshift-service-ca.crt" Dec 10 12:56:21 crc kubenswrapper[4689]: I1210 12:56:21.466523 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-w2nzt"/"kube-root-ca.crt" Dec 10 12:56:21 crc kubenswrapper[4689]: I1210 12:56:21.476150 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-w2nzt/must-gather-dd24r"] Dec 10 12:56:21 crc kubenswrapper[4689]: I1210 12:56:21.557841 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4jlhj\" (UniqueName: \"kubernetes.io/projected/72ac4d6f-f8de-41e6-96fe-dccc36413b6d-kube-api-access-4jlhj\") pod \"must-gather-dd24r\" (UID: \"72ac4d6f-f8de-41e6-96fe-dccc36413b6d\") " pod="openshift-must-gather-w2nzt/must-gather-dd24r" Dec 10 12:56:21 crc kubenswrapper[4689]: I1210 12:56:21.557913 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/72ac4d6f-f8de-41e6-96fe-dccc36413b6d-must-gather-output\") pod \"must-gather-dd24r\" (UID: \"72ac4d6f-f8de-41e6-96fe-dccc36413b6d\") " pod="openshift-must-gather-w2nzt/must-gather-dd24r" Dec 10 12:56:21 crc kubenswrapper[4689]: I1210 12:56:21.659809 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/72ac4d6f-f8de-41e6-96fe-dccc36413b6d-must-gather-output\") pod \"must-gather-dd24r\" (UID: \"72ac4d6f-f8de-41e6-96fe-dccc36413b6d\") " pod="openshift-must-gather-w2nzt/must-gather-dd24r" Dec 10 12:56:21 crc kubenswrapper[4689]: I1210 12:56:21.660340 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/72ac4d6f-f8de-41e6-96fe-dccc36413b6d-must-gather-output\") pod \"must-gather-dd24r\" (UID: \"72ac4d6f-f8de-41e6-96fe-dccc36413b6d\") " pod="openshift-must-gather-w2nzt/must-gather-dd24r" Dec 10 12:56:21 crc kubenswrapper[4689]: I1210 12:56:21.660653 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4jlhj\" (UniqueName: \"kubernetes.io/projected/72ac4d6f-f8de-41e6-96fe-dccc36413b6d-kube-api-access-4jlhj\") pod \"must-gather-dd24r\" (UID: \"72ac4d6f-f8de-41e6-96fe-dccc36413b6d\") " pod="openshift-must-gather-w2nzt/must-gather-dd24r" Dec 10 12:56:21 crc kubenswrapper[4689]: I1210 12:56:21.682320 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4jlhj\" (UniqueName: \"kubernetes.io/projected/72ac4d6f-f8de-41e6-96fe-dccc36413b6d-kube-api-access-4jlhj\") pod \"must-gather-dd24r\" (UID: \"72ac4d6f-f8de-41e6-96fe-dccc36413b6d\") " pod="openshift-must-gather-w2nzt/must-gather-dd24r" Dec 10 12:56:21 crc kubenswrapper[4689]: I1210 12:56:21.786559 4689 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-w2nzt/must-gather-dd24r" Dec 10 12:56:22 crc kubenswrapper[4689]: W1210 12:56:22.278699 4689 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod72ac4d6f_f8de_41e6_96fe_dccc36413b6d.slice/crio-66d6057b859ccebeffe7dea7ccffb44fb57c5204e48364a4dd95f71c64aa8193 WatchSource:0}: Error finding container 66d6057b859ccebeffe7dea7ccffb44fb57c5204e48364a4dd95f71c64aa8193: Status 404 returned error can't find the container with id 66d6057b859ccebeffe7dea7ccffb44fb57c5204e48364a4dd95f71c64aa8193 Dec 10 12:56:22 crc kubenswrapper[4689]: I1210 12:56:22.280071 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-w2nzt/must-gather-dd24r"] Dec 10 12:56:22 crc kubenswrapper[4689]: I1210 12:56:22.334951 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-w2nzt/must-gather-dd24r" event={"ID":"72ac4d6f-f8de-41e6-96fe-dccc36413b6d","Type":"ContainerStarted","Data":"66d6057b859ccebeffe7dea7ccffb44fb57c5204e48364a4dd95f71c64aa8193"} Dec 10 12:56:23 crc kubenswrapper[4689]: I1210 12:56:23.344116 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-w2nzt/must-gather-dd24r" event={"ID":"72ac4d6f-f8de-41e6-96fe-dccc36413b6d","Type":"ContainerStarted","Data":"fe5a6ab8a70d3a58cb08368dbcaa63e17cc12297cd009e8529679ca260e66730"} Dec 10 12:56:23 crc kubenswrapper[4689]: I1210 12:56:23.344434 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-w2nzt/must-gather-dd24r" event={"ID":"72ac4d6f-f8de-41e6-96fe-dccc36413b6d","Type":"ContainerStarted","Data":"d8af23709ce50ec6ec15f8a2edecfc467de0e72e321eb13c05fb600fb08be720"} Dec 10 12:56:23 crc kubenswrapper[4689]: I1210 12:56:23.376110 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-w2nzt/must-gather-dd24r" podStartSLOduration=2.376091922 podStartE2EDuration="2.376091922s" podCreationTimestamp="2025-12-10 12:56:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 12:56:23.369498941 +0000 UTC m=+2451.157580079" watchObservedRunningTime="2025-12-10 12:56:23.376091922 +0000 UTC m=+2451.164173060" Dec 10 12:56:26 crc kubenswrapper[4689]: I1210 12:56:26.025043 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-w2nzt/crc-debug-wrhx8"] Dec 10 12:56:26 crc kubenswrapper[4689]: I1210 12:56:26.026805 4689 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-w2nzt/crc-debug-wrhx8" Dec 10 12:56:26 crc kubenswrapper[4689]: I1210 12:56:26.149871 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/4367c682-5462-4098-abf1-0cbb4ae8f28b-host\") pod \"crc-debug-wrhx8\" (UID: \"4367c682-5462-4098-abf1-0cbb4ae8f28b\") " pod="openshift-must-gather-w2nzt/crc-debug-wrhx8" Dec 10 12:56:26 crc kubenswrapper[4689]: I1210 12:56:26.150009 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tndsg\" (UniqueName: \"kubernetes.io/projected/4367c682-5462-4098-abf1-0cbb4ae8f28b-kube-api-access-tndsg\") pod \"crc-debug-wrhx8\" (UID: \"4367c682-5462-4098-abf1-0cbb4ae8f28b\") " pod="openshift-must-gather-w2nzt/crc-debug-wrhx8" Dec 10 12:56:26 crc kubenswrapper[4689]: I1210 12:56:26.252260 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/4367c682-5462-4098-abf1-0cbb4ae8f28b-host\") pod \"crc-debug-wrhx8\" (UID: \"4367c682-5462-4098-abf1-0cbb4ae8f28b\") " pod="openshift-must-gather-w2nzt/crc-debug-wrhx8" Dec 10 12:56:26 crc kubenswrapper[4689]: I1210 12:56:26.252369 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tndsg\" (UniqueName: \"kubernetes.io/projected/4367c682-5462-4098-abf1-0cbb4ae8f28b-kube-api-access-tndsg\") pod \"crc-debug-wrhx8\" (UID: \"4367c682-5462-4098-abf1-0cbb4ae8f28b\") " pod="openshift-must-gather-w2nzt/crc-debug-wrhx8" Dec 10 12:56:26 crc kubenswrapper[4689]: I1210 12:56:26.252455 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/4367c682-5462-4098-abf1-0cbb4ae8f28b-host\") pod \"crc-debug-wrhx8\" (UID: \"4367c682-5462-4098-abf1-0cbb4ae8f28b\") " pod="openshift-must-gather-w2nzt/crc-debug-wrhx8" Dec 10 12:56:26 crc kubenswrapper[4689]: I1210 12:56:26.285774 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tndsg\" (UniqueName: \"kubernetes.io/projected/4367c682-5462-4098-abf1-0cbb4ae8f28b-kube-api-access-tndsg\") pod \"crc-debug-wrhx8\" (UID: \"4367c682-5462-4098-abf1-0cbb4ae8f28b\") " pod="openshift-must-gather-w2nzt/crc-debug-wrhx8" Dec 10 12:56:26 crc kubenswrapper[4689]: I1210 12:56:26.345288 4689 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-w2nzt/crc-debug-wrhx8" Dec 10 12:56:26 crc kubenswrapper[4689]: W1210 12:56:26.377542 4689 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4367c682_5462_4098_abf1_0cbb4ae8f28b.slice/crio-31a95f9faec65f029996c797be465b8f79839f09a3a5d986e81177605ef1f2fa WatchSource:0}: Error finding container 31a95f9faec65f029996c797be465b8f79839f09a3a5d986e81177605ef1f2fa: Status 404 returned error can't find the container with id 31a95f9faec65f029996c797be465b8f79839f09a3a5d986e81177605ef1f2fa Dec 10 12:56:27 crc kubenswrapper[4689]: I1210 12:56:27.379890 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-w2nzt/crc-debug-wrhx8" event={"ID":"4367c682-5462-4098-abf1-0cbb4ae8f28b","Type":"ContainerStarted","Data":"eba2d208d87b29797d28a0a4d1b9c2a66de577d6c22b16eccf829c2098470594"} Dec 10 12:56:27 crc kubenswrapper[4689]: I1210 12:56:27.380390 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-w2nzt/crc-debug-wrhx8" event={"ID":"4367c682-5462-4098-abf1-0cbb4ae8f28b","Type":"ContainerStarted","Data":"31a95f9faec65f029996c797be465b8f79839f09a3a5d986e81177605ef1f2fa"} Dec 10 12:56:27 crc kubenswrapper[4689]: I1210 12:56:27.392360 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-w2nzt/crc-debug-wrhx8" podStartSLOduration=1.392345054 podStartE2EDuration="1.392345054s" podCreationTimestamp="2025-12-10 12:56:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 12:56:27.391056892 +0000 UTC m=+2455.179138030" watchObservedRunningTime="2025-12-10 12:56:27.392345054 +0000 UTC m=+2455.180426192" Dec 10 12:56:28 crc kubenswrapper[4689]: I1210 12:56:28.497940 4689 scope.go:117] "RemoveContainer" containerID="ce6f9dfd50adadcec59ac06d9a711336ee8675f152d7050ec5daf10842869d64" Dec 10 12:56:28 crc kubenswrapper[4689]: E1210 12:56:28.498468 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-db6zk_openshift-machine-config-operator(a41ebdcd-910f-4669-992d-296e1a92162b)\"" pod="openshift-machine-config-operator/machine-config-daemon-db6zk" podUID="a41ebdcd-910f-4669-992d-296e1a92162b" Dec 10 12:56:39 crc kubenswrapper[4689]: I1210 12:56:39.498860 4689 scope.go:117] "RemoveContainer" containerID="ce6f9dfd50adadcec59ac06d9a711336ee8675f152d7050ec5daf10842869d64" Dec 10 12:56:39 crc kubenswrapper[4689]: E1210 12:56:39.499683 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-db6zk_openshift-machine-config-operator(a41ebdcd-910f-4669-992d-296e1a92162b)\"" pod="openshift-machine-config-operator/machine-config-daemon-db6zk" podUID="a41ebdcd-910f-4669-992d-296e1a92162b" Dec 10 12:56:52 crc kubenswrapper[4689]: I1210 12:56:52.498029 4689 scope.go:117] "RemoveContainer" containerID="ce6f9dfd50adadcec59ac06d9a711336ee8675f152d7050ec5daf10842869d64" Dec 10 12:56:52 crc kubenswrapper[4689]: E1210 12:56:52.498720 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with 
CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-db6zk_openshift-machine-config-operator(a41ebdcd-910f-4669-992d-296e1a92162b)\"" pod="openshift-machine-config-operator/machine-config-daemon-db6zk" podUID="a41ebdcd-910f-4669-992d-296e1a92162b" Dec 10 12:56:59 crc kubenswrapper[4689]: I1210 12:56:59.664378 4689 generic.go:334] "Generic (PLEG): container finished" podID="4367c682-5462-4098-abf1-0cbb4ae8f28b" containerID="eba2d208d87b29797d28a0a4d1b9c2a66de577d6c22b16eccf829c2098470594" exitCode=0 Dec 10 12:56:59 crc kubenswrapper[4689]: I1210 12:56:59.664463 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-w2nzt/crc-debug-wrhx8" event={"ID":"4367c682-5462-4098-abf1-0cbb4ae8f28b","Type":"ContainerDied","Data":"eba2d208d87b29797d28a0a4d1b9c2a66de577d6c22b16eccf829c2098470594"} Dec 10 12:57:00 crc kubenswrapper[4689]: I1210 12:57:00.782631 4689 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-w2nzt/crc-debug-wrhx8" Dec 10 12:57:00 crc kubenswrapper[4689]: I1210 12:57:00.815527 4689 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-w2nzt/crc-debug-wrhx8"] Dec 10 12:57:00 crc kubenswrapper[4689]: I1210 12:57:00.822579 4689 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-w2nzt/crc-debug-wrhx8"] Dec 10 12:57:00 crc kubenswrapper[4689]: I1210 12:57:00.888622 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tndsg\" (UniqueName: \"kubernetes.io/projected/4367c682-5462-4098-abf1-0cbb4ae8f28b-kube-api-access-tndsg\") pod \"4367c682-5462-4098-abf1-0cbb4ae8f28b\" (UID: \"4367c682-5462-4098-abf1-0cbb4ae8f28b\") " Dec 10 12:57:00 crc kubenswrapper[4689]: I1210 12:57:00.888713 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/4367c682-5462-4098-abf1-0cbb4ae8f28b-host\") pod \"4367c682-5462-4098-abf1-0cbb4ae8f28b\" (UID: \"4367c682-5462-4098-abf1-0cbb4ae8f28b\") " Dec 10 12:57:00 crc kubenswrapper[4689]: I1210 12:57:00.888941 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4367c682-5462-4098-abf1-0cbb4ae8f28b-host" (OuterVolumeSpecName: "host") pod "4367c682-5462-4098-abf1-0cbb4ae8f28b" (UID: "4367c682-5462-4098-abf1-0cbb4ae8f28b"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 10 12:57:00 crc kubenswrapper[4689]: I1210 12:57:00.889466 4689 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/4367c682-5462-4098-abf1-0cbb4ae8f28b-host\") on node \"crc\" DevicePath \"\"" Dec 10 12:57:00 crc kubenswrapper[4689]: I1210 12:57:00.903187 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4367c682-5462-4098-abf1-0cbb4ae8f28b-kube-api-access-tndsg" (OuterVolumeSpecName: "kube-api-access-tndsg") pod "4367c682-5462-4098-abf1-0cbb4ae8f28b" (UID: "4367c682-5462-4098-abf1-0cbb4ae8f28b"). InnerVolumeSpecName "kube-api-access-tndsg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 12:57:00 crc kubenswrapper[4689]: I1210 12:57:00.991649 4689 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tndsg\" (UniqueName: \"kubernetes.io/projected/4367c682-5462-4098-abf1-0cbb4ae8f28b-kube-api-access-tndsg\") on node \"crc\" DevicePath \"\"" Dec 10 12:57:01 crc kubenswrapper[4689]: I1210 12:57:01.683609 4689 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="31a95f9faec65f029996c797be465b8f79839f09a3a5d986e81177605ef1f2fa" Dec 10 12:57:01 crc kubenswrapper[4689]: I1210 12:57:01.683694 4689 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-w2nzt/crc-debug-wrhx8" Dec 10 12:57:02 crc kubenswrapper[4689]: I1210 12:57:02.001182 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-w2nzt/crc-debug-t8xww"] Dec 10 12:57:02 crc kubenswrapper[4689]: E1210 12:57:02.001573 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4367c682-5462-4098-abf1-0cbb4ae8f28b" containerName="container-00" Dec 10 12:57:02 crc kubenswrapper[4689]: I1210 12:57:02.001585 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="4367c682-5462-4098-abf1-0cbb4ae8f28b" containerName="container-00" Dec 10 12:57:02 crc kubenswrapper[4689]: I1210 12:57:02.001784 4689 memory_manager.go:354] "RemoveStaleState removing state" podUID="4367c682-5462-4098-abf1-0cbb4ae8f28b" containerName="container-00" Dec 10 12:57:02 crc kubenswrapper[4689]: I1210 12:57:02.002555 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-w2nzt/crc-debug-t8xww" Dec 10 12:57:02 crc kubenswrapper[4689]: I1210 12:57:02.115393 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cz5rr\" (UniqueName: \"kubernetes.io/projected/10430925-1479-4811-a160-86a0ac30ebc1-kube-api-access-cz5rr\") pod \"crc-debug-t8xww\" (UID: \"10430925-1479-4811-a160-86a0ac30ebc1\") " pod="openshift-must-gather-w2nzt/crc-debug-t8xww" Dec 10 12:57:02 crc kubenswrapper[4689]: I1210 12:57:02.115551 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/10430925-1479-4811-a160-86a0ac30ebc1-host\") pod \"crc-debug-t8xww\" (UID: \"10430925-1479-4811-a160-86a0ac30ebc1\") " pod="openshift-must-gather-w2nzt/crc-debug-t8xww" Dec 10 12:57:02 crc kubenswrapper[4689]: I1210 12:57:02.217314 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cz5rr\" (UniqueName: \"kubernetes.io/projected/10430925-1479-4811-a160-86a0ac30ebc1-kube-api-access-cz5rr\") pod \"crc-debug-t8xww\" (UID: \"10430925-1479-4811-a160-86a0ac30ebc1\") " pod="openshift-must-gather-w2nzt/crc-debug-t8xww" Dec 10 12:57:02 crc kubenswrapper[4689]: I1210 12:57:02.217620 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/10430925-1479-4811-a160-86a0ac30ebc1-host\") pod \"crc-debug-t8xww\" (UID: \"10430925-1479-4811-a160-86a0ac30ebc1\") " pod="openshift-must-gather-w2nzt/crc-debug-t8xww" Dec 10 12:57:02 crc kubenswrapper[4689]: I1210 12:57:02.217727 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/10430925-1479-4811-a160-86a0ac30ebc1-host\") pod \"crc-debug-t8xww\" (UID: \"10430925-1479-4811-a160-86a0ac30ebc1\") " 
pod="openshift-must-gather-w2nzt/crc-debug-t8xww" Dec 10 12:57:02 crc kubenswrapper[4689]: I1210 12:57:02.235723 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cz5rr\" (UniqueName: \"kubernetes.io/projected/10430925-1479-4811-a160-86a0ac30ebc1-kube-api-access-cz5rr\") pod \"crc-debug-t8xww\" (UID: \"10430925-1479-4811-a160-86a0ac30ebc1\") " pod="openshift-must-gather-w2nzt/crc-debug-t8xww" Dec 10 12:57:02 crc kubenswrapper[4689]: I1210 12:57:02.320023 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-w2nzt/crc-debug-t8xww" Dec 10 12:57:02 crc kubenswrapper[4689]: I1210 12:57:02.509616 4689 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4367c682-5462-4098-abf1-0cbb4ae8f28b" path="/var/lib/kubelet/pods/4367c682-5462-4098-abf1-0cbb4ae8f28b/volumes" Dec 10 12:57:02 crc kubenswrapper[4689]: I1210 12:57:02.692751 4689 generic.go:334] "Generic (PLEG): container finished" podID="10430925-1479-4811-a160-86a0ac30ebc1" containerID="681ec276ce0df5f18220e3a6bb45b66f4da782820bc04f0bf8f310694a77f1a3" exitCode=0 Dec 10 12:57:02 crc kubenswrapper[4689]: I1210 12:57:02.692834 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-w2nzt/crc-debug-t8xww" event={"ID":"10430925-1479-4811-a160-86a0ac30ebc1","Type":"ContainerDied","Data":"681ec276ce0df5f18220e3a6bb45b66f4da782820bc04f0bf8f310694a77f1a3"} Dec 10 12:57:02 crc kubenswrapper[4689]: I1210 12:57:02.693344 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-w2nzt/crc-debug-t8xww" event={"ID":"10430925-1479-4811-a160-86a0ac30ebc1","Type":"ContainerStarted","Data":"ff234a65dc3649e7738ef96f97853483c6e419130ac4d9e5c37cbd524081cca8"} Dec 10 12:57:03 crc kubenswrapper[4689]: I1210 12:57:03.130804 4689 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-w2nzt/crc-debug-t8xww"] Dec 10 12:57:03 crc kubenswrapper[4689]: I1210 12:57:03.137076 4689 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-w2nzt/crc-debug-t8xww"] Dec 10 12:57:03 crc kubenswrapper[4689]: I1210 12:57:03.813176 4689 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-w2nzt/crc-debug-t8xww" Dec 10 12:57:03 crc kubenswrapper[4689]: I1210 12:57:03.949583 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/10430925-1479-4811-a160-86a0ac30ebc1-host\") pod \"10430925-1479-4811-a160-86a0ac30ebc1\" (UID: \"10430925-1479-4811-a160-86a0ac30ebc1\") " Dec 10 12:57:03 crc kubenswrapper[4689]: I1210 12:57:03.949684 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cz5rr\" (UniqueName: \"kubernetes.io/projected/10430925-1479-4811-a160-86a0ac30ebc1-kube-api-access-cz5rr\") pod \"10430925-1479-4811-a160-86a0ac30ebc1\" (UID: \"10430925-1479-4811-a160-86a0ac30ebc1\") " Dec 10 12:57:03 crc kubenswrapper[4689]: I1210 12:57:03.949692 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/10430925-1479-4811-a160-86a0ac30ebc1-host" (OuterVolumeSpecName: "host") pod "10430925-1479-4811-a160-86a0ac30ebc1" (UID: "10430925-1479-4811-a160-86a0ac30ebc1"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 10 12:57:03 crc kubenswrapper[4689]: I1210 12:57:03.950315 4689 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/10430925-1479-4811-a160-86a0ac30ebc1-host\") on node \"crc\" DevicePath \"\"" Dec 10 12:57:03 crc kubenswrapper[4689]: I1210 12:57:03.958431 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/10430925-1479-4811-a160-86a0ac30ebc1-kube-api-access-cz5rr" (OuterVolumeSpecName: "kube-api-access-cz5rr") pod "10430925-1479-4811-a160-86a0ac30ebc1" (UID: "10430925-1479-4811-a160-86a0ac30ebc1"). InnerVolumeSpecName "kube-api-access-cz5rr". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 12:57:04 crc kubenswrapper[4689]: I1210 12:57:04.052542 4689 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cz5rr\" (UniqueName: \"kubernetes.io/projected/10430925-1479-4811-a160-86a0ac30ebc1-kube-api-access-cz5rr\") on node \"crc\" DevicePath \"\"" Dec 10 12:57:04 crc kubenswrapper[4689]: I1210 12:57:04.313866 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-w2nzt/crc-debug-ltngs"] Dec 10 12:57:04 crc kubenswrapper[4689]: E1210 12:57:04.314309 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="10430925-1479-4811-a160-86a0ac30ebc1" containerName="container-00" Dec 10 12:57:04 crc kubenswrapper[4689]: I1210 12:57:04.314322 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="10430925-1479-4811-a160-86a0ac30ebc1" containerName="container-00" Dec 10 12:57:04 crc kubenswrapper[4689]: I1210 12:57:04.314528 4689 memory_manager.go:354] "RemoveStaleState removing state" podUID="10430925-1479-4811-a160-86a0ac30ebc1" containerName="container-00" Dec 10 12:57:04 crc kubenswrapper[4689]: I1210 12:57:04.315270 4689 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-w2nzt/crc-debug-ltngs" Dec 10 12:57:04 crc kubenswrapper[4689]: I1210 12:57:04.460069 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/07a75a15-9c46-4004-9b19-d0e0d885f26d-host\") pod \"crc-debug-ltngs\" (UID: \"07a75a15-9c46-4004-9b19-d0e0d885f26d\") " pod="openshift-must-gather-w2nzt/crc-debug-ltngs" Dec 10 12:57:04 crc kubenswrapper[4689]: I1210 12:57:04.460498 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vm668\" (UniqueName: \"kubernetes.io/projected/07a75a15-9c46-4004-9b19-d0e0d885f26d-kube-api-access-vm668\") pod \"crc-debug-ltngs\" (UID: \"07a75a15-9c46-4004-9b19-d0e0d885f26d\") " pod="openshift-must-gather-w2nzt/crc-debug-ltngs" Dec 10 12:57:04 crc kubenswrapper[4689]: I1210 12:57:04.508056 4689 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="10430925-1479-4811-a160-86a0ac30ebc1" path="/var/lib/kubelet/pods/10430925-1479-4811-a160-86a0ac30ebc1/volumes" Dec 10 12:57:04 crc kubenswrapper[4689]: I1210 12:57:04.562196 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vm668\" (UniqueName: \"kubernetes.io/projected/07a75a15-9c46-4004-9b19-d0e0d885f26d-kube-api-access-vm668\") pod \"crc-debug-ltngs\" (UID: \"07a75a15-9c46-4004-9b19-d0e0d885f26d\") " pod="openshift-must-gather-w2nzt/crc-debug-ltngs" Dec 10 12:57:04 crc kubenswrapper[4689]: I1210 12:57:04.562307 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/07a75a15-9c46-4004-9b19-d0e0d885f26d-host\") pod \"crc-debug-ltngs\" (UID: \"07a75a15-9c46-4004-9b19-d0e0d885f26d\") " pod="openshift-must-gather-w2nzt/crc-debug-ltngs" Dec 10 12:57:04 crc kubenswrapper[4689]: I1210 12:57:04.562469 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/07a75a15-9c46-4004-9b19-d0e0d885f26d-host\") pod \"crc-debug-ltngs\" (UID: \"07a75a15-9c46-4004-9b19-d0e0d885f26d\") " pod="openshift-must-gather-w2nzt/crc-debug-ltngs" Dec 10 12:57:04 crc kubenswrapper[4689]: I1210 12:57:04.582358 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vm668\" (UniqueName: \"kubernetes.io/projected/07a75a15-9c46-4004-9b19-d0e0d885f26d-kube-api-access-vm668\") pod \"crc-debug-ltngs\" (UID: \"07a75a15-9c46-4004-9b19-d0e0d885f26d\") " pod="openshift-must-gather-w2nzt/crc-debug-ltngs" Dec 10 12:57:04 crc kubenswrapper[4689]: I1210 12:57:04.632435 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-w2nzt/crc-debug-ltngs" Dec 10 12:57:04 crc kubenswrapper[4689]: W1210 12:57:04.668413 4689 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod07a75a15_9c46_4004_9b19_d0e0d885f26d.slice/crio-891f48257a317c1d3031c0b41aca5ba5a70c87b7f468b35bb351bb13f1a9c538 WatchSource:0}: Error finding container 891f48257a317c1d3031c0b41aca5ba5a70c87b7f468b35bb351bb13f1a9c538: Status 404 returned error can't find the container with id 891f48257a317c1d3031c0b41aca5ba5a70c87b7f468b35bb351bb13f1a9c538 Dec 10 12:57:04 crc kubenswrapper[4689]: I1210 12:57:04.709736 4689 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-w2nzt/crc-debug-t8xww" Dec 10 12:57:04 crc kubenswrapper[4689]: I1210 12:57:04.709755 4689 scope.go:117] "RemoveContainer" containerID="681ec276ce0df5f18220e3a6bb45b66f4da782820bc04f0bf8f310694a77f1a3" Dec 10 12:57:04 crc kubenswrapper[4689]: I1210 12:57:04.713402 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-w2nzt/crc-debug-ltngs" event={"ID":"07a75a15-9c46-4004-9b19-d0e0d885f26d","Type":"ContainerStarted","Data":"891f48257a317c1d3031c0b41aca5ba5a70c87b7f468b35bb351bb13f1a9c538"} Dec 10 12:57:05 crc kubenswrapper[4689]: I1210 12:57:05.725761 4689 generic.go:334] "Generic (PLEG): container finished" podID="07a75a15-9c46-4004-9b19-d0e0d885f26d" containerID="e1af888423e86094103ae6aa3a90a2c7c64a97ba714cd0d39e7e6a50c7785004" exitCode=0 Dec 10 12:57:05 crc kubenswrapper[4689]: I1210 12:57:05.725849 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-w2nzt/crc-debug-ltngs" event={"ID":"07a75a15-9c46-4004-9b19-d0e0d885f26d","Type":"ContainerDied","Data":"e1af888423e86094103ae6aa3a90a2c7c64a97ba714cd0d39e7e6a50c7785004"} Dec 10 12:57:05 crc kubenswrapper[4689]: I1210 12:57:05.772563 4689 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-w2nzt/crc-debug-ltngs"] Dec 10 12:57:05 crc kubenswrapper[4689]: I1210 12:57:05.781221 4689 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-w2nzt/crc-debug-ltngs"] Dec 10 12:57:06 crc kubenswrapper[4689]: I1210 12:57:06.833122 4689 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-w2nzt/crc-debug-ltngs" Dec 10 12:57:06 crc kubenswrapper[4689]: I1210 12:57:06.905161 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vm668\" (UniqueName: \"kubernetes.io/projected/07a75a15-9c46-4004-9b19-d0e0d885f26d-kube-api-access-vm668\") pod \"07a75a15-9c46-4004-9b19-d0e0d885f26d\" (UID: \"07a75a15-9c46-4004-9b19-d0e0d885f26d\") " Dec 10 12:57:06 crc kubenswrapper[4689]: I1210 12:57:06.905364 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/07a75a15-9c46-4004-9b19-d0e0d885f26d-host\") pod \"07a75a15-9c46-4004-9b19-d0e0d885f26d\" (UID: \"07a75a15-9c46-4004-9b19-d0e0d885f26d\") " Dec 10 12:57:06 crc kubenswrapper[4689]: I1210 12:57:06.905647 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/07a75a15-9c46-4004-9b19-d0e0d885f26d-host" (OuterVolumeSpecName: "host") pod "07a75a15-9c46-4004-9b19-d0e0d885f26d" (UID: "07a75a15-9c46-4004-9b19-d0e0d885f26d"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 10 12:57:06 crc kubenswrapper[4689]: I1210 12:57:06.905958 4689 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/07a75a15-9c46-4004-9b19-d0e0d885f26d-host\") on node \"crc\" DevicePath \"\"" Dec 10 12:57:06 crc kubenswrapper[4689]: I1210 12:57:06.919250 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/07a75a15-9c46-4004-9b19-d0e0d885f26d-kube-api-access-vm668" (OuterVolumeSpecName: "kube-api-access-vm668") pod "07a75a15-9c46-4004-9b19-d0e0d885f26d" (UID: "07a75a15-9c46-4004-9b19-d0e0d885f26d"). InnerVolumeSpecName "kube-api-access-vm668". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 12:57:07 crc kubenswrapper[4689]: I1210 12:57:07.008394 4689 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vm668\" (UniqueName: \"kubernetes.io/projected/07a75a15-9c46-4004-9b19-d0e0d885f26d-kube-api-access-vm668\") on node \"crc\" DevicePath \"\"" Dec 10 12:57:07 crc kubenswrapper[4689]: I1210 12:57:07.498104 4689 scope.go:117] "RemoveContainer" containerID="ce6f9dfd50adadcec59ac06d9a711336ee8675f152d7050ec5daf10842869d64" Dec 10 12:57:07 crc kubenswrapper[4689]: E1210 12:57:07.498352 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-db6zk_openshift-machine-config-operator(a41ebdcd-910f-4669-992d-296e1a92162b)\"" pod="openshift-machine-config-operator/machine-config-daemon-db6zk" podUID="a41ebdcd-910f-4669-992d-296e1a92162b" Dec 10 12:57:07 crc kubenswrapper[4689]: I1210 12:57:07.743710 4689 scope.go:117] "RemoveContainer" containerID="e1af888423e86094103ae6aa3a90a2c7c64a97ba714cd0d39e7e6a50c7785004" Dec 10 12:57:07 crc kubenswrapper[4689]: I1210 12:57:07.743740 4689 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-w2nzt/crc-debug-ltngs" Dec 10 12:57:08 crc kubenswrapper[4689]: I1210 12:57:08.508211 4689 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="07a75a15-9c46-4004-9b19-d0e0d885f26d" path="/var/lib/kubelet/pods/07a75a15-9c46-4004-9b19-d0e0d885f26d/volumes" Dec 10 12:57:20 crc kubenswrapper[4689]: I1210 12:57:20.498729 4689 scope.go:117] "RemoveContainer" containerID="ce6f9dfd50adadcec59ac06d9a711336ee8675f152d7050ec5daf10842869d64" Dec 10 12:57:20 crc kubenswrapper[4689]: E1210 12:57:20.499428 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-db6zk_openshift-machine-config-operator(a41ebdcd-910f-4669-992d-296e1a92162b)\"" pod="openshift-machine-config-operator/machine-config-daemon-db6zk" podUID="a41ebdcd-910f-4669-992d-296e1a92162b" Dec 10 12:57:24 crc kubenswrapper[4689]: I1210 12:57:24.278403 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-74c576f5cb-ljzfq_159bb08c-9220-4d78-9b24-4b8293139a23/barbican-api/0.log" Dec 10 12:57:24 crc kubenswrapper[4689]: I1210 12:57:24.372873 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-74c576f5cb-ljzfq_159bb08c-9220-4d78-9b24-4b8293139a23/barbican-api-log/0.log" Dec 10 12:57:24 crc kubenswrapper[4689]: I1210 12:57:24.440059 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-7869fbbf6d-5mmw9_0ecdef57-40ee-46b4-a739-3f8fd2354018/barbican-keystone-listener/0.log" Dec 10 12:57:24 crc kubenswrapper[4689]: I1210 12:57:24.569773 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-7869fbbf6d-5mmw9_0ecdef57-40ee-46b4-a739-3f8fd2354018/barbican-keystone-listener-log/0.log" Dec 10 12:57:24 crc kubenswrapper[4689]: I1210 12:57:24.643402 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-6b8cb45d75-89dpw_5a9faa39-da67-436d-884a-06d93286633e/barbican-worker/0.log" Dec 10 12:57:24 crc kubenswrapper[4689]: I1210 
12:57:24.672258 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-6b8cb45d75-89dpw_5a9faa39-da67-436d-884a-06d93286633e/barbican-worker-log/0.log" Dec 10 12:57:24 crc kubenswrapper[4689]: I1210 12:57:24.806530 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_b9c7f257-7dbf-4a84-8b3f-060db6f93454/ceilometer-central-agent/0.log" Dec 10 12:57:24 crc kubenswrapper[4689]: I1210 12:57:24.854032 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_b9c7f257-7dbf-4a84-8b3f-060db6f93454/proxy-httpd/0.log" Dec 10 12:57:24 crc kubenswrapper[4689]: I1210 12:57:24.870031 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_b9c7f257-7dbf-4a84-8b3f-060db6f93454/ceilometer-notification-agent/0.log" Dec 10 12:57:25 crc kubenswrapper[4689]: I1210 12:57:25.033185 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_3033a7b9-9374-47c4-89a2-188204ccd941/cinder-api-log/0.log" Dec 10 12:57:25 crc kubenswrapper[4689]: I1210 12:57:25.035553 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_b9c7f257-7dbf-4a84-8b3f-060db6f93454/sg-core/0.log" Dec 10 12:57:25 crc kubenswrapper[4689]: I1210 12:57:25.079260 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_3033a7b9-9374-47c4-89a2-188204ccd941/cinder-api/0.log" Dec 10 12:57:25 crc kubenswrapper[4689]: I1210 12:57:25.227299 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_8e8f74b8-b74a-42eb-97ec-28680f9999a4/probe/0.log" Dec 10 12:57:25 crc kubenswrapper[4689]: I1210 12:57:25.282549 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_8e8f74b8-b74a-42eb-97ec-28680f9999a4/cinder-scheduler/0.log" Dec 10 12:57:25 crc kubenswrapper[4689]: I1210 12:57:25.441040 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-59cf4bdb65-m75sl_e1683950-c036-44c9-9ad3-5e91fee6c3ba/init/0.log" Dec 10 12:57:25 crc kubenswrapper[4689]: I1210 12:57:25.568138 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-59cf4bdb65-m75sl_e1683950-c036-44c9-9ad3-5e91fee6c3ba/init/0.log" Dec 10 12:57:25 crc kubenswrapper[4689]: I1210 12:57:25.632600 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_ce240e41-0473-47c3-8349-854caa2baad2/glance-httpd/0.log" Dec 10 12:57:25 crc kubenswrapper[4689]: I1210 12:57:25.639083 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-59cf4bdb65-m75sl_e1683950-c036-44c9-9ad3-5e91fee6c3ba/dnsmasq-dns/0.log" Dec 10 12:57:25 crc kubenswrapper[4689]: I1210 12:57:25.751240 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_ce240e41-0473-47c3-8349-854caa2baad2/glance-log/0.log" Dec 10 12:57:25 crc kubenswrapper[4689]: I1210 12:57:25.805056 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_3783c348-e04a-4246-ae21-d47d6bae3467/glance-httpd/0.log" Dec 10 12:57:25 crc kubenswrapper[4689]: I1210 12:57:25.876551 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_3783c348-e04a-4246-ae21-d47d6bae3467/glance-log/0.log" Dec 10 12:57:26 crc kubenswrapper[4689]: I1210 12:57:26.013440 4689 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_ironic-conductor-0_23e46f1d-5919-4baa-aeef-1364104b63fb/init/0.log" Dec 10 12:57:26 crc kubenswrapper[4689]: I1210 12:57:26.184747 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ironic-conductor-0_23e46f1d-5919-4baa-aeef-1364104b63fb/ironic-python-agent-init/0.log" Dec 10 12:57:26 crc kubenswrapper[4689]: I1210 12:57:26.227781 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ironic-conductor-0_23e46f1d-5919-4baa-aeef-1364104b63fb/init/0.log" Dec 10 12:57:26 crc kubenswrapper[4689]: I1210 12:57:26.230128 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ironic-conductor-0_23e46f1d-5919-4baa-aeef-1364104b63fb/ironic-python-agent-init/0.log" Dec 10 12:57:26 crc kubenswrapper[4689]: I1210 12:57:26.482549 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ironic-conductor-0_23e46f1d-5919-4baa-aeef-1364104b63fb/ironic-python-agent-init/0.log" Dec 10 12:57:26 crc kubenswrapper[4689]: I1210 12:57:26.498496 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ironic-conductor-0_23e46f1d-5919-4baa-aeef-1364104b63fb/init/0.log" Dec 10 12:57:26 crc kubenswrapper[4689]: I1210 12:57:26.864387 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ironic-conductor-0_23e46f1d-5919-4baa-aeef-1364104b63fb/init/0.log" Dec 10 12:57:27 crc kubenswrapper[4689]: I1210 12:57:27.087586 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ironic-conductor-0_23e46f1d-5919-4baa-aeef-1364104b63fb/ironic-python-agent-init/0.log" Dec 10 12:57:27 crc kubenswrapper[4689]: I1210 12:57:27.300662 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ironic-conductor-0_23e46f1d-5919-4baa-aeef-1364104b63fb/pxe-init/0.log" Dec 10 12:57:27 crc kubenswrapper[4689]: I1210 12:57:27.496489 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ironic-conductor-0_23e46f1d-5919-4baa-aeef-1364104b63fb/httpboot/0.log" Dec 10 12:57:27 crc kubenswrapper[4689]: I1210 12:57:27.653511 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ironic-conductor-0_23e46f1d-5919-4baa-aeef-1364104b63fb/pxe-init/0.log" Dec 10 12:57:27 crc kubenswrapper[4689]: I1210 12:57:27.763014 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ironic-conductor-0_23e46f1d-5919-4baa-aeef-1364104b63fb/ironic-conductor/0.log" Dec 10 12:57:27 crc kubenswrapper[4689]: I1210 12:57:27.830489 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ironic-conductor-0_23e46f1d-5919-4baa-aeef-1364104b63fb/pxe-init/0.log" Dec 10 12:57:27 crc kubenswrapper[4689]: I1210 12:57:27.869074 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ironic-conductor-0_23e46f1d-5919-4baa-aeef-1364104b63fb/ramdisk-logs/0.log" Dec 10 12:57:28 crc kubenswrapper[4689]: I1210 12:57:28.101916 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ironic-db-sync-jz2g4_574d5244-06f3-49f9-b8b8-93bd57d4fc35/init/0.log" Dec 10 12:57:28 crc kubenswrapper[4689]: I1210 12:57:28.206222 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ironic-conductor-0_23e46f1d-5919-4baa-aeef-1364104b63fb/pxe-init/0.log" Dec 10 12:57:28 crc kubenswrapper[4689]: I1210 12:57:28.250945 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ironic-db-sync-jz2g4_574d5244-06f3-49f9-b8b8-93bd57d4fc35/init/0.log" Dec 10 12:57:28 crc kubenswrapper[4689]: I1210 
12:57:28.261746 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ironic-db-sync-jz2g4_574d5244-06f3-49f9-b8b8-93bd57d4fc35/ironic-db-sync/0.log" Dec 10 12:57:28 crc kubenswrapper[4689]: I1210 12:57:28.272715 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ironic-dbcd8ff8b-j52fs_8c984e40-d3ee-426e-ab51-c576bc699e11/init/0.log" Dec 10 12:57:28 crc kubenswrapper[4689]: I1210 12:57:28.492425 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ironic-dbcd8ff8b-j52fs_8c984e40-d3ee-426e-ab51-c576bc699e11/ironic-api-log/0.log" Dec 10 12:57:28 crc kubenswrapper[4689]: I1210 12:57:28.509963 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ironic-dbcd8ff8b-j52fs_8c984e40-d3ee-426e-ab51-c576bc699e11/init/0.log" Dec 10 12:57:28 crc kubenswrapper[4689]: I1210 12:57:28.592422 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ironic-inspector-0_5a770426-b384-4fc7-acc0-fa42ff536a9b/ironic-python-agent-init/0.log" Dec 10 12:57:28 crc kubenswrapper[4689]: I1210 12:57:28.592957 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ironic-dbcd8ff8b-j52fs_8c984e40-d3ee-426e-ab51-c576bc699e11/ironic-api/0.log" Dec 10 12:57:28 crc kubenswrapper[4689]: I1210 12:57:28.714130 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ironic-inspector-0_5a770426-b384-4fc7-acc0-fa42ff536a9b/inspector-pxe-init/0.log" Dec 10 12:57:28 crc kubenswrapper[4689]: I1210 12:57:28.729320 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ironic-inspector-0_5a770426-b384-4fc7-acc0-fa42ff536a9b/ironic-python-agent-init/0.log" Dec 10 12:57:28 crc kubenswrapper[4689]: I1210 12:57:28.734988 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ironic-inspector-0_5a770426-b384-4fc7-acc0-fa42ff536a9b/inspector-pxe-init/0.log" Dec 10 12:57:28 crc kubenswrapper[4689]: I1210 12:57:28.922798 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ironic-inspector-0_5a770426-b384-4fc7-acc0-fa42ff536a9b/ironic-python-agent-init/0.log" Dec 10 12:57:28 crc kubenswrapper[4689]: I1210 12:57:28.929501 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ironic-inspector-0_5a770426-b384-4fc7-acc0-fa42ff536a9b/inspector-pxe-init/0.log" Dec 10 12:57:28 crc kubenswrapper[4689]: I1210 12:57:28.934671 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ironic-inspector-0_5a770426-b384-4fc7-acc0-fa42ff536a9b/inspector-httpboot/0.log" Dec 10 12:57:28 crc kubenswrapper[4689]: I1210 12:57:28.989846 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ironic-inspector-0_5a770426-b384-4fc7-acc0-fa42ff536a9b/ironic-inspector/2.log" Dec 10 12:57:28 crc kubenswrapper[4689]: I1210 12:57:28.991122 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ironic-inspector-0_5a770426-b384-4fc7-acc0-fa42ff536a9b/ironic-inspector/1.log" Dec 10 12:57:29 crc kubenswrapper[4689]: I1210 12:57:29.142035 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ironic-inspector-0_5a770426-b384-4fc7-acc0-fa42ff536a9b/ironic-inspector-httpd/0.log" Dec 10 12:57:29 crc kubenswrapper[4689]: I1210 12:57:29.165118 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ironic-inspector-0_5a770426-b384-4fc7-acc0-fa42ff536a9b/ramdisk-logs/0.log" Dec 10 12:57:29 crc kubenswrapper[4689]: I1210 12:57:29.198894 4689 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_ironic-inspector-db-sync-9zldg_0ebb276c-ffb4-490e-bf4b-c55c0c49aa43/ironic-inspector-db-sync/0.log" Dec 10 12:57:29 crc kubenswrapper[4689]: I1210 12:57:29.341903 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ironic-neutron-agent-6f4566d7bf-hkx2g_234f8267-1974-4f9e-9d13-8a239ff2660c/ironic-neutron-agent/1.log" Dec 10 12:57:29 crc kubenswrapper[4689]: I1210 12:57:29.342909 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ironic-neutron-agent-6f4566d7bf-hkx2g_234f8267-1974-4f9e-9d13-8a239ff2660c/ironic-neutron-agent/2.log" Dec 10 12:57:29 crc kubenswrapper[4689]: I1210 12:57:29.532408 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_kube-state-metrics-0_9e13ec23-1267-498e-9d74-fcfc8aee69b6/kube-state-metrics/0.log" Dec 10 12:57:29 crc kubenswrapper[4689]: I1210 12:57:29.565643 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-85bc68c5bb-jxqf7_5ab8f9bd-1d66-4142-afe8-1cfce8e5f736/keystone-api/0.log" Dec 10 12:57:29 crc kubenswrapper[4689]: I1210 12:57:29.833784 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-67dc569cfc-xmxfj_84537f57-e77b-4147-99e4-d22fa43780cb/neutron-httpd/0.log" Dec 10 12:57:29 crc kubenswrapper[4689]: I1210 12:57:29.948473 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-67dc569cfc-xmxfj_84537f57-e77b-4147-99e4-d22fa43780cb/neutron-api/0.log" Dec 10 12:57:30 crc kubenswrapper[4689]: I1210 12:57:30.165944 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_f9ec0888-b017-49bb-b11e-feb543a1db7e/nova-api-log/0.log" Dec 10 12:57:30 crc kubenswrapper[4689]: I1210 12:57:30.274776 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_f9ec0888-b017-49bb-b11e-feb543a1db7e/nova-api-api/0.log" Dec 10 12:57:30 crc kubenswrapper[4689]: I1210 12:57:30.383152 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell0-conductor-0_3e65d1ea-f92e-4a74-944d-f9fcc1f1ae34/nova-cell0-conductor-conductor/0.log" Dec 10 12:57:30 crc kubenswrapper[4689]: I1210 12:57:30.516645 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-conductor-0_ab709567-2151-49a7-b199-aeca8ee0ae19/nova-cell1-conductor-conductor/0.log" Dec 10 12:57:30 crc kubenswrapper[4689]: I1210 12:57:30.647182 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-novncproxy-0_c9d9a204-d8fb-4bb8-b864-14178e550382/nova-cell1-novncproxy-novncproxy/0.log" Dec 10 12:57:30 crc kubenswrapper[4689]: I1210 12:57:30.828230 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_11d4f9aa-dff6-4df0-9f6e-ead4097006a0/nova-metadata-log/0.log" Dec 10 12:57:31 crc kubenswrapper[4689]: I1210 12:57:31.066325 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_49777225-4829-4cb0-bdd3-3e29ee4f0518/mysql-bootstrap/0.log" Dec 10 12:57:31 crc kubenswrapper[4689]: I1210 12:57:31.124678 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-scheduler-0_11566d27-371a-412a-a9f0-b147b642f173/nova-scheduler-scheduler/0.log" Dec 10 12:57:31 crc kubenswrapper[4689]: I1210 12:57:31.276832 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_49777225-4829-4cb0-bdd3-3e29ee4f0518/mysql-bootstrap/0.log" Dec 10 12:57:31 crc kubenswrapper[4689]: I1210 12:57:31.358295 
4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_49777225-4829-4cb0-bdd3-3e29ee4f0518/galera/0.log" Dec 10 12:57:31 crc kubenswrapper[4689]: I1210 12:57:31.383044 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_11d4f9aa-dff6-4df0-9f6e-ead4097006a0/nova-metadata-metadata/0.log" Dec 10 12:57:31 crc kubenswrapper[4689]: I1210 12:57:31.516547 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_5de970c2-b559-4b8a-86f2-85b07c2292b1/mysql-bootstrap/0.log" Dec 10 12:57:31 crc kubenswrapper[4689]: I1210 12:57:31.747562 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_5de970c2-b559-4b8a-86f2-85b07c2292b1/mysql-bootstrap/0.log" Dec 10 12:57:31 crc kubenswrapper[4689]: I1210 12:57:31.755504 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstackclient_9ba6aa57-fcd5-4e81-aeec-18115df06abb/openstackclient/0.log" Dec 10 12:57:31 crc kubenswrapper[4689]: I1210 12:57:31.777111 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_5de970c2-b559-4b8a-86f2-85b07c2292b1/galera/0.log" Dec 10 12:57:31 crc kubenswrapper[4689]: I1210 12:57:31.947234 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-jpztp_58cb894b-f745-4d93-8925-193c6ff871a6/ovn-controller/0.log" Dec 10 12:57:32 crc kubenswrapper[4689]: I1210 12:57:32.087347 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-q4rdb_ed1dfb51-596d-4779-9d07-37b566b30adf/openstack-network-exporter/0.log" Dec 10 12:57:32 crc kubenswrapper[4689]: I1210 12:57:32.241487 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-29j5j_a1966558-be4f-4607-a746-739845ac6c46/ovsdb-server-init/0.log" Dec 10 12:57:32 crc kubenswrapper[4689]: I1210 12:57:32.410744 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-29j5j_a1966558-be4f-4607-a746-739845ac6c46/ovsdb-server/0.log" Dec 10 12:57:32 crc kubenswrapper[4689]: I1210 12:57:32.417228 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-29j5j_a1966558-be4f-4607-a746-739845ac6c46/ovs-vswitchd/0.log" Dec 10 12:57:32 crc kubenswrapper[4689]: I1210 12:57:32.446870 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-29j5j_a1966558-be4f-4607-a746-739845ac6c46/ovsdb-server-init/0.log" Dec 10 12:57:32 crc kubenswrapper[4689]: I1210 12:57:32.790707 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_1d70ed4a-58eb-456e-bd2a-9b3c199a94bf/ovn-northd/0.log" Dec 10 12:57:32 crc kubenswrapper[4689]: I1210 12:57:32.800700 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_1d70ed4a-58eb-456e-bd2a-9b3c199a94bf/openstack-network-exporter/0.log" Dec 10 12:57:32 crc kubenswrapper[4689]: I1210 12:57:32.914356 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_9c534934-f8a1-4029-8135-b190d180128a/openstack-network-exporter/0.log" Dec 10 12:57:33 crc kubenswrapper[4689]: I1210 12:57:33.005248 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_9c534934-f8a1-4029-8135-b190d180128a/ovsdbserver-nb/0.log" Dec 10 12:57:33 crc kubenswrapper[4689]: I1210 12:57:33.050648 4689 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_ovsdbserver-sb-0_c18b8eee-7b2e-494c-b4e4-7896aa80a1ed/openstack-network-exporter/0.log" Dec 10 12:57:33 crc kubenswrapper[4689]: I1210 12:57:33.133798 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_c18b8eee-7b2e-494c-b4e4-7896aa80a1ed/ovsdbserver-sb/0.log" Dec 10 12:57:33 crc kubenswrapper[4689]: I1210 12:57:33.289931 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-b85ffb7d4-zq54p_6708e3ec-a080-4f8d-a9c5-2821ea678717/placement-log/0.log" Dec 10 12:57:33 crc kubenswrapper[4689]: I1210 12:57:33.319403 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-b85ffb7d4-zq54p_6708e3ec-a080-4f8d-a9c5-2821ea678717/placement-api/0.log" Dec 10 12:57:33 crc kubenswrapper[4689]: I1210 12:57:33.461487 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_400622a6-9529-4712-85a9-03f48e2b1819/setup-container/0.log" Dec 10 12:57:33 crc kubenswrapper[4689]: I1210 12:57:33.708225 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_148aec72-272a-46b6-a75f-46dc2b680101/setup-container/0.log" Dec 10 12:57:33 crc kubenswrapper[4689]: I1210 12:57:33.749753 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_400622a6-9529-4712-85a9-03f48e2b1819/rabbitmq/0.log" Dec 10 12:57:33 crc kubenswrapper[4689]: I1210 12:57:33.759282 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_400622a6-9529-4712-85a9-03f48e2b1819/setup-container/0.log" Dec 10 12:57:33 crc kubenswrapper[4689]: I1210 12:57:33.932731 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_148aec72-272a-46b6-a75f-46dc2b680101/rabbitmq/0.log" Dec 10 12:57:33 crc kubenswrapper[4689]: I1210 12:57:33.956452 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_148aec72-272a-46b6-a75f-46dc2b680101/setup-container/0.log" Dec 10 12:57:34 crc kubenswrapper[4689]: I1210 12:57:34.135270 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-799574f579-slqvw_7a378a34-ca1b-41cb-85d9-97d124f4f6dc/proxy-httpd/0.log" Dec 10 12:57:34 crc kubenswrapper[4689]: I1210 12:57:34.170899 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-799574f579-slqvw_7a378a34-ca1b-41cb-85d9-97d124f4f6dc/proxy-server/0.log" Dec 10 12:57:34 crc kubenswrapper[4689]: I1210 12:57:34.242232 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-ring-rebalance-2mlkq_a4b24dc3-c04f-46a0-a0c0-0240d3d767cd/swift-ring-rebalance/0.log" Dec 10 12:57:34 crc kubenswrapper[4689]: I1210 12:57:34.344734 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_c4b54476-e438-46d8-b234-c8f661f5c26f/account-auditor/0.log" Dec 10 12:57:34 crc kubenswrapper[4689]: I1210 12:57:34.424511 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_c4b54476-e438-46d8-b234-c8f661f5c26f/account-reaper/0.log" Dec 10 12:57:34 crc kubenswrapper[4689]: I1210 12:57:34.450757 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_c4b54476-e438-46d8-b234-c8f661f5c26f/account-replicator/0.log" Dec 10 12:57:34 crc kubenswrapper[4689]: I1210 12:57:34.587476 4689 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_swift-storage-0_c4b54476-e438-46d8-b234-c8f661f5c26f/account-server/0.log" Dec 10 12:57:34 crc kubenswrapper[4689]: I1210 12:57:34.588059 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_c4b54476-e438-46d8-b234-c8f661f5c26f/container-auditor/0.log" Dec 10 12:57:34 crc kubenswrapper[4689]: I1210 12:57:34.632060 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_c4b54476-e438-46d8-b234-c8f661f5c26f/container-replicator/0.log" Dec 10 12:57:34 crc kubenswrapper[4689]: I1210 12:57:34.693839 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_c4b54476-e438-46d8-b234-c8f661f5c26f/container-server/0.log" Dec 10 12:57:34 crc kubenswrapper[4689]: I1210 12:57:34.758858 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_c4b54476-e438-46d8-b234-c8f661f5c26f/container-updater/0.log" Dec 10 12:57:34 crc kubenswrapper[4689]: I1210 12:57:34.795475 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_c4b54476-e438-46d8-b234-c8f661f5c26f/object-expirer/0.log" Dec 10 12:57:34 crc kubenswrapper[4689]: I1210 12:57:34.815561 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_c4b54476-e438-46d8-b234-c8f661f5c26f/object-auditor/0.log" Dec 10 12:57:34 crc kubenswrapper[4689]: I1210 12:57:34.905228 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_c4b54476-e438-46d8-b234-c8f661f5c26f/object-replicator/0.log" Dec 10 12:57:34 crc kubenswrapper[4689]: I1210 12:57:34.935654 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_c4b54476-e438-46d8-b234-c8f661f5c26f/object-server/0.log" Dec 10 12:57:35 crc kubenswrapper[4689]: I1210 12:57:35.010536 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_c4b54476-e438-46d8-b234-c8f661f5c26f/object-updater/0.log" Dec 10 12:57:35 crc kubenswrapper[4689]: I1210 12:57:35.044959 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_c4b54476-e438-46d8-b234-c8f661f5c26f/rsync/0.log" Dec 10 12:57:35 crc kubenswrapper[4689]: I1210 12:57:35.138781 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_c4b54476-e438-46d8-b234-c8f661f5c26f/swift-recon-cron/0.log" Dec 10 12:57:35 crc kubenswrapper[4689]: I1210 12:57:35.497723 4689 scope.go:117] "RemoveContainer" containerID="ce6f9dfd50adadcec59ac06d9a711336ee8675f152d7050ec5daf10842869d64" Dec 10 12:57:35 crc kubenswrapper[4689]: E1210 12:57:35.498063 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-db6zk_openshift-machine-config-operator(a41ebdcd-910f-4669-992d-296e1a92162b)\"" pod="openshift-machine-config-operator/machine-config-daemon-db6zk" podUID="a41ebdcd-910f-4669-992d-296e1a92162b" Dec 10 12:57:41 crc kubenswrapper[4689]: I1210 12:57:41.314302 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_memcached-0_e5d9caa2-f209-4a7f-a0d8-353aa111c264/memcached/0.log" Dec 10 12:57:46 crc kubenswrapper[4689]: I1210 12:57:46.345079 4689 scope.go:117] "RemoveContainer" containerID="424d21c7c5672b66d9597c7f39b33ee19042cd7c1269b362013f3648bdaf9a57" Dec 10 12:57:46 crc kubenswrapper[4689]: I1210 
12:57:46.366993 4689 scope.go:117] "RemoveContainer" containerID="f9dcf1cf0201f7b4441827bd0bbb80100dd938efcd6031957e3545e37e698d38" Dec 10 12:57:46 crc kubenswrapper[4689]: I1210 12:57:46.427879 4689 scope.go:117] "RemoveContainer" containerID="da5b94d19ba260b791a1392165eac75ee4849bad8c873b3745b127fea7f8a08d" Dec 10 12:57:48 crc kubenswrapper[4689]: I1210 12:57:48.498283 4689 scope.go:117] "RemoveContainer" containerID="ce6f9dfd50adadcec59ac06d9a711336ee8675f152d7050ec5daf10842869d64" Dec 10 12:57:49 crc kubenswrapper[4689]: I1210 12:57:49.110193 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-db6zk" event={"ID":"a41ebdcd-910f-4669-992d-296e1a92162b","Type":"ContainerStarted","Data":"601abc48853dbe73a4d877b8a15f4ad5bda2a51a5faa8c0ccd6d9050ac250293"} Dec 10 12:57:58 crc kubenswrapper[4689]: I1210 12:57:58.135190 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_941ec9b01b7a72127ba64306553d6766ab71ba59f32f9ca01c76932a62hvf45_c5985c32-65e5-453a-a56e-a95411e80db0/util/0.log" Dec 10 12:57:58 crc kubenswrapper[4689]: I1210 12:57:58.264851 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_941ec9b01b7a72127ba64306553d6766ab71ba59f32f9ca01c76932a62hvf45_c5985c32-65e5-453a-a56e-a95411e80db0/util/0.log" Dec 10 12:57:58 crc kubenswrapper[4689]: I1210 12:57:58.270680 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_941ec9b01b7a72127ba64306553d6766ab71ba59f32f9ca01c76932a62hvf45_c5985c32-65e5-453a-a56e-a95411e80db0/pull/0.log" Dec 10 12:57:58 crc kubenswrapper[4689]: I1210 12:57:58.303514 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_941ec9b01b7a72127ba64306553d6766ab71ba59f32f9ca01c76932a62hvf45_c5985c32-65e5-453a-a56e-a95411e80db0/pull/0.log" Dec 10 12:57:58 crc kubenswrapper[4689]: I1210 12:57:58.478997 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_941ec9b01b7a72127ba64306553d6766ab71ba59f32f9ca01c76932a62hvf45_c5985c32-65e5-453a-a56e-a95411e80db0/util/0.log" Dec 10 12:57:58 crc kubenswrapper[4689]: I1210 12:57:58.510508 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_941ec9b01b7a72127ba64306553d6766ab71ba59f32f9ca01c76932a62hvf45_c5985c32-65e5-453a-a56e-a95411e80db0/extract/0.log" Dec 10 12:57:58 crc kubenswrapper[4689]: I1210 12:57:58.511718 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_941ec9b01b7a72127ba64306553d6766ab71ba59f32f9ca01c76932a62hvf45_c5985c32-65e5-453a-a56e-a95411e80db0/pull/0.log" Dec 10 12:57:58 crc kubenswrapper[4689]: I1210 12:57:58.688359 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-7d9dfd778-46w9f_ec9c74bb-c8dc-409b-817c-74963a395df8/kube-rbac-proxy/0.log" Dec 10 12:57:58 crc kubenswrapper[4689]: I1210 12:57:58.730838 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-6c677c69b-4hdrw_e6db2b03-cb28-4161-bae6-6eecce28c871/kube-rbac-proxy/0.log" Dec 10 12:57:58 crc kubenswrapper[4689]: I1210 12:57:58.767321 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-7d9dfd778-46w9f_ec9c74bb-c8dc-409b-817c-74963a395df8/manager/0.log" Dec 10 12:57:58 crc kubenswrapper[4689]: I1210 12:57:58.901568 4689 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-6c677c69b-4hdrw_e6db2b03-cb28-4161-bae6-6eecce28c871/manager/0.log" Dec 10 12:57:58 crc kubenswrapper[4689]: I1210 12:57:58.970018 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-697fb699cf-dtclt_0b413153-162d-46ad-9b9a-b44869127ee7/kube-rbac-proxy/0.log" Dec 10 12:57:58 crc kubenswrapper[4689]: I1210 12:57:58.971448 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-697fb699cf-dtclt_0b413153-162d-46ad-9b9a-b44869127ee7/manager/0.log" Dec 10 12:57:59 crc kubenswrapper[4689]: I1210 12:57:59.118697 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-5697bb5779-tzbn5_670989a1-8b21-473a-8624-862930a7d70b/kube-rbac-proxy/0.log" Dec 10 12:57:59 crc kubenswrapper[4689]: I1210 12:57:59.241645 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-5697bb5779-tzbn5_670989a1-8b21-473a-8624-862930a7d70b/manager/0.log" Dec 10 12:57:59 crc kubenswrapper[4689]: I1210 12:57:59.301284 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-5f64f6f8bb-6fmnw_e9f4ae72-b49e-4144-a49b-72c2bbd1b77c/manager/0.log" Dec 10 12:57:59 crc kubenswrapper[4689]: I1210 12:57:59.318214 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-5f64f6f8bb-6fmnw_e9f4ae72-b49e-4144-a49b-72c2bbd1b77c/kube-rbac-proxy/0.log" Dec 10 12:57:59 crc kubenswrapper[4689]: I1210 12:57:59.416988 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-68c6d99b8f-gk2zb_c70e0866-b017-4945-9e4b-c69eec327948/kube-rbac-proxy/0.log" Dec 10 12:57:59 crc kubenswrapper[4689]: I1210 12:57:59.502486 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-68c6d99b8f-gk2zb_c70e0866-b017-4945-9e4b-c69eec327948/manager/0.log" Dec 10 12:57:59 crc kubenswrapper[4689]: I1210 12:57:59.585083 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-78d48bff9d-lkvwb_13ec50ac-3e46-4615-88f9-070c7a647158/kube-rbac-proxy/0.log" Dec 10 12:57:59 crc kubenswrapper[4689]: I1210 12:57:59.716753 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-69f4484999-lb8n4_e1c471e3-8ecc-4db9-95c0-a4a13e287aba/kube-rbac-proxy/0.log" Dec 10 12:57:59 crc kubenswrapper[4689]: I1210 12:57:59.824696 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-78d48bff9d-lkvwb_13ec50ac-3e46-4615-88f9-070c7a647158/manager/0.log" Dec 10 12:57:59 crc kubenswrapper[4689]: I1210 12:57:59.878630 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-69f4484999-lb8n4_e1c471e3-8ecc-4db9-95c0-a4a13e287aba/manager/0.log" Dec 10 12:57:59 crc kubenswrapper[4689]: I1210 12:57:59.986738 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-59fd99cc6f-gdgcj_c346bbde-239c-4f76-91da-c4116ad0a487/kube-rbac-proxy/0.log" Dec 10 12:58:00 crc kubenswrapper[4689]: I1210 12:58:00.066185 4689 log.go:25] "Finished 
parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-59fd99cc6f-gdgcj_c346bbde-239c-4f76-91da-c4116ad0a487/manager/0.log" Dec 10 12:58:00 crc kubenswrapper[4689]: I1210 12:58:00.135838 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-5b5fd79c9c-n69ks_4dc52122-5456-453a-9d5f-d2fce910bb61/kube-rbac-proxy/0.log" Dec 10 12:58:00 crc kubenswrapper[4689]: I1210 12:58:00.206201 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-5b5fd79c9c-n69ks_4dc52122-5456-453a-9d5f-d2fce910bb61/manager/0.log" Dec 10 12:58:00 crc kubenswrapper[4689]: I1210 12:58:00.321045 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-79c8c4686c-rwg47_8209377b-970c-4faf-ac5b-1e429d2bdccd/kube-rbac-proxy/0.log" Dec 10 12:58:00 crc kubenswrapper[4689]: I1210 12:58:00.322628 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-79c8c4686c-rwg47_8209377b-970c-4faf-ac5b-1e429d2bdccd/manager/0.log" Dec 10 12:58:00 crc kubenswrapper[4689]: I1210 12:58:00.476185 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-5fdfd5b6b5-66xh9_7ca7fc40-0bb8-402f-9e73-d1d267340b28/kube-rbac-proxy/0.log" Dec 10 12:58:00 crc kubenswrapper[4689]: I1210 12:58:00.552825 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-5fdfd5b6b5-66xh9_7ca7fc40-0bb8-402f-9e73-d1d267340b28/manager/0.log" Dec 10 12:58:00 crc kubenswrapper[4689]: I1210 12:58:00.616174 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-697bc559fc-qlxtq_dd93e8ba-afd7-4d03-917f-873352cfefc8/kube-rbac-proxy/0.log" Dec 10 12:58:00 crc kubenswrapper[4689]: I1210 12:58:00.751805 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-697bc559fc-qlxtq_dd93e8ba-afd7-4d03-917f-873352cfefc8/manager/0.log" Dec 10 12:58:00 crc kubenswrapper[4689]: I1210 12:58:00.766609 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-998648c74-pwvc2_04286780-356c-4f76-9168-5a80c36d2aa3/kube-rbac-proxy/0.log" Dec 10 12:58:00 crc kubenswrapper[4689]: I1210 12:58:00.839067 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-998648c74-pwvc2_04286780-356c-4f76-9168-5a80c36d2aa3/manager/0.log" Dec 10 12:58:00 crc kubenswrapper[4689]: I1210 12:58:00.934253 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-84b575879fwx7br_3ab4495a-ff87-4ae7-b343-17ff4dcfeb5a/kube-rbac-proxy/0.log" Dec 10 12:58:00 crc kubenswrapper[4689]: I1210 12:58:00.964633 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-84b575879fwx7br_3ab4495a-ff87-4ae7-b343-17ff4dcfeb5a/manager/0.log" Dec 10 12:58:01 crc kubenswrapper[4689]: I1210 12:58:01.421746 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-4crnk_baa246ec-275c-44ad-9e71-8aace1bf29b0/registry-server/0.log" Dec 10 12:58:01 crc kubenswrapper[4689]: I1210 12:58:01.445671 4689 
log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-operator-6476f95cff-ktc7c_a138024c-6885-4a4b-abc6-e4cec00348d6/operator/0.log" Dec 10 12:58:01 crc kubenswrapper[4689]: I1210 12:58:01.605656 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-b6456fdb6-v7wvl_8c68b5ee-e36f-428f-8b70-480581f7e120/kube-rbac-proxy/0.log" Dec 10 12:58:01 crc kubenswrapper[4689]: I1210 12:58:01.699767 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-b6456fdb6-v7wvl_8c68b5ee-e36f-428f-8b70-480581f7e120/manager/0.log" Dec 10 12:58:01 crc kubenswrapper[4689]: I1210 12:58:01.890472 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-78f8948974-44v2s_e616a259-dbf9-469a-987e-b3a6f36044a4/kube-rbac-proxy/0.log" Dec 10 12:58:01 crc kubenswrapper[4689]: I1210 12:58:01.924454 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-78f8948974-44v2s_e616a259-dbf9-469a-987e-b3a6f36044a4/manager/0.log" Dec 10 12:58:01 crc kubenswrapper[4689]: I1210 12:58:01.979894 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-czdwk_38a45de6-7988-4cb1-86b8-0164c52f2dc5/operator/0.log" Dec 10 12:58:02 crc kubenswrapper[4689]: I1210 12:58:02.049329 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-5997c5ddf6-h459p_af7cc69a-a411-43ec-b32e-41e6a343388b/manager/0.log" Dec 10 12:58:02 crc kubenswrapper[4689]: I1210 12:58:02.176618 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-9d58d64bc-d995k_08c0b74f-cdff-4fc6-b2cf-4d61fdd4177d/kube-rbac-proxy/0.log" Dec 10 12:58:02 crc kubenswrapper[4689]: I1210 12:58:02.217197 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-9d58d64bc-d995k_08c0b74f-cdff-4fc6-b2cf-4d61fdd4177d/manager/0.log" Dec 10 12:58:02 crc kubenswrapper[4689]: I1210 12:58:02.245464 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-58d5ff84df-8xrb5_9e2487e5-677b-4344-9ab2-d419e03876f2/kube-rbac-proxy/0.log" Dec 10 12:58:02 crc kubenswrapper[4689]: I1210 12:58:02.399738 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-5854674fcc-fghxw_20ec28ba-f929-4e94-833b-24a213da89a6/kube-rbac-proxy/0.log" Dec 10 12:58:02 crc kubenswrapper[4689]: I1210 12:58:02.400213 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-58d5ff84df-8xrb5_9e2487e5-677b-4344-9ab2-d419e03876f2/manager/0.log" Dec 10 12:58:02 crc kubenswrapper[4689]: I1210 12:58:02.421586 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-5854674fcc-fghxw_20ec28ba-f929-4e94-833b-24a213da89a6/manager/0.log" Dec 10 12:58:02 crc kubenswrapper[4689]: I1210 12:58:02.562197 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-75944c9b7-rq8x6_88c6fb38-fa6d-497e-87c7-32833f1b5a04/kube-rbac-proxy/0.log" Dec 10 12:58:02 crc kubenswrapper[4689]: I1210 
12:58:02.593884 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-75944c9b7-rq8x6_88c6fb38-fa6d-497e-87c7-32833f1b5a04/manager/0.log" Dec 10 12:58:20 crc kubenswrapper[4689]: I1210 12:58:20.633753 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-rn6lb_38480cb6-15a9-450a-9efd-71a7d346ef7c/control-plane-machine-set-operator/0.log" Dec 10 12:58:20 crc kubenswrapper[4689]: I1210 12:58:20.764728 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-s8lmw_3eece420-e32c-4ea1-91f8-cc96bf144467/kube-rbac-proxy/0.log" Dec 10 12:58:20 crc kubenswrapper[4689]: I1210 12:58:20.796199 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-s8lmw_3eece420-e32c-4ea1-91f8-cc96bf144467/machine-api-operator/0.log" Dec 10 12:58:32 crc kubenswrapper[4689]: I1210 12:58:32.797941 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-5b446d88c5-w9qvl_d40a3a14-dc06-4eb8-91fb-3d624202b9bb/cert-manager-controller/0.log" Dec 10 12:58:32 crc kubenswrapper[4689]: I1210 12:58:32.926545 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-7f985d654d-bwjp8_9eb2376e-1c8b-4a82-b570-f2f7d8faa957/cert-manager-cainjector/0.log" Dec 10 12:58:32 crc kubenswrapper[4689]: I1210 12:58:32.993619 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-5655c58dd6-v7z9w_66bcc49d-7116-4b58-b150-65dc5f00678a/cert-manager-webhook/0.log" Dec 10 12:58:44 crc kubenswrapper[4689]: I1210 12:58:44.962616 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-7fbb5f6569-97b9n_b3a78d11-7f23-4c35-aee3-0b2a7a19a041/nmstate-console-plugin/0.log" Dec 10 12:58:45 crc kubenswrapper[4689]: I1210 12:58:45.102128 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-mvj78_7f919f52-0709-4db5-8158-8be0da507d54/nmstate-handler/0.log" Dec 10 12:58:45 crc kubenswrapper[4689]: I1210 12:58:45.161691 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-7f946cbc9-sf9vn_dfc59f59-cc82-4d61-ac2b-e9dbf2b0f5fd/kube-rbac-proxy/0.log" Dec 10 12:58:45 crc kubenswrapper[4689]: I1210 12:58:45.171632 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-7f946cbc9-sf9vn_dfc59f59-cc82-4d61-ac2b-e9dbf2b0f5fd/nmstate-metrics/0.log" Dec 10 12:58:45 crc kubenswrapper[4689]: I1210 12:58:45.314852 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-5b5b58f5c8-5ffhx_2c83e0d9-b9ea-4b8a-9f11-7921eff53640/nmstate-operator/0.log" Dec 10 12:58:45 crc kubenswrapper[4689]: I1210 12:58:45.383198 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-5f6d4c5ccb-wghvl_6b4e153d-87c2-4ebb-b47c-77f12331ab68/nmstate-webhook/0.log" Dec 10 12:58:59 crc kubenswrapper[4689]: I1210 12:58:59.017775 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-f8648f98b-fw4x8_1f684039-0afb-44b1-a206-83d818ab3f9b/kube-rbac-proxy/0.log" Dec 10 12:58:59 crc kubenswrapper[4689]: I1210 12:58:59.120166 4689 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_controller-f8648f98b-fw4x8_1f684039-0afb-44b1-a206-83d818ab3f9b/controller/0.log" Dec 10 12:58:59 crc kubenswrapper[4689]: I1210 12:58:59.175370 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9t496_3b6f82d6-ef74-4c0e-99ab-426f3b10334f/cp-frr-files/0.log" Dec 10 12:58:59 crc kubenswrapper[4689]: I1210 12:58:59.375326 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9t496_3b6f82d6-ef74-4c0e-99ab-426f3b10334f/cp-frr-files/0.log" Dec 10 12:58:59 crc kubenswrapper[4689]: I1210 12:58:59.406724 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9t496_3b6f82d6-ef74-4c0e-99ab-426f3b10334f/cp-reloader/0.log" Dec 10 12:58:59 crc kubenswrapper[4689]: I1210 12:58:59.430620 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9t496_3b6f82d6-ef74-4c0e-99ab-426f3b10334f/cp-metrics/0.log" Dec 10 12:58:59 crc kubenswrapper[4689]: I1210 12:58:59.440465 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9t496_3b6f82d6-ef74-4c0e-99ab-426f3b10334f/cp-reloader/0.log" Dec 10 12:58:59 crc kubenswrapper[4689]: I1210 12:58:59.613401 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9t496_3b6f82d6-ef74-4c0e-99ab-426f3b10334f/cp-reloader/0.log" Dec 10 12:58:59 crc kubenswrapper[4689]: I1210 12:58:59.621618 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9t496_3b6f82d6-ef74-4c0e-99ab-426f3b10334f/cp-metrics/0.log" Dec 10 12:58:59 crc kubenswrapper[4689]: I1210 12:58:59.626109 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9t496_3b6f82d6-ef74-4c0e-99ab-426f3b10334f/cp-frr-files/0.log" Dec 10 12:58:59 crc kubenswrapper[4689]: I1210 12:58:59.641588 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9t496_3b6f82d6-ef74-4c0e-99ab-426f3b10334f/cp-metrics/0.log" Dec 10 12:58:59 crc kubenswrapper[4689]: I1210 12:58:59.802127 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9t496_3b6f82d6-ef74-4c0e-99ab-426f3b10334f/cp-frr-files/0.log" Dec 10 12:58:59 crc kubenswrapper[4689]: I1210 12:58:59.806374 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9t496_3b6f82d6-ef74-4c0e-99ab-426f3b10334f/cp-reloader/0.log" Dec 10 12:58:59 crc kubenswrapper[4689]: I1210 12:58:59.821340 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9t496_3b6f82d6-ef74-4c0e-99ab-426f3b10334f/controller/0.log" Dec 10 12:58:59 crc kubenswrapper[4689]: I1210 12:58:59.831746 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9t496_3b6f82d6-ef74-4c0e-99ab-426f3b10334f/cp-metrics/0.log" Dec 10 12:58:59 crc kubenswrapper[4689]: I1210 12:58:59.991369 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9t496_3b6f82d6-ef74-4c0e-99ab-426f3b10334f/frr-metrics/0.log" Dec 10 12:59:00 crc kubenswrapper[4689]: I1210 12:59:00.005868 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9t496_3b6f82d6-ef74-4c0e-99ab-426f3b10334f/kube-rbac-proxy/0.log" Dec 10 12:59:00 crc kubenswrapper[4689]: I1210 12:59:00.044045 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9t496_3b6f82d6-ef74-4c0e-99ab-426f3b10334f/kube-rbac-proxy-frr/0.log" Dec 10 12:59:00 crc 
kubenswrapper[4689]: I1210 12:59:00.193910 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9t496_3b6f82d6-ef74-4c0e-99ab-426f3b10334f/reloader/0.log" Dec 10 12:59:00 crc kubenswrapper[4689]: I1210 12:59:00.240354 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-7fcb986d4-9pghn_da40a320-7203-4d3b-bc0e-c9eb09a07898/frr-k8s-webhook-server/0.log" Dec 10 12:59:00 crc kubenswrapper[4689]: I1210 12:59:00.513560 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-69d5767795-46q9h_5c50fea7-15a9-4027-9f2f-14c3744c7533/manager/0.log" Dec 10 12:59:00 crc kubenswrapper[4689]: I1210 12:59:00.587133 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-5d9dc5cf54-24c7k_9d6f0612-90c7-4c71-a751-850ab873444a/webhook-server/0.log" Dec 10 12:59:00 crc kubenswrapper[4689]: I1210 12:59:00.818122 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-xx5ts_b8a0c114-08fb-4583-a9eb-09333023b0ed/kube-rbac-proxy/0.log" Dec 10 12:59:01 crc kubenswrapper[4689]: I1210 12:59:01.069465 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9t496_3b6f82d6-ef74-4c0e-99ab-426f3b10334f/frr/0.log" Dec 10 12:59:01 crc kubenswrapper[4689]: I1210 12:59:01.133085 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-xx5ts_b8a0c114-08fb-4583-a9eb-09333023b0ed/speaker/0.log" Dec 10 12:59:13 crc kubenswrapper[4689]: I1210 12:59:13.790669 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212flfxww_0d799972-d396-4dd5-8186-87f0a51ea145/util/0.log" Dec 10 12:59:13 crc kubenswrapper[4689]: I1210 12:59:13.944674 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212flfxww_0d799972-d396-4dd5-8186-87f0a51ea145/pull/0.log" Dec 10 12:59:13 crc kubenswrapper[4689]: I1210 12:59:13.960905 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212flfxww_0d799972-d396-4dd5-8186-87f0a51ea145/util/0.log" Dec 10 12:59:14 crc kubenswrapper[4689]: I1210 12:59:14.027205 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212flfxww_0d799972-d396-4dd5-8186-87f0a51ea145/pull/0.log" Dec 10 12:59:14 crc kubenswrapper[4689]: I1210 12:59:14.182811 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212flfxww_0d799972-d396-4dd5-8186-87f0a51ea145/pull/0.log" Dec 10 12:59:14 crc kubenswrapper[4689]: I1210 12:59:14.201482 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212flfxww_0d799972-d396-4dd5-8186-87f0a51ea145/extract/0.log" Dec 10 12:59:14 crc kubenswrapper[4689]: I1210 12:59:14.220177 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212flfxww_0d799972-d396-4dd5-8186-87f0a51ea145/util/0.log" Dec 10 12:59:14 crc kubenswrapper[4689]: I1210 12:59:14.363845 4689 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83frvcd_c79ff6a1-8c4b-4d40-af83-b6aaa022f1fb/util/0.log" Dec 10 12:59:14 crc kubenswrapper[4689]: I1210 12:59:14.556218 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83frvcd_c79ff6a1-8c4b-4d40-af83-b6aaa022f1fb/pull/0.log" Dec 10 12:59:14 crc kubenswrapper[4689]: I1210 12:59:14.557809 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83frvcd_c79ff6a1-8c4b-4d40-af83-b6aaa022f1fb/pull/0.log" Dec 10 12:59:14 crc kubenswrapper[4689]: I1210 12:59:14.558163 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83frvcd_c79ff6a1-8c4b-4d40-af83-b6aaa022f1fb/util/0.log" Dec 10 12:59:14 crc kubenswrapper[4689]: I1210 12:59:14.718453 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83frvcd_c79ff6a1-8c4b-4d40-af83-b6aaa022f1fb/util/0.log" Dec 10 12:59:14 crc kubenswrapper[4689]: I1210 12:59:14.746414 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83frvcd_c79ff6a1-8c4b-4d40-af83-b6aaa022f1fb/pull/0.log" Dec 10 12:59:14 crc kubenswrapper[4689]: I1210 12:59:14.749482 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83frvcd_c79ff6a1-8c4b-4d40-af83-b6aaa022f1fb/extract/0.log" Dec 10 12:59:14 crc kubenswrapper[4689]: I1210 12:59:14.893600 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-9wjtp_557f01fb-8032-4d30-b987-9c514445498f/extract-utilities/0.log" Dec 10 12:59:15 crc kubenswrapper[4689]: I1210 12:59:15.111185 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-9wjtp_557f01fb-8032-4d30-b987-9c514445498f/extract-content/0.log" Dec 10 12:59:15 crc kubenswrapper[4689]: I1210 12:59:15.117395 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-9wjtp_557f01fb-8032-4d30-b987-9c514445498f/extract-utilities/0.log" Dec 10 12:59:15 crc kubenswrapper[4689]: I1210 12:59:15.122665 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-9wjtp_557f01fb-8032-4d30-b987-9c514445498f/extract-content/0.log" Dec 10 12:59:15 crc kubenswrapper[4689]: I1210 12:59:15.238457 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-9wjtp_557f01fb-8032-4d30-b987-9c514445498f/extract-utilities/0.log" Dec 10 12:59:15 crc kubenswrapper[4689]: I1210 12:59:15.297725 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-9wjtp_557f01fb-8032-4d30-b987-9c514445498f/extract-content/0.log" Dec 10 12:59:15 crc kubenswrapper[4689]: I1210 12:59:15.464864 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-vj842_f8b91596-d292-4ef0-bb0d-92cb3224c20c/extract-utilities/0.log" Dec 10 12:59:15 crc kubenswrapper[4689]: I1210 12:59:15.493733 4689 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_certified-operators-9wjtp_557f01fb-8032-4d30-b987-9c514445498f/registry-server/0.log" Dec 10 12:59:15 crc kubenswrapper[4689]: I1210 12:59:15.632937 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-vj842_f8b91596-d292-4ef0-bb0d-92cb3224c20c/extract-content/0.log" Dec 10 12:59:15 crc kubenswrapper[4689]: I1210 12:59:15.658187 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-vj842_f8b91596-d292-4ef0-bb0d-92cb3224c20c/extract-content/0.log" Dec 10 12:59:15 crc kubenswrapper[4689]: I1210 12:59:15.679462 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-vj842_f8b91596-d292-4ef0-bb0d-92cb3224c20c/extract-utilities/0.log" Dec 10 12:59:15 crc kubenswrapper[4689]: I1210 12:59:15.853745 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-vj842_f8b91596-d292-4ef0-bb0d-92cb3224c20c/extract-utilities/0.log" Dec 10 12:59:15 crc kubenswrapper[4689]: I1210 12:59:15.858574 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-vj842_f8b91596-d292-4ef0-bb0d-92cb3224c20c/extract-content/0.log" Dec 10 12:59:16 crc kubenswrapper[4689]: I1210 12:59:16.081181 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-ckfqs_2fe57bc1-cf21-44d7-b6ca-54319a03a415/marketplace-operator/0.log" Dec 10 12:59:16 crc kubenswrapper[4689]: I1210 12:59:16.194508 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-9w8ct_2063bbc8-0509-4956-8dc9-84e8469be8f9/extract-utilities/0.log" Dec 10 12:59:16 crc kubenswrapper[4689]: I1210 12:59:16.256695 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-vj842_f8b91596-d292-4ef0-bb0d-92cb3224c20c/registry-server/0.log" Dec 10 12:59:16 crc kubenswrapper[4689]: I1210 12:59:16.364958 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-9w8ct_2063bbc8-0509-4956-8dc9-84e8469be8f9/extract-content/0.log" Dec 10 12:59:16 crc kubenswrapper[4689]: I1210 12:59:16.378495 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-9w8ct_2063bbc8-0509-4956-8dc9-84e8469be8f9/extract-content/0.log" Dec 10 12:59:16 crc kubenswrapper[4689]: I1210 12:59:16.380558 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-9w8ct_2063bbc8-0509-4956-8dc9-84e8469be8f9/extract-utilities/0.log" Dec 10 12:59:16 crc kubenswrapper[4689]: I1210 12:59:16.583610 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-9w8ct_2063bbc8-0509-4956-8dc9-84e8469be8f9/extract-content/0.log" Dec 10 12:59:16 crc kubenswrapper[4689]: I1210 12:59:16.587299 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-9w8ct_2063bbc8-0509-4956-8dc9-84e8469be8f9/extract-utilities/0.log" Dec 10 12:59:16 crc kubenswrapper[4689]: I1210 12:59:16.706380 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-9w8ct_2063bbc8-0509-4956-8dc9-84e8469be8f9/registry-server/0.log" Dec 10 12:59:16 crc kubenswrapper[4689]: I1210 12:59:16.761169 4689 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_redhat-operators-8844c_6c80fa47-76de-4730-aa6b-85bab40be273/extract-utilities/0.log" Dec 10 12:59:16 crc kubenswrapper[4689]: I1210 12:59:16.940110 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-8844c_6c80fa47-76de-4730-aa6b-85bab40be273/extract-content/0.log" Dec 10 12:59:16 crc kubenswrapper[4689]: I1210 12:59:16.952110 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-8844c_6c80fa47-76de-4730-aa6b-85bab40be273/extract-utilities/0.log" Dec 10 12:59:16 crc kubenswrapper[4689]: I1210 12:59:16.952170 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-8844c_6c80fa47-76de-4730-aa6b-85bab40be273/extract-content/0.log" Dec 10 12:59:17 crc kubenswrapper[4689]: I1210 12:59:17.131055 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-8844c_6c80fa47-76de-4730-aa6b-85bab40be273/extract-utilities/0.log" Dec 10 12:59:17 crc kubenswrapper[4689]: I1210 12:59:17.182426 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-8844c_6c80fa47-76de-4730-aa6b-85bab40be273/extract-content/0.log" Dec 10 12:59:17 crc kubenswrapper[4689]: I1210 12:59:17.549945 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-8844c_6c80fa47-76de-4730-aa6b-85bab40be273/registry-server/0.log" Dec 10 13:00:00 crc kubenswrapper[4689]: I1210 13:00:00.160619 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29422860-nr7q6"] Dec 10 13:00:00 crc kubenswrapper[4689]: E1210 13:00:00.161668 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="07a75a15-9c46-4004-9b19-d0e0d885f26d" containerName="container-00" Dec 10 13:00:00 crc kubenswrapper[4689]: I1210 13:00:00.161684 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="07a75a15-9c46-4004-9b19-d0e0d885f26d" containerName="container-00" Dec 10 13:00:00 crc kubenswrapper[4689]: I1210 13:00:00.161952 4689 memory_manager.go:354] "RemoveStaleState removing state" podUID="07a75a15-9c46-4004-9b19-d0e0d885f26d" containerName="container-00" Dec 10 13:00:00 crc kubenswrapper[4689]: I1210 13:00:00.162777 4689 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29422860-nr7q6" Dec 10 13:00:00 crc kubenswrapper[4689]: I1210 13:00:00.169057 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Dec 10 13:00:00 crc kubenswrapper[4689]: I1210 13:00:00.169933 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Dec 10 13:00:00 crc kubenswrapper[4689]: I1210 13:00:00.191473 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29422860-nr7q6"] Dec 10 13:00:00 crc kubenswrapper[4689]: I1210 13:00:00.291203 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/eaddfa05-46a2-45e8-a1fc-2e0266f29872-secret-volume\") pod \"collect-profiles-29422860-nr7q6\" (UID: \"eaddfa05-46a2-45e8-a1fc-2e0266f29872\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29422860-nr7q6" Dec 10 13:00:00 crc kubenswrapper[4689]: I1210 13:00:00.291492 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/eaddfa05-46a2-45e8-a1fc-2e0266f29872-config-volume\") pod \"collect-profiles-29422860-nr7q6\" (UID: \"eaddfa05-46a2-45e8-a1fc-2e0266f29872\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29422860-nr7q6" Dec 10 13:00:00 crc kubenswrapper[4689]: I1210 13:00:00.291685 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hf48j\" (UniqueName: \"kubernetes.io/projected/eaddfa05-46a2-45e8-a1fc-2e0266f29872-kube-api-access-hf48j\") pod \"collect-profiles-29422860-nr7q6\" (UID: \"eaddfa05-46a2-45e8-a1fc-2e0266f29872\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29422860-nr7q6" Dec 10 13:00:00 crc kubenswrapper[4689]: I1210 13:00:00.393890 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/eaddfa05-46a2-45e8-a1fc-2e0266f29872-config-volume\") pod \"collect-profiles-29422860-nr7q6\" (UID: \"eaddfa05-46a2-45e8-a1fc-2e0266f29872\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29422860-nr7q6" Dec 10 13:00:00 crc kubenswrapper[4689]: I1210 13:00:00.394244 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hf48j\" (UniqueName: \"kubernetes.io/projected/eaddfa05-46a2-45e8-a1fc-2e0266f29872-kube-api-access-hf48j\") pod \"collect-profiles-29422860-nr7q6\" (UID: \"eaddfa05-46a2-45e8-a1fc-2e0266f29872\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29422860-nr7q6" Dec 10 13:00:00 crc kubenswrapper[4689]: I1210 13:00:00.394492 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/eaddfa05-46a2-45e8-a1fc-2e0266f29872-secret-volume\") pod \"collect-profiles-29422860-nr7q6\" (UID: \"eaddfa05-46a2-45e8-a1fc-2e0266f29872\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29422860-nr7q6" Dec 10 13:00:00 crc kubenswrapper[4689]: I1210 13:00:00.395175 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/eaddfa05-46a2-45e8-a1fc-2e0266f29872-config-volume\") pod 
\"collect-profiles-29422860-nr7q6\" (UID: \"eaddfa05-46a2-45e8-a1fc-2e0266f29872\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29422860-nr7q6" Dec 10 13:00:00 crc kubenswrapper[4689]: I1210 13:00:00.414004 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/eaddfa05-46a2-45e8-a1fc-2e0266f29872-secret-volume\") pod \"collect-profiles-29422860-nr7q6\" (UID: \"eaddfa05-46a2-45e8-a1fc-2e0266f29872\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29422860-nr7q6" Dec 10 13:00:00 crc kubenswrapper[4689]: I1210 13:00:00.415168 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hf48j\" (UniqueName: \"kubernetes.io/projected/eaddfa05-46a2-45e8-a1fc-2e0266f29872-kube-api-access-hf48j\") pod \"collect-profiles-29422860-nr7q6\" (UID: \"eaddfa05-46a2-45e8-a1fc-2e0266f29872\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29422860-nr7q6" Dec 10 13:00:00 crc kubenswrapper[4689]: I1210 13:00:00.481838 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29422860-nr7q6" Dec 10 13:00:00 crc kubenswrapper[4689]: I1210 13:00:00.938652 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29422860-nr7q6"] Dec 10 13:00:01 crc kubenswrapper[4689]: I1210 13:00:01.364441 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29422860-nr7q6" event={"ID":"eaddfa05-46a2-45e8-a1fc-2e0266f29872","Type":"ContainerStarted","Data":"b6b3c4bbd193de0cb7594413c4f3fd22c9b74168b82c6e5ce1ebce54db9dde34"} Dec 10 13:00:01 crc kubenswrapper[4689]: I1210 13:00:01.364709 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29422860-nr7q6" event={"ID":"eaddfa05-46a2-45e8-a1fc-2e0266f29872","Type":"ContainerStarted","Data":"907bd836342eb6ba98448f50aff7b8c31d131bdf3f2a71e107e5252f9da28974"} Dec 10 13:00:01 crc kubenswrapper[4689]: I1210 13:00:01.396575 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29422860-nr7q6" podStartSLOduration=1.396556471 podStartE2EDuration="1.396556471s" podCreationTimestamp="2025-12-10 13:00:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 13:00:01.387501721 +0000 UTC m=+2669.175582859" watchObservedRunningTime="2025-12-10 13:00:01.396556471 +0000 UTC m=+2669.184637609" Dec 10 13:00:02 crc kubenswrapper[4689]: I1210 13:00:02.372589 4689 generic.go:334] "Generic (PLEG): container finished" podID="eaddfa05-46a2-45e8-a1fc-2e0266f29872" containerID="b6b3c4bbd193de0cb7594413c4f3fd22c9b74168b82c6e5ce1ebce54db9dde34" exitCode=0 Dec 10 13:00:02 crc kubenswrapper[4689]: I1210 13:00:02.372701 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29422860-nr7q6" event={"ID":"eaddfa05-46a2-45e8-a1fc-2e0266f29872","Type":"ContainerDied","Data":"b6b3c4bbd193de0cb7594413c4f3fd22c9b74168b82c6e5ce1ebce54db9dde34"} Dec 10 13:00:03 crc kubenswrapper[4689]: I1210 13:00:03.749525 4689 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29422860-nr7q6"
Dec 10 13:00:03 crc kubenswrapper[4689]: I1210 13:00:03.864449 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/eaddfa05-46a2-45e8-a1fc-2e0266f29872-secret-volume\") pod \"eaddfa05-46a2-45e8-a1fc-2e0266f29872\" (UID: \"eaddfa05-46a2-45e8-a1fc-2e0266f29872\") "
Dec 10 13:00:03 crc kubenswrapper[4689]: I1210 13:00:03.864497 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hf48j\" (UniqueName: \"kubernetes.io/projected/eaddfa05-46a2-45e8-a1fc-2e0266f29872-kube-api-access-hf48j\") pod \"eaddfa05-46a2-45e8-a1fc-2e0266f29872\" (UID: \"eaddfa05-46a2-45e8-a1fc-2e0266f29872\") "
Dec 10 13:00:03 crc kubenswrapper[4689]: I1210 13:00:03.864527 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/eaddfa05-46a2-45e8-a1fc-2e0266f29872-config-volume\") pod \"eaddfa05-46a2-45e8-a1fc-2e0266f29872\" (UID: \"eaddfa05-46a2-45e8-a1fc-2e0266f29872\") "
Dec 10 13:00:03 crc kubenswrapper[4689]: I1210 13:00:03.865499 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eaddfa05-46a2-45e8-a1fc-2e0266f29872-config-volume" (OuterVolumeSpecName: "config-volume") pod "eaddfa05-46a2-45e8-a1fc-2e0266f29872" (UID: "eaddfa05-46a2-45e8-a1fc-2e0266f29872"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 10 13:00:03 crc kubenswrapper[4689]: I1210 13:00:03.869849 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eaddfa05-46a2-45e8-a1fc-2e0266f29872-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "eaddfa05-46a2-45e8-a1fc-2e0266f29872" (UID: "eaddfa05-46a2-45e8-a1fc-2e0266f29872"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 10 13:00:03 crc kubenswrapper[4689]: I1210 13:00:03.869920 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eaddfa05-46a2-45e8-a1fc-2e0266f29872-kube-api-access-hf48j" (OuterVolumeSpecName: "kube-api-access-hf48j") pod "eaddfa05-46a2-45e8-a1fc-2e0266f29872" (UID: "eaddfa05-46a2-45e8-a1fc-2e0266f29872"). InnerVolumeSpecName "kube-api-access-hf48j". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 10 13:00:03 crc kubenswrapper[4689]: I1210 13:00:03.967107 4689 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/eaddfa05-46a2-45e8-a1fc-2e0266f29872-secret-volume\") on node \"crc\" DevicePath \"\""
Dec 10 13:00:03 crc kubenswrapper[4689]: I1210 13:00:03.967150 4689 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hf48j\" (UniqueName: \"kubernetes.io/projected/eaddfa05-46a2-45e8-a1fc-2e0266f29872-kube-api-access-hf48j\") on node \"crc\" DevicePath \"\""
Dec 10 13:00:03 crc kubenswrapper[4689]: I1210 13:00:03.967160 4689 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/eaddfa05-46a2-45e8-a1fc-2e0266f29872-config-volume\") on node \"crc\" DevicePath \"\""
Dec 10 13:00:04 crc kubenswrapper[4689]: I1210 13:00:04.393195 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29422860-nr7q6" event={"ID":"eaddfa05-46a2-45e8-a1fc-2e0266f29872","Type":"ContainerDied","Data":"907bd836342eb6ba98448f50aff7b8c31d131bdf3f2a71e107e5252f9da28974"}
Dec 10 13:00:04 crc kubenswrapper[4689]: I1210 13:00:04.393536 4689 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="907bd836342eb6ba98448f50aff7b8c31d131bdf3f2a71e107e5252f9da28974"
Dec 10 13:00:04 crc kubenswrapper[4689]: I1210 13:00:04.393245 4689 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29422860-nr7q6"
Dec 10 13:00:04 crc kubenswrapper[4689]: I1210 13:00:04.517805 4689 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29422815-gckmk"]
Dec 10 13:00:04 crc kubenswrapper[4689]: I1210 13:00:04.532160 4689 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29422815-gckmk"]
Dec 10 13:00:06 crc kubenswrapper[4689]: I1210 13:00:06.514752 4689 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5dca2df7-7c4c-40bb-8302-d6c089fd5486" path="/var/lib/kubelet/pods/5dca2df7-7c4c-40bb-8302-d6c089fd5486/volumes"
Dec 10 13:00:07 crc kubenswrapper[4689]: I1210 13:00:07.167247 4689 patch_prober.go:28] interesting pod/machine-config-daemon-db6zk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 10 13:00:07 crc kubenswrapper[4689]: I1210 13:00:07.167325 4689 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-db6zk" podUID="a41ebdcd-910f-4669-992d-296e1a92162b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 10 13:00:37 crc kubenswrapper[4689]: I1210 13:00:37.167357 4689 patch_prober.go:28] interesting pod/machine-config-daemon-db6zk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 10 13:00:37 crc kubenswrapper[4689]: I1210 13:00:37.167933 4689 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-db6zk" podUID="a41ebdcd-910f-4669-992d-296e1a92162b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 10 13:00:46 crc kubenswrapper[4689]: I1210 13:00:46.548196 4689 scope.go:117] "RemoveContainer" containerID="d554ce9f44128013a4ba643a6b440f84c6a3b3b7ede2530107890b9be885a80b"
Dec 10 13:00:49 crc kubenswrapper[4689]: I1210 13:00:49.932399 4689 generic.go:334] "Generic (PLEG): container finished" podID="72ac4d6f-f8de-41e6-96fe-dccc36413b6d" containerID="d8af23709ce50ec6ec15f8a2edecfc467de0e72e321eb13c05fb600fb08be720" exitCode=0
Dec 10 13:00:49 crc kubenswrapper[4689]: I1210 13:00:49.932484 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-w2nzt/must-gather-dd24r" event={"ID":"72ac4d6f-f8de-41e6-96fe-dccc36413b6d","Type":"ContainerDied","Data":"d8af23709ce50ec6ec15f8a2edecfc467de0e72e321eb13c05fb600fb08be720"}
Dec 10 13:00:49 crc kubenswrapper[4689]: I1210 13:00:49.933520 4689 scope.go:117] "RemoveContainer" containerID="d8af23709ce50ec6ec15f8a2edecfc467de0e72e321eb13c05fb600fb08be720"
Dec 10 13:00:50 crc kubenswrapper[4689]: I1210 13:00:50.745431 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-w2nzt_must-gather-dd24r_72ac4d6f-f8de-41e6-96fe-dccc36413b6d/gather/0.log"
Dec 10 13:01:00 crc kubenswrapper[4689]: I1210 13:01:00.145435 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-cron-29422861-q8xsv"]
Dec 10 13:01:00 crc kubenswrapper[4689]: E1210 13:01:00.146509 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eaddfa05-46a2-45e8-a1fc-2e0266f29872" containerName="collect-profiles"
Dec 10 13:01:00 crc kubenswrapper[4689]: I1210 13:01:00.146528 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="eaddfa05-46a2-45e8-a1fc-2e0266f29872" containerName="collect-profiles"
Dec 10 13:01:00 crc kubenswrapper[4689]: I1210 13:01:00.146758 4689 memory_manager.go:354] "RemoveStaleState removing state" podUID="eaddfa05-46a2-45e8-a1fc-2e0266f29872" containerName="collect-profiles"
Dec 10 13:01:00 crc kubenswrapper[4689]: I1210 13:01:00.147814 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29422861-q8xsv"
Dec 10 13:01:00 crc kubenswrapper[4689]: I1210 13:01:00.156438 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29422861-q8xsv"]
Dec 10 13:01:00 crc kubenswrapper[4689]: I1210 13:01:00.194241 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/147dda36-36cc-4cf7-b4e2-627c58743cb0-config-data\") pod \"keystone-cron-29422861-q8xsv\" (UID: \"147dda36-36cc-4cf7-b4e2-627c58743cb0\") " pod="openstack/keystone-cron-29422861-q8xsv"
Dec 10 13:01:00 crc kubenswrapper[4689]: I1210 13:01:00.194581 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7bg7m\" (UniqueName: \"kubernetes.io/projected/147dda36-36cc-4cf7-b4e2-627c58743cb0-kube-api-access-7bg7m\") pod \"keystone-cron-29422861-q8xsv\" (UID: \"147dda36-36cc-4cf7-b4e2-627c58743cb0\") " pod="openstack/keystone-cron-29422861-q8xsv"
Dec 10 13:01:00 crc kubenswrapper[4689]: I1210 13:01:00.194814 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/147dda36-36cc-4cf7-b4e2-627c58743cb0-fernet-keys\") pod \"keystone-cron-29422861-q8xsv\" (UID: \"147dda36-36cc-4cf7-b4e2-627c58743cb0\") " pod="openstack/keystone-cron-29422861-q8xsv"
Dec 10 13:01:00 crc kubenswrapper[4689]: I1210 13:01:00.195006 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/147dda36-36cc-4cf7-b4e2-627c58743cb0-combined-ca-bundle\") pod \"keystone-cron-29422861-q8xsv\" (UID: \"147dda36-36cc-4cf7-b4e2-627c58743cb0\") " pod="openstack/keystone-cron-29422861-q8xsv"
Dec 10 13:01:00 crc kubenswrapper[4689]: I1210 13:01:00.295898 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/147dda36-36cc-4cf7-b4e2-627c58743cb0-config-data\") pod \"keystone-cron-29422861-q8xsv\" (UID: \"147dda36-36cc-4cf7-b4e2-627c58743cb0\") " pod="openstack/keystone-cron-29422861-q8xsv"
Dec 10 13:01:00 crc kubenswrapper[4689]: I1210 13:01:00.295959 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7bg7m\" (UniqueName: \"kubernetes.io/projected/147dda36-36cc-4cf7-b4e2-627c58743cb0-kube-api-access-7bg7m\") pod \"keystone-cron-29422861-q8xsv\" (UID: \"147dda36-36cc-4cf7-b4e2-627c58743cb0\") " pod="openstack/keystone-cron-29422861-q8xsv"
Dec 10 13:01:00 crc kubenswrapper[4689]: I1210 13:01:00.296069 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/147dda36-36cc-4cf7-b4e2-627c58743cb0-fernet-keys\") pod \"keystone-cron-29422861-q8xsv\" (UID: \"147dda36-36cc-4cf7-b4e2-627c58743cb0\") " pod="openstack/keystone-cron-29422861-q8xsv"
Dec 10 13:01:00 crc kubenswrapper[4689]: I1210 13:01:00.296100 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/147dda36-36cc-4cf7-b4e2-627c58743cb0-combined-ca-bundle\") pod \"keystone-cron-29422861-q8xsv\" (UID: \"147dda36-36cc-4cf7-b4e2-627c58743cb0\") " pod="openstack/keystone-cron-29422861-q8xsv"
Dec 10 13:01:00 crc kubenswrapper[4689]: I1210 13:01:00.302898 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/147dda36-36cc-4cf7-b4e2-627c58743cb0-config-data\") pod \"keystone-cron-29422861-q8xsv\" (UID: \"147dda36-36cc-4cf7-b4e2-627c58743cb0\") " pod="openstack/keystone-cron-29422861-q8xsv"
Dec 10 13:01:00 crc kubenswrapper[4689]: I1210 13:01:00.303049 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/147dda36-36cc-4cf7-b4e2-627c58743cb0-combined-ca-bundle\") pod \"keystone-cron-29422861-q8xsv\" (UID: \"147dda36-36cc-4cf7-b4e2-627c58743cb0\") " pod="openstack/keystone-cron-29422861-q8xsv"
Dec 10 13:01:00 crc kubenswrapper[4689]: I1210 13:01:00.304678 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/147dda36-36cc-4cf7-b4e2-627c58743cb0-fernet-keys\") pod \"keystone-cron-29422861-q8xsv\" (UID: \"147dda36-36cc-4cf7-b4e2-627c58743cb0\") " pod="openstack/keystone-cron-29422861-q8xsv"
Dec 10 13:01:00 crc kubenswrapper[4689]: I1210 13:01:00.311050 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7bg7m\" (UniqueName: \"kubernetes.io/projected/147dda36-36cc-4cf7-b4e2-627c58743cb0-kube-api-access-7bg7m\") pod \"keystone-cron-29422861-q8xsv\" (UID: \"147dda36-36cc-4cf7-b4e2-627c58743cb0\") " pod="openstack/keystone-cron-29422861-q8xsv"
Dec 10 13:01:00 crc kubenswrapper[4689]: I1210 13:01:00.476643 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29422861-q8xsv"
Dec 10 13:01:00 crc kubenswrapper[4689]: I1210 13:01:00.951546 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29422861-q8xsv"]
Dec 10 13:01:00 crc kubenswrapper[4689]: W1210 13:01:00.970577 4689 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod147dda36_36cc_4cf7_b4e2_627c58743cb0.slice/crio-9cc42e4bd476b1b723aacaaa13b8d988769eaf6abd811a644e9e56ddadef7153 WatchSource:0}: Error finding container 9cc42e4bd476b1b723aacaaa13b8d988769eaf6abd811a644e9e56ddadef7153: Status 404 returned error can't find the container with id 9cc42e4bd476b1b723aacaaa13b8d988769eaf6abd811a644e9e56ddadef7153
Dec 10 13:01:01 crc kubenswrapper[4689]: I1210 13:01:01.063322 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29422861-q8xsv" event={"ID":"147dda36-36cc-4cf7-b4e2-627c58743cb0","Type":"ContainerStarted","Data":"9cc42e4bd476b1b723aacaaa13b8d988769eaf6abd811a644e9e56ddadef7153"}
Dec 10 13:01:01 crc kubenswrapper[4689]: I1210 13:01:01.317702 4689 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-w2nzt/must-gather-dd24r"]
Dec 10 13:01:01 crc kubenswrapper[4689]: I1210 13:01:01.318312 4689 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-w2nzt/must-gather-dd24r" podUID="72ac4d6f-f8de-41e6-96fe-dccc36413b6d" containerName="copy" containerID="cri-o://fe5a6ab8a70d3a58cb08368dbcaa63e17cc12297cd009e8529679ca260e66730" gracePeriod=2
Dec 10 13:01:01 crc kubenswrapper[4689]: I1210 13:01:01.325099 4689 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-w2nzt/must-gather-dd24r"]
Dec 10 13:01:01 crc kubenswrapper[4689]: I1210 13:01:01.899062 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-w2nzt_must-gather-dd24r_72ac4d6f-f8de-41e6-96fe-dccc36413b6d/copy/0.log"
Dec 10 13:01:01 crc kubenswrapper[4689]: I1210 13:01:01.899508 4689 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-w2nzt/must-gather-dd24r"
Dec 10 13:01:02 crc kubenswrapper[4689]: I1210 13:01:02.030362 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4jlhj\" (UniqueName: \"kubernetes.io/projected/72ac4d6f-f8de-41e6-96fe-dccc36413b6d-kube-api-access-4jlhj\") pod \"72ac4d6f-f8de-41e6-96fe-dccc36413b6d\" (UID: \"72ac4d6f-f8de-41e6-96fe-dccc36413b6d\") "
Dec 10 13:01:02 crc kubenswrapper[4689]: I1210 13:01:02.030447 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/72ac4d6f-f8de-41e6-96fe-dccc36413b6d-must-gather-output\") pod \"72ac4d6f-f8de-41e6-96fe-dccc36413b6d\" (UID: \"72ac4d6f-f8de-41e6-96fe-dccc36413b6d\") "
Dec 10 13:01:02 crc kubenswrapper[4689]: I1210 13:01:02.036873 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/72ac4d6f-f8de-41e6-96fe-dccc36413b6d-kube-api-access-4jlhj" (OuterVolumeSpecName: "kube-api-access-4jlhj") pod "72ac4d6f-f8de-41e6-96fe-dccc36413b6d" (UID: "72ac4d6f-f8de-41e6-96fe-dccc36413b6d"). InnerVolumeSpecName "kube-api-access-4jlhj". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 10 13:01:02 crc kubenswrapper[4689]: I1210 13:01:02.072778 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29422861-q8xsv" event={"ID":"147dda36-36cc-4cf7-b4e2-627c58743cb0","Type":"ContainerStarted","Data":"32dd93181a9839857cd51a854c13ca0fb297575a431eaa1a5c7e742a846edc8d"}
Dec 10 13:01:02 crc kubenswrapper[4689]: I1210 13:01:02.076941 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-w2nzt_must-gather-dd24r_72ac4d6f-f8de-41e6-96fe-dccc36413b6d/copy/0.log"
Dec 10 13:01:02 crc kubenswrapper[4689]: I1210 13:01:02.077472 4689 generic.go:334] "Generic (PLEG): container finished" podID="72ac4d6f-f8de-41e6-96fe-dccc36413b6d" containerID="fe5a6ab8a70d3a58cb08368dbcaa63e17cc12297cd009e8529679ca260e66730" exitCode=143
Dec 10 13:01:02 crc kubenswrapper[4689]: I1210 13:01:02.077519 4689 scope.go:117] "RemoveContainer" containerID="fe5a6ab8a70d3a58cb08368dbcaa63e17cc12297cd009e8529679ca260e66730"
Dec 10 13:01:02 crc kubenswrapper[4689]: I1210 13:01:02.077561 4689 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-w2nzt/must-gather-dd24r"
Dec 10 13:01:02 crc kubenswrapper[4689]: I1210 13:01:02.091951 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-cron-29422861-q8xsv" podStartSLOduration=2.091932562 podStartE2EDuration="2.091932562s" podCreationTimestamp="2025-12-10 13:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 13:01:02.087078603 +0000 UTC m=+2729.875159751" watchObservedRunningTime="2025-12-10 13:01:02.091932562 +0000 UTC m=+2729.880013720"
Dec 10 13:01:02 crc kubenswrapper[4689]: I1210 13:01:02.129133 4689 scope.go:117] "RemoveContainer" containerID="d8af23709ce50ec6ec15f8a2edecfc467de0e72e321eb13c05fb600fb08be720"
Dec 10 13:01:02 crc kubenswrapper[4689]: I1210 13:01:02.133493 4689 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4jlhj\" (UniqueName: \"kubernetes.io/projected/72ac4d6f-f8de-41e6-96fe-dccc36413b6d-kube-api-access-4jlhj\") on node \"crc\" DevicePath \"\""
Dec 10 13:01:02 crc kubenswrapper[4689]: I1210 13:01:02.207547 4689 scope.go:117] "RemoveContainer" containerID="fe5a6ab8a70d3a58cb08368dbcaa63e17cc12297cd009e8529679ca260e66730"
Dec 10 13:01:02 crc kubenswrapper[4689]: E1210 13:01:02.207920 4689 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fe5a6ab8a70d3a58cb08368dbcaa63e17cc12297cd009e8529679ca260e66730\": container with ID starting with fe5a6ab8a70d3a58cb08368dbcaa63e17cc12297cd009e8529679ca260e66730 not found: ID does not exist" containerID="fe5a6ab8a70d3a58cb08368dbcaa63e17cc12297cd009e8529679ca260e66730"
Dec 10 13:01:02 crc kubenswrapper[4689]: I1210 13:01:02.207949 4689 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fe5a6ab8a70d3a58cb08368dbcaa63e17cc12297cd009e8529679ca260e66730"} err="failed to get container status \"fe5a6ab8a70d3a58cb08368dbcaa63e17cc12297cd009e8529679ca260e66730\": rpc error: code = NotFound desc = could not find container \"fe5a6ab8a70d3a58cb08368dbcaa63e17cc12297cd009e8529679ca260e66730\": container with ID starting with fe5a6ab8a70d3a58cb08368dbcaa63e17cc12297cd009e8529679ca260e66730 not found: ID does not exist"
Dec 10 13:01:02 crc kubenswrapper[4689]: I1210 13:01:02.207988 4689 scope.go:117] "RemoveContainer" containerID="d8af23709ce50ec6ec15f8a2edecfc467de0e72e321eb13c05fb600fb08be720"
Dec 10 13:01:02 crc kubenswrapper[4689]: E1210 13:01:02.208612 4689 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d8af23709ce50ec6ec15f8a2edecfc467de0e72e321eb13c05fb600fb08be720\": container with ID starting with d8af23709ce50ec6ec15f8a2edecfc467de0e72e321eb13c05fb600fb08be720 not found: ID does not exist" containerID="d8af23709ce50ec6ec15f8a2edecfc467de0e72e321eb13c05fb600fb08be720"
Dec 10 13:01:02 crc kubenswrapper[4689]: I1210 13:01:02.208635 4689 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d8af23709ce50ec6ec15f8a2edecfc467de0e72e321eb13c05fb600fb08be720"} err="failed to get container status \"d8af23709ce50ec6ec15f8a2edecfc467de0e72e321eb13c05fb600fb08be720\": rpc error: code = NotFound desc = could not find container \"d8af23709ce50ec6ec15f8a2edecfc467de0e72e321eb13c05fb600fb08be720\": container with ID starting with d8af23709ce50ec6ec15f8a2edecfc467de0e72e321eb13c05fb600fb08be720 not found: ID does not exist"
Dec 10 13:01:02 crc kubenswrapper[4689]: I1210 13:01:02.246270 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/72ac4d6f-f8de-41e6-96fe-dccc36413b6d-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "72ac4d6f-f8de-41e6-96fe-dccc36413b6d" (UID: "72ac4d6f-f8de-41e6-96fe-dccc36413b6d"). InnerVolumeSpecName "must-gather-output". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 10 13:01:02 crc kubenswrapper[4689]: I1210 13:01:02.337808 4689 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/72ac4d6f-f8de-41e6-96fe-dccc36413b6d-must-gather-output\") on node \"crc\" DevicePath \"\""
Dec 10 13:01:02 crc kubenswrapper[4689]: I1210 13:01:02.535316 4689 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="72ac4d6f-f8de-41e6-96fe-dccc36413b6d" path="/var/lib/kubelet/pods/72ac4d6f-f8de-41e6-96fe-dccc36413b6d/volumes"
Dec 10 13:01:04 crc kubenswrapper[4689]: I1210 13:01:04.109347 4689 generic.go:334] "Generic (PLEG): container finished" podID="147dda36-36cc-4cf7-b4e2-627c58743cb0" containerID="32dd93181a9839857cd51a854c13ca0fb297575a431eaa1a5c7e742a846edc8d" exitCode=0
Dec 10 13:01:04 crc kubenswrapper[4689]: I1210 13:01:04.109411 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29422861-q8xsv" event={"ID":"147dda36-36cc-4cf7-b4e2-627c58743cb0","Type":"ContainerDied","Data":"32dd93181a9839857cd51a854c13ca0fb297575a431eaa1a5c7e742a846edc8d"}
Dec 10 13:01:05 crc kubenswrapper[4689]: I1210 13:01:05.494300 4689 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29422861-q8xsv"
Dec 10 13:01:05 crc kubenswrapper[4689]: I1210 13:01:05.605554 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7bg7m\" (UniqueName: \"kubernetes.io/projected/147dda36-36cc-4cf7-b4e2-627c58743cb0-kube-api-access-7bg7m\") pod \"147dda36-36cc-4cf7-b4e2-627c58743cb0\" (UID: \"147dda36-36cc-4cf7-b4e2-627c58743cb0\") "
Dec 10 13:01:05 crc kubenswrapper[4689]: I1210 13:01:05.605653 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/147dda36-36cc-4cf7-b4e2-627c58743cb0-combined-ca-bundle\") pod \"147dda36-36cc-4cf7-b4e2-627c58743cb0\" (UID: \"147dda36-36cc-4cf7-b4e2-627c58743cb0\") "
Dec 10 13:01:05 crc kubenswrapper[4689]: I1210 13:01:05.605713 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/147dda36-36cc-4cf7-b4e2-627c58743cb0-fernet-keys\") pod \"147dda36-36cc-4cf7-b4e2-627c58743cb0\" (UID: \"147dda36-36cc-4cf7-b4e2-627c58743cb0\") "
Dec 10 13:01:05 crc kubenswrapper[4689]: I1210 13:01:05.605816 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/147dda36-36cc-4cf7-b4e2-627c58743cb0-config-data\") pod \"147dda36-36cc-4cf7-b4e2-627c58743cb0\" (UID: \"147dda36-36cc-4cf7-b4e2-627c58743cb0\") "
Dec 10 13:01:05 crc kubenswrapper[4689]: I1210 13:01:05.612250 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/147dda36-36cc-4cf7-b4e2-627c58743cb0-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "147dda36-36cc-4cf7-b4e2-627c58743cb0" (UID: "147dda36-36cc-4cf7-b4e2-627c58743cb0"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 10 13:01:05 crc kubenswrapper[4689]: I1210 13:01:05.616192 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/147dda36-36cc-4cf7-b4e2-627c58743cb0-kube-api-access-7bg7m" (OuterVolumeSpecName: "kube-api-access-7bg7m") pod "147dda36-36cc-4cf7-b4e2-627c58743cb0" (UID: "147dda36-36cc-4cf7-b4e2-627c58743cb0"). InnerVolumeSpecName "kube-api-access-7bg7m". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 10 13:01:05 crc kubenswrapper[4689]: I1210 13:01:05.632781 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/147dda36-36cc-4cf7-b4e2-627c58743cb0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "147dda36-36cc-4cf7-b4e2-627c58743cb0" (UID: "147dda36-36cc-4cf7-b4e2-627c58743cb0"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 10 13:01:05 crc kubenswrapper[4689]: I1210 13:01:05.675595 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/147dda36-36cc-4cf7-b4e2-627c58743cb0-config-data" (OuterVolumeSpecName: "config-data") pod "147dda36-36cc-4cf7-b4e2-627c58743cb0" (UID: "147dda36-36cc-4cf7-b4e2-627c58743cb0"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 10 13:01:05 crc kubenswrapper[4689]: I1210 13:01:05.708086 4689 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7bg7m\" (UniqueName: \"kubernetes.io/projected/147dda36-36cc-4cf7-b4e2-627c58743cb0-kube-api-access-7bg7m\") on node \"crc\" DevicePath \"\""
Dec 10 13:01:05 crc kubenswrapper[4689]: I1210 13:01:05.708129 4689 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/147dda36-36cc-4cf7-b4e2-627c58743cb0-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Dec 10 13:01:05 crc kubenswrapper[4689]: I1210 13:01:05.708140 4689 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/147dda36-36cc-4cf7-b4e2-627c58743cb0-fernet-keys\") on node \"crc\" DevicePath \"\""
Dec 10 13:01:05 crc kubenswrapper[4689]: I1210 13:01:05.708148 4689 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/147dda36-36cc-4cf7-b4e2-627c58743cb0-config-data\") on node \"crc\" DevicePath \"\""
Dec 10 13:01:06 crc kubenswrapper[4689]: I1210 13:01:06.128639 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29422861-q8xsv" event={"ID":"147dda36-36cc-4cf7-b4e2-627c58743cb0","Type":"ContainerDied","Data":"9cc42e4bd476b1b723aacaaa13b8d988769eaf6abd811a644e9e56ddadef7153"}
Dec 10 13:01:06 crc kubenswrapper[4689]: I1210 13:01:06.128885 4689 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9cc42e4bd476b1b723aacaaa13b8d988769eaf6abd811a644e9e56ddadef7153"
Dec 10 13:01:06 crc kubenswrapper[4689]: I1210 13:01:06.128884 4689 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29422861-q8xsv"
Dec 10 13:01:07 crc kubenswrapper[4689]: I1210 13:01:07.143797 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-v9lh5"]
Dec 10 13:01:07 crc kubenswrapper[4689]: E1210 13:01:07.144190 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="72ac4d6f-f8de-41e6-96fe-dccc36413b6d" containerName="copy"
Dec 10 13:01:07 crc kubenswrapper[4689]: I1210 13:01:07.144201 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="72ac4d6f-f8de-41e6-96fe-dccc36413b6d" containerName="copy"
Dec 10 13:01:07 crc kubenswrapper[4689]: E1210 13:01:07.144216 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="72ac4d6f-f8de-41e6-96fe-dccc36413b6d" containerName="gather"
Dec 10 13:01:07 crc kubenswrapper[4689]: I1210 13:01:07.144222 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="72ac4d6f-f8de-41e6-96fe-dccc36413b6d" containerName="gather"
Dec 10 13:01:07 crc kubenswrapper[4689]: E1210 13:01:07.144240 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="147dda36-36cc-4cf7-b4e2-627c58743cb0" containerName="keystone-cron"
Dec 10 13:01:07 crc kubenswrapper[4689]: I1210 13:01:07.144246 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="147dda36-36cc-4cf7-b4e2-627c58743cb0" containerName="keystone-cron"
Dec 10 13:01:07 crc kubenswrapper[4689]: I1210 13:01:07.144417 4689 memory_manager.go:354] "RemoveStaleState removing state" podUID="72ac4d6f-f8de-41e6-96fe-dccc36413b6d" containerName="copy"
Dec 10 13:01:07 crc kubenswrapper[4689]: I1210 13:01:07.144449 4689 memory_manager.go:354] "RemoveStaleState removing state" podUID="147dda36-36cc-4cf7-b4e2-627c58743cb0" containerName="keystone-cron"
Dec 10 13:01:07 crc kubenswrapper[4689]: I1210 13:01:07.144465 4689 memory_manager.go:354] "RemoveStaleState removing state" podUID="72ac4d6f-f8de-41e6-96fe-dccc36413b6d" containerName="gather"
Dec 10 13:01:07 crc kubenswrapper[4689]: I1210 13:01:07.147511 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-v9lh5"
Dec 10 13:01:07 crc kubenswrapper[4689]: I1210 13:01:07.159598 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-v9lh5"]
Dec 10 13:01:07 crc kubenswrapper[4689]: I1210 13:01:07.166882 4689 patch_prober.go:28] interesting pod/machine-config-daemon-db6zk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 10 13:01:07 crc kubenswrapper[4689]: I1210 13:01:07.166937 4689 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-db6zk" podUID="a41ebdcd-910f-4669-992d-296e1a92162b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 10 13:01:07 crc kubenswrapper[4689]: I1210 13:01:07.167051 4689 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-db6zk"
Dec 10 13:01:07 crc kubenswrapper[4689]: I1210 13:01:07.167824 4689 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"601abc48853dbe73a4d877b8a15f4ad5bda2a51a5faa8c0ccd6d9050ac250293"} pod="openshift-machine-config-operator/machine-config-daemon-db6zk" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Dec 10 13:01:07 crc kubenswrapper[4689]: I1210 13:01:07.167888 4689 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-db6zk" podUID="a41ebdcd-910f-4669-992d-296e1a92162b" containerName="machine-config-daemon" containerID="cri-o://601abc48853dbe73a4d877b8a15f4ad5bda2a51a5faa8c0ccd6d9050ac250293" gracePeriod=600
Dec 10 13:01:07 crc kubenswrapper[4689]: I1210 13:01:07.236626 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/23b8c10a-d274-4592-9130-4aaface4bb52-catalog-content\") pod \"redhat-operators-v9lh5\" (UID: \"23b8c10a-d274-4592-9130-4aaface4bb52\") " pod="openshift-marketplace/redhat-operators-v9lh5"
Dec 10 13:01:07 crc kubenswrapper[4689]: I1210 13:01:07.236725 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fk85m\" (UniqueName: \"kubernetes.io/projected/23b8c10a-d274-4592-9130-4aaface4bb52-kube-api-access-fk85m\") pod \"redhat-operators-v9lh5\" (UID: \"23b8c10a-d274-4592-9130-4aaface4bb52\") " pod="openshift-marketplace/redhat-operators-v9lh5"
Dec 10 13:01:07 crc kubenswrapper[4689]: I1210 13:01:07.236817 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/23b8c10a-d274-4592-9130-4aaface4bb52-utilities\") pod \"redhat-operators-v9lh5\" (UID: \"23b8c10a-d274-4592-9130-4aaface4bb52\") " pod="openshift-marketplace/redhat-operators-v9lh5"
Dec 10 13:01:07 crc kubenswrapper[4689]: I1210 13:01:07.338381 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/23b8c10a-d274-4592-9130-4aaface4bb52-utilities\") pod \"redhat-operators-v9lh5\" (UID: \"23b8c10a-d274-4592-9130-4aaface4bb52\") " pod="openshift-marketplace/redhat-operators-v9lh5"
Dec 10 13:01:07 crc kubenswrapper[4689]: I1210 13:01:07.338713 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/23b8c10a-d274-4592-9130-4aaface4bb52-catalog-content\") pod \"redhat-operators-v9lh5\" (UID: \"23b8c10a-d274-4592-9130-4aaface4bb52\") " pod="openshift-marketplace/redhat-operators-v9lh5"
Dec 10 13:01:07 crc kubenswrapper[4689]: I1210 13:01:07.338772 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fk85m\" (UniqueName: \"kubernetes.io/projected/23b8c10a-d274-4592-9130-4aaface4bb52-kube-api-access-fk85m\") pod \"redhat-operators-v9lh5\" (UID: \"23b8c10a-d274-4592-9130-4aaface4bb52\") " pod="openshift-marketplace/redhat-operators-v9lh5"
Dec 10 13:01:07 crc kubenswrapper[4689]: I1210 13:01:07.339618 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/23b8c10a-d274-4592-9130-4aaface4bb52-utilities\") pod \"redhat-operators-v9lh5\" (UID: \"23b8c10a-d274-4592-9130-4aaface4bb52\") " pod="openshift-marketplace/redhat-operators-v9lh5"
Dec 10 13:01:07 crc kubenswrapper[4689]: I1210 13:01:07.339837 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/23b8c10a-d274-4592-9130-4aaface4bb52-catalog-content\") pod \"redhat-operators-v9lh5\" (UID: \"23b8c10a-d274-4592-9130-4aaface4bb52\") " pod="openshift-marketplace/redhat-operators-v9lh5"
Dec 10 13:01:07 crc kubenswrapper[4689]: I1210 13:01:07.367516 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fk85m\" (UniqueName: \"kubernetes.io/projected/23b8c10a-d274-4592-9130-4aaface4bb52-kube-api-access-fk85m\") pod \"redhat-operators-v9lh5\" (UID: \"23b8c10a-d274-4592-9130-4aaface4bb52\") " pod="openshift-marketplace/redhat-operators-v9lh5"
Dec 10 13:01:07 crc kubenswrapper[4689]: I1210 13:01:07.487393 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-v9lh5"
Dec 10 13:01:08 crc kubenswrapper[4689]: I1210 13:01:08.045657 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-v9lh5"]
Dec 10 13:01:08 crc kubenswrapper[4689]: I1210 13:01:08.145357 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-v9lh5" event={"ID":"23b8c10a-d274-4592-9130-4aaface4bb52","Type":"ContainerStarted","Data":"4a2e107f78fee39a8efba4f78b154e21fa2562d5ba261e9fdc1e1a21a18645b9"}
Dec 10 13:01:08 crc kubenswrapper[4689]: I1210 13:01:08.148343 4689 generic.go:334] "Generic (PLEG): container finished" podID="a41ebdcd-910f-4669-992d-296e1a92162b" containerID="601abc48853dbe73a4d877b8a15f4ad5bda2a51a5faa8c0ccd6d9050ac250293" exitCode=0
Dec 10 13:01:08 crc kubenswrapper[4689]: I1210 13:01:08.148371 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-db6zk" event={"ID":"a41ebdcd-910f-4669-992d-296e1a92162b","Type":"ContainerDied","Data":"601abc48853dbe73a4d877b8a15f4ad5bda2a51a5faa8c0ccd6d9050ac250293"}
Dec 10 13:01:08 crc kubenswrapper[4689]: I1210 13:01:08.148399 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-db6zk" event={"ID":"a41ebdcd-910f-4669-992d-296e1a92162b","Type":"ContainerStarted","Data":"36a2f5a3033c6dd9c99d03a29c704126695dab93feeffdac91eeb5d7cacf8826"}
Dec 10 13:01:08 crc kubenswrapper[4689]: I1210 13:01:08.148428 4689 scope.go:117] "RemoveContainer" containerID="ce6f9dfd50adadcec59ac06d9a711336ee8675f152d7050ec5daf10842869d64"
Dec 10 13:01:09 crc kubenswrapper[4689]: I1210 13:01:09.160027 4689 generic.go:334] "Generic (PLEG): container finished" podID="23b8c10a-d274-4592-9130-4aaface4bb52" containerID="5167d91092c52be17cc214ed8aa71d642396dadd09953f5fb619020964665958" exitCode=0
Dec 10 13:01:09 crc kubenswrapper[4689]: I1210 13:01:09.160136 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-v9lh5" event={"ID":"23b8c10a-d274-4592-9130-4aaface4bb52","Type":"ContainerDied","Data":"5167d91092c52be17cc214ed8aa71d642396dadd09953f5fb619020964665958"}
Dec 10 13:01:09 crc kubenswrapper[4689]: I1210 13:01:09.162906 4689 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Dec 10 13:01:11 crc kubenswrapper[4689]: I1210 13:01:11.195162 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-v9lh5" event={"ID":"23b8c10a-d274-4592-9130-4aaface4bb52","Type":"ContainerStarted","Data":"6d64d9b8591e915d5bf6dfbaa6ba27a3b2277ebd968895e5af919508effbcb7a"}
Dec 10 13:01:12 crc kubenswrapper[4689]: I1210 13:01:12.210141 4689 generic.go:334] "Generic (PLEG): container finished" podID="23b8c10a-d274-4592-9130-4aaface4bb52" containerID="6d64d9b8591e915d5bf6dfbaa6ba27a3b2277ebd968895e5af919508effbcb7a" exitCode=0
Dec 10 13:01:12 crc kubenswrapper[4689]: I1210 13:01:12.210237 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-v9lh5" event={"ID":"23b8c10a-d274-4592-9130-4aaface4bb52","Type":"ContainerDied","Data":"6d64d9b8591e915d5bf6dfbaa6ba27a3b2277ebd968895e5af919508effbcb7a"}
Dec 10 13:01:12 crc kubenswrapper[4689]: I1210 13:01:12.210468 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-v9lh5" event={"ID":"23b8c10a-d274-4592-9130-4aaface4bb52","Type":"ContainerStarted","Data":"610ee8e521c1f4bb716239314e135084a9bc6f89e17e3f163e6e5298bd13b789"}
Dec 10 13:01:12 crc kubenswrapper[4689]: I1210 13:01:12.234348 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-v9lh5" podStartSLOduration=2.5137625139999997 podStartE2EDuration="5.234330845s" podCreationTimestamp="2025-12-10 13:01:07 +0000 UTC" firstStartedPulling="2025-12-10 13:01:09.162601162 +0000 UTC m=+2736.950682300" lastFinishedPulling="2025-12-10 13:01:11.883169493 +0000 UTC m=+2739.671250631" observedRunningTime="2025-12-10 13:01:12.232767307 +0000 UTC m=+2740.020848455" watchObservedRunningTime="2025-12-10 13:01:12.234330845 +0000 UTC m=+2740.022411983"
Dec 10 13:01:15 crc kubenswrapper[4689]: I1210 13:01:15.406691 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-hrw9n"]
Dec 10 13:01:15 crc kubenswrapper[4689]: I1210 13:01:15.411189 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hrw9n"
Dec 10 13:01:15 crc kubenswrapper[4689]: I1210 13:01:15.415733 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-hrw9n"]
Dec 10 13:01:15 crc kubenswrapper[4689]: I1210 13:01:15.505128 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jhs7l\" (UniqueName: \"kubernetes.io/projected/a55f0119-140b-461f-9968-9d2e3810b20e-kube-api-access-jhs7l\") pod \"redhat-marketplace-hrw9n\" (UID: \"a55f0119-140b-461f-9968-9d2e3810b20e\") " pod="openshift-marketplace/redhat-marketplace-hrw9n"
Dec 10 13:01:15 crc kubenswrapper[4689]: I1210 13:01:15.505194 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a55f0119-140b-461f-9968-9d2e3810b20e-catalog-content\") pod \"redhat-marketplace-hrw9n\" (UID: \"a55f0119-140b-461f-9968-9d2e3810b20e\") " pod="openshift-marketplace/redhat-marketplace-hrw9n"
Dec 10 13:01:15 crc kubenswrapper[4689]: I1210 13:01:15.505239 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a55f0119-140b-461f-9968-9d2e3810b20e-utilities\") pod \"redhat-marketplace-hrw9n\" (UID: \"a55f0119-140b-461f-9968-9d2e3810b20e\") " pod="openshift-marketplace/redhat-marketplace-hrw9n"
Dec 10 13:01:15 crc kubenswrapper[4689]: I1210 13:01:15.607173 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jhs7l\" (UniqueName: \"kubernetes.io/projected/a55f0119-140b-461f-9968-9d2e3810b20e-kube-api-access-jhs7l\") pod \"redhat-marketplace-hrw9n\" (UID: \"a55f0119-140b-461f-9968-9d2e3810b20e\") " pod="openshift-marketplace/redhat-marketplace-hrw9n"
Dec 10 13:01:15 crc kubenswrapper[4689]: I1210 13:01:15.607296 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a55f0119-140b-461f-9968-9d2e3810b20e-catalog-content\") pod \"redhat-marketplace-hrw9n\" (UID: \"a55f0119-140b-461f-9968-9d2e3810b20e\") " pod="openshift-marketplace/redhat-marketplace-hrw9n"
Dec 10 13:01:15 crc kubenswrapper[4689]: I1210 13:01:15.607358 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a55f0119-140b-461f-9968-9d2e3810b20e-utilities\") pod \"redhat-marketplace-hrw9n\" (UID: \"a55f0119-140b-461f-9968-9d2e3810b20e\") " pod="openshift-marketplace/redhat-marketplace-hrw9n"
Dec 10 13:01:15 crc kubenswrapper[4689]: I1210 13:01:15.607945 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a55f0119-140b-461f-9968-9d2e3810b20e-catalog-content\") pod \"redhat-marketplace-hrw9n\" (UID: \"a55f0119-140b-461f-9968-9d2e3810b20e\") " pod="openshift-marketplace/redhat-marketplace-hrw9n"
Dec 10 13:01:15 crc kubenswrapper[4689]: I1210 13:01:15.607981 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a55f0119-140b-461f-9968-9d2e3810b20e-utilities\") pod \"redhat-marketplace-hrw9n\" (UID: \"a55f0119-140b-461f-9968-9d2e3810b20e\") " pod="openshift-marketplace/redhat-marketplace-hrw9n"
Dec 10 13:01:15 crc kubenswrapper[4689]: I1210 13:01:15.630788 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jhs7l\" (UniqueName: \"kubernetes.io/projected/a55f0119-140b-461f-9968-9d2e3810b20e-kube-api-access-jhs7l\") pod \"redhat-marketplace-hrw9n\" (UID: \"a55f0119-140b-461f-9968-9d2e3810b20e\") " pod="openshift-marketplace/redhat-marketplace-hrw9n"
Dec 10 13:01:15 crc kubenswrapper[4689]: I1210 13:01:15.742619 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hrw9n"
Dec 10 13:01:16 crc kubenswrapper[4689]: I1210 13:01:16.269046 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-hrw9n"]
Dec 10 13:01:16 crc kubenswrapper[4689]: W1210 13:01:16.272395 4689 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda55f0119_140b_461f_9968_9d2e3810b20e.slice/crio-1365b27ee27693e7edbdd49b6d6c56c391a3040fde3a7ce1b1e5eb4d50685530 WatchSource:0}: Error finding container 1365b27ee27693e7edbdd49b6d6c56c391a3040fde3a7ce1b1e5eb4d50685530: Status 404 returned error can't find the container with id 1365b27ee27693e7edbdd49b6d6c56c391a3040fde3a7ce1b1e5eb4d50685530
Dec 10 13:01:17 crc kubenswrapper[4689]: I1210 13:01:17.262959 4689 generic.go:334] "Generic (PLEG): container finished" podID="a55f0119-140b-461f-9968-9d2e3810b20e" containerID="e663ac135e5f73483ae03f92c4a01b1a231c2b8cc7c7ee16a66b1daa38ac5dba" exitCode=0
Dec 10 13:01:17 crc kubenswrapper[4689]: I1210 13:01:17.263142 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hrw9n" event={"ID":"a55f0119-140b-461f-9968-9d2e3810b20e","Type":"ContainerDied","Data":"e663ac135e5f73483ae03f92c4a01b1a231c2b8cc7c7ee16a66b1daa38ac5dba"}
Dec 10 13:01:17 crc kubenswrapper[4689]: I1210 13:01:17.263302 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hrw9n" event={"ID":"a55f0119-140b-461f-9968-9d2e3810b20e","Type":"ContainerStarted","Data":"1365b27ee27693e7edbdd49b6d6c56c391a3040fde3a7ce1b1e5eb4d50685530"}
Dec 10 13:01:17 crc kubenswrapper[4689]: I1210 13:01:17.487930 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-v9lh5"
Dec 10 13:01:17 crc kubenswrapper[4689]: I1210 13:01:17.488002 4689 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-v9lh5"
Dec 10 13:01:17 crc kubenswrapper[4689]: I1210 13:01:17.539528 4689 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-v9lh5"
Dec 10 13:01:18 crc kubenswrapper[4689]: I1210 13:01:18.312912 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-v9lh5"
Dec 10 13:01:19 crc kubenswrapper[4689]: I1210 13:01:19.771855 4689 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-v9lh5"]
Dec 10 13:01:20 crc kubenswrapper[4689]: I1210 13:01:20.300708 4689 generic.go:334] "Generic (PLEG): container finished" podID="a55f0119-140b-461f-9968-9d2e3810b20e" containerID="0e6b49d22ccc9f8b1d5a18540824f2e2704f350bc84cef4e3ae76747d6c877ba" exitCode=0
Dec 10 13:01:20 crc kubenswrapper[4689]: I1210 13:01:20.301064 4689 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-v9lh5" podUID="23b8c10a-d274-4592-9130-4aaface4bb52" containerName="registry-server" containerID="cri-o://610ee8e521c1f4bb716239314e135084a9bc6f89e17e3f163e6e5298bd13b789" gracePeriod=2
Dec 10 13:01:20 crc kubenswrapper[4689]: I1210 13:01:20.301105 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hrw9n" event={"ID":"a55f0119-140b-461f-9968-9d2e3810b20e","Type":"ContainerDied","Data":"0e6b49d22ccc9f8b1d5a18540824f2e2704f350bc84cef4e3ae76747d6c877ba"}
Dec 10 13:01:20 crc kubenswrapper[4689]: I1210 13:01:20.843825 4689 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-v9lh5"
Dec 10 13:01:20 crc kubenswrapper[4689]: I1210 13:01:20.911184 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fk85m\" (UniqueName: \"kubernetes.io/projected/23b8c10a-d274-4592-9130-4aaface4bb52-kube-api-access-fk85m\") pod \"23b8c10a-d274-4592-9130-4aaface4bb52\" (UID: \"23b8c10a-d274-4592-9130-4aaface4bb52\") "
Dec 10 13:01:20 crc kubenswrapper[4689]: I1210 13:01:20.911274 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/23b8c10a-d274-4592-9130-4aaface4bb52-utilities\") pod \"23b8c10a-d274-4592-9130-4aaface4bb52\" (UID: \"23b8c10a-d274-4592-9130-4aaface4bb52\") "
Dec 10 13:01:20 crc kubenswrapper[4689]: I1210 13:01:20.911456 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/23b8c10a-d274-4592-9130-4aaface4bb52-catalog-content\") pod \"23b8c10a-d274-4592-9130-4aaface4bb52\" (UID: \"23b8c10a-d274-4592-9130-4aaface4bb52\") "
Dec 10 13:01:20 crc kubenswrapper[4689]: I1210 13:01:20.917339 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/23b8c10a-d274-4592-9130-4aaface4bb52-utilities" (OuterVolumeSpecName: "utilities") pod "23b8c10a-d274-4592-9130-4aaface4bb52" (UID: "23b8c10a-d274-4592-9130-4aaface4bb52"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 10 13:01:20 crc kubenswrapper[4689]: I1210 13:01:20.928438 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/23b8c10a-d274-4592-9130-4aaface4bb52-kube-api-access-fk85m" (OuterVolumeSpecName: "kube-api-access-fk85m") pod "23b8c10a-d274-4592-9130-4aaface4bb52" (UID: "23b8c10a-d274-4592-9130-4aaface4bb52"). InnerVolumeSpecName "kube-api-access-fk85m". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 10 13:01:21 crc kubenswrapper[4689]: I1210 13:01:21.014527 4689 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fk85m\" (UniqueName: \"kubernetes.io/projected/23b8c10a-d274-4592-9130-4aaface4bb52-kube-api-access-fk85m\") on node \"crc\" DevicePath \"\""
Dec 10 13:01:21 crc kubenswrapper[4689]: I1210 13:01:21.014581 4689 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/23b8c10a-d274-4592-9130-4aaface4bb52-utilities\") on node \"crc\" DevicePath \"\""
Dec 10 13:01:21 crc kubenswrapper[4689]: I1210 13:01:21.313270 4689 generic.go:334] "Generic (PLEG): container finished" podID="23b8c10a-d274-4592-9130-4aaface4bb52" containerID="610ee8e521c1f4bb716239314e135084a9bc6f89e17e3f163e6e5298bd13b789" exitCode=0
Dec 10 13:01:21 crc kubenswrapper[4689]: I1210 13:01:21.313397 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-v9lh5" event={"ID":"23b8c10a-d274-4592-9130-4aaface4bb52","Type":"ContainerDied","Data":"610ee8e521c1f4bb716239314e135084a9bc6f89e17e3f163e6e5298bd13b789"}
Dec 10 13:01:21 crc kubenswrapper[4689]: I1210 13:01:21.313471 4689 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-v9lh5"
Dec 10 13:01:21 crc kubenswrapper[4689]: I1210 13:01:21.313901 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-v9lh5" event={"ID":"23b8c10a-d274-4592-9130-4aaface4bb52","Type":"ContainerDied","Data":"4a2e107f78fee39a8efba4f78b154e21fa2562d5ba261e9fdc1e1a21a18645b9"}
Dec 10 13:01:21 crc kubenswrapper[4689]: I1210 13:01:21.313917 4689 scope.go:117] "RemoveContainer" containerID="610ee8e521c1f4bb716239314e135084a9bc6f89e17e3f163e6e5298bd13b789"
Dec 10 13:01:21 crc kubenswrapper[4689]: I1210 13:01:21.415315 4689 scope.go:117] "RemoveContainer" containerID="6d64d9b8591e915d5bf6dfbaa6ba27a3b2277ebd968895e5af919508effbcb7a"
Dec 10 13:01:21 crc kubenswrapper[4689]: I1210 13:01:21.461716 4689 scope.go:117] "RemoveContainer" containerID="5167d91092c52be17cc214ed8aa71d642396dadd09953f5fb619020964665958"
Dec 10 13:01:21 crc kubenswrapper[4689]: I1210 13:01:21.560076 4689 scope.go:117] "RemoveContainer" containerID="610ee8e521c1f4bb716239314e135084a9bc6f89e17e3f163e6e5298bd13b789"
Dec 10 13:01:21 crc kubenswrapper[4689]: E1210 13:01:21.560561 4689 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"610ee8e521c1f4bb716239314e135084a9bc6f89e17e3f163e6e5298bd13b789\": container with ID starting with 610ee8e521c1f4bb716239314e135084a9bc6f89e17e3f163e6e5298bd13b789 not found: ID does not exist" containerID="610ee8e521c1f4bb716239314e135084a9bc6f89e17e3f163e6e5298bd13b789"
Dec 10 13:01:21 crc kubenswrapper[4689]: I1210 13:01:21.560599 4689 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"610ee8e521c1f4bb716239314e135084a9bc6f89e17e3f163e6e5298bd13b789"} err="failed to get container status \"610ee8e521c1f4bb716239314e135084a9bc6f89e17e3f163e6e5298bd13b789\": rpc error: code = NotFound desc = could not find container \"610ee8e521c1f4bb716239314e135084a9bc6f89e17e3f163e6e5298bd13b789\": container with ID starting with 610ee8e521c1f4bb716239314e135084a9bc6f89e17e3f163e6e5298bd13b789 not found: ID does not exist"
Dec 10 13:01:21 crc kubenswrapper[4689]: I1210 13:01:21.560624 4689 scope.go:117] "RemoveContainer" containerID="6d64d9b8591e915d5bf6dfbaa6ba27a3b2277ebd968895e5af919508effbcb7a"
Dec 10 13:01:21 crc kubenswrapper[4689]: E1210 13:01:21.560945 4689 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6d64d9b8591e915d5bf6dfbaa6ba27a3b2277ebd968895e5af919508effbcb7a\": container with ID starting with 6d64d9b8591e915d5bf6dfbaa6ba27a3b2277ebd968895e5af919508effbcb7a not found: ID does not exist" containerID="6d64d9b8591e915d5bf6dfbaa6ba27a3b2277ebd968895e5af919508effbcb7a"
Dec 10 13:01:21 crc kubenswrapper[4689]: I1210 13:01:21.560991 4689 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6d64d9b8591e915d5bf6dfbaa6ba27a3b2277ebd968895e5af919508effbcb7a"} err="failed to get container status \"6d64d9b8591e915d5bf6dfbaa6ba27a3b2277ebd968895e5af919508effbcb7a\": rpc error: code = NotFound desc = could not find container \"6d64d9b8591e915d5bf6dfbaa6ba27a3b2277ebd968895e5af919508effbcb7a\": container with ID starting with 6d64d9b8591e915d5bf6dfbaa6ba27a3b2277ebd968895e5af919508effbcb7a not found: ID does not exist"
Dec 10 13:01:21 crc kubenswrapper[4689]: I1210 13:01:21.561010 4689 scope.go:117] "RemoveContainer" containerID="5167d91092c52be17cc214ed8aa71d642396dadd09953f5fb619020964665958"
Dec 10 13:01:21 crc kubenswrapper[4689]: E1210 13:01:21.561233 4689 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5167d91092c52be17cc214ed8aa71d642396dadd09953f5fb619020964665958\": container with ID starting with 5167d91092c52be17cc214ed8aa71d642396dadd09953f5fb619020964665958 not found: ID does not exist" containerID="5167d91092c52be17cc214ed8aa71d642396dadd09953f5fb619020964665958"
Dec 10 13:01:21 crc kubenswrapper[4689]: I1210 13:01:21.561261 4689 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5167d91092c52be17cc214ed8aa71d642396dadd09953f5fb619020964665958"} err="failed to get container status \"5167d91092c52be17cc214ed8aa71d642396dadd09953f5fb619020964665958\": rpc error: code = NotFound desc = could not find container \"5167d91092c52be17cc214ed8aa71d642396dadd09953f5fb619020964665958\": container with ID starting with 5167d91092c52be17cc214ed8aa71d642396dadd09953f5fb619020964665958 not found: ID does not exist"
Dec 10 13:01:22 crc kubenswrapper[4689]: I1210 13:01:22.324989 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hrw9n" event={"ID":"a55f0119-140b-461f-9968-9d2e3810b20e","Type":"ContainerStarted","Data":"e36511bc070c51ca87eb719a986b72656e3ca84eb6eceb14c8fad7a7b736ac12"}
Dec 10 13:01:22 crc kubenswrapper[4689]: I1210 13:01:22.343502 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-hrw9n" podStartSLOduration=3.194242822 podStartE2EDuration="7.343483731s" podCreationTimestamp="2025-12-10 13:01:15 +0000 UTC" firstStartedPulling="2025-12-10 13:01:17.264630521 +0000 UTC m=+2745.052711659" lastFinishedPulling="2025-12-10 13:01:21.41387144 +0000 UTC m=+2749.201952568" observedRunningTime="2025-12-10 13:01:22.339000681 +0000 UTC m=+2750.127081829" watchObservedRunningTime="2025-12-10 13:01:22.343483731 +0000 UTC m=+2750.131564869"
Dec 10 13:01:23 crc kubenswrapper[4689]: I1210 13:01:23.608061 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/23b8c10a-d274-4592-9130-4aaface4bb52-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "23b8c10a-d274-4592-9130-4aaface4bb52" (UID: "23b8c10a-d274-4592-9130-4aaface4bb52"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 10 13:01:23 crc kubenswrapper[4689]: I1210 13:01:23.664810 4689 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/23b8c10a-d274-4592-9130-4aaface4bb52-catalog-content\") on node \"crc\" DevicePath \"\""
Dec 10 13:01:23 crc kubenswrapper[4689]: I1210 13:01:23.746771 4689 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-v9lh5"]
Dec 10 13:01:23 crc kubenswrapper[4689]: I1210 13:01:23.754059 4689 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-v9lh5"]
Dec 10 13:01:24 crc kubenswrapper[4689]: I1210 13:01:24.513990 4689 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="23b8c10a-d274-4592-9130-4aaface4bb52" path="/var/lib/kubelet/pods/23b8c10a-d274-4592-9130-4aaface4bb52/volumes"
Dec 10 13:01:25 crc kubenswrapper[4689]: I1210 13:01:25.746197 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-hrw9n"
Dec 10 13:01:25 crc kubenswrapper[4689]: I1210 13:01:25.746257 4689 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-hrw9n"
Dec 10 13:01:25 crc kubenswrapper[4689]: I1210 13:01:25.806112 4689 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-hrw9n"
Dec 10 13:01:26 crc kubenswrapper[4689]: I1210 13:01:26.422600 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-hrw9n"
Dec 10 13:01:26 crc kubenswrapper[4689]: I1210 13:01:26.974530 4689 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-hrw9n"]
Dec 10 13:01:28 crc kubenswrapper[4689]: I1210 13:01:28.391579 4689 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-hrw9n" podUID="a55f0119-140b-461f-9968-9d2e3810b20e" containerName="registry-server" containerID="cri-o://e36511bc070c51ca87eb719a986b72656e3ca84eb6eceb14c8fad7a7b736ac12" gracePeriod=2
Dec 10 13:01:28 crc kubenswrapper[4689]: I1210 13:01:28.895394 4689 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hrw9n"
Dec 10 13:01:28 crc kubenswrapper[4689]: I1210 13:01:28.977009 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a55f0119-140b-461f-9968-9d2e3810b20e-catalog-content\") pod \"a55f0119-140b-461f-9968-9d2e3810b20e\" (UID: \"a55f0119-140b-461f-9968-9d2e3810b20e\") "
Dec 10 13:01:28 crc kubenswrapper[4689]: I1210 13:01:28.977054 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhs7l\" (UniqueName: \"kubernetes.io/projected/a55f0119-140b-461f-9968-9d2e3810b20e-kube-api-access-jhs7l\") pod \"a55f0119-140b-461f-9968-9d2e3810b20e\" (UID: \"a55f0119-140b-461f-9968-9d2e3810b20e\") "
Dec 10 13:01:28 crc kubenswrapper[4689]: I1210 13:01:28.977252 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a55f0119-140b-461f-9968-9d2e3810b20e-utilities\") pod \"a55f0119-140b-461f-9968-9d2e3810b20e\" (UID: \"a55f0119-140b-461f-9968-9d2e3810b20e\") "
Dec 10 13:01:28 crc kubenswrapper[4689]: I1210 13:01:28.978157 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a55f0119-140b-461f-9968-9d2e3810b20e-utilities" (OuterVolumeSpecName: "utilities") pod "a55f0119-140b-461f-9968-9d2e3810b20e" (UID: "a55f0119-140b-461f-9968-9d2e3810b20e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 10 13:01:28 crc kubenswrapper[4689]: I1210 13:01:28.993236 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a55f0119-140b-461f-9968-9d2e3810b20e-kube-api-access-jhs7l" (OuterVolumeSpecName: "kube-api-access-jhs7l") pod "a55f0119-140b-461f-9968-9d2e3810b20e" (UID: "a55f0119-140b-461f-9968-9d2e3810b20e"). InnerVolumeSpecName "kube-api-access-jhs7l". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 10 13:01:28 crc kubenswrapper[4689]: I1210 13:01:28.997333 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a55f0119-140b-461f-9968-9d2e3810b20e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a55f0119-140b-461f-9968-9d2e3810b20e" (UID: "a55f0119-140b-461f-9968-9d2e3810b20e"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 10 13:01:29 crc kubenswrapper[4689]: I1210 13:01:29.079868 4689 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a55f0119-140b-461f-9968-9d2e3810b20e-utilities\") on node \"crc\" DevicePath \"\""
Dec 10 13:01:29 crc kubenswrapper[4689]: I1210 13:01:29.079910 4689 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a55f0119-140b-461f-9968-9d2e3810b20e-catalog-content\") on node \"crc\" DevicePath \"\""
Dec 10 13:01:29 crc kubenswrapper[4689]: I1210 13:01:29.079924 4689 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhs7l\" (UniqueName: \"kubernetes.io/projected/a55f0119-140b-461f-9968-9d2e3810b20e-kube-api-access-jhs7l\") on node \"crc\" DevicePath \"\""
Dec 10 13:01:29 crc kubenswrapper[4689]: I1210 13:01:29.404166 4689 generic.go:334] "Generic (PLEG): container finished" podID="a55f0119-140b-461f-9968-9d2e3810b20e" containerID="e36511bc070c51ca87eb719a986b72656e3ca84eb6eceb14c8fad7a7b736ac12" exitCode=0
Dec 10 13:01:29 crc kubenswrapper[4689]: I1210 13:01:29.404250 4689 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hrw9n"
Dec 10 13:01:29 crc kubenswrapper[4689]: I1210 13:01:29.404263 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hrw9n" event={"ID":"a55f0119-140b-461f-9968-9d2e3810b20e","Type":"ContainerDied","Data":"e36511bc070c51ca87eb719a986b72656e3ca84eb6eceb14c8fad7a7b736ac12"}
Dec 10 13:01:29 crc kubenswrapper[4689]: I1210 13:01:29.405327 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hrw9n" event={"ID":"a55f0119-140b-461f-9968-9d2e3810b20e","Type":"ContainerDied","Data":"1365b27ee27693e7edbdd49b6d6c56c391a3040fde3a7ce1b1e5eb4d50685530"}
Dec 10 13:01:29 crc kubenswrapper[4689]: I1210 13:01:29.405367 4689 scope.go:117] "RemoveContainer" containerID="e36511bc070c51ca87eb719a986b72656e3ca84eb6eceb14c8fad7a7b736ac12"
Dec 10 13:01:29 crc kubenswrapper[4689]: I1210 13:01:29.431944 4689 scope.go:117] "RemoveContainer" containerID="0e6b49d22ccc9f8b1d5a18540824f2e2704f350bc84cef4e3ae76747d6c877ba"
Dec 10 13:01:29 crc kubenswrapper[4689]: I1210 13:01:29.450431 4689 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-hrw9n"]
Dec 10 13:01:29 crc kubenswrapper[4689]: I1210 13:01:29.456671 4689 scope.go:117] "RemoveContainer" containerID="e663ac135e5f73483ae03f92c4a01b1a231c2b8cc7c7ee16a66b1daa38ac5dba"
Dec 10 13:01:29 crc kubenswrapper[4689]: I1210 13:01:29.460830 4689 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-hrw9n"]
Dec 10 13:01:29 crc kubenswrapper[4689]: I1210 13:01:29.500060 4689 scope.go:117] "RemoveContainer" containerID="e36511bc070c51ca87eb719a986b72656e3ca84eb6eceb14c8fad7a7b736ac12"
Dec 10 13:01:29 crc kubenswrapper[4689]: E1210 13:01:29.500482 4689 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e36511bc070c51ca87eb719a986b72656e3ca84eb6eceb14c8fad7a7b736ac12\": container with ID starting with e36511bc070c51ca87eb719a986b72656e3ca84eb6eceb14c8fad7a7b736ac12 not found: ID does not exist" containerID="e36511bc070c51ca87eb719a986b72656e3ca84eb6eceb14c8fad7a7b736ac12"
Dec 10 13:01:29 crc kubenswrapper[4689]: I1210 13:01:29.500515 4689 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e36511bc070c51ca87eb719a986b72656e3ca84eb6eceb14c8fad7a7b736ac12"} err="failed to get container status \"e36511bc070c51ca87eb719a986b72656e3ca84eb6eceb14c8fad7a7b736ac12\": rpc error: code = NotFound desc = could not find container \"e36511bc070c51ca87eb719a986b72656e3ca84eb6eceb14c8fad7a7b736ac12\": container with ID starting with e36511bc070c51ca87eb719a986b72656e3ca84eb6eceb14c8fad7a7b736ac12 not found: ID does not exist"
Dec 10 13:01:29 crc kubenswrapper[4689]: I1210 13:01:29.500553 4689 scope.go:117] "RemoveContainer" containerID="0e6b49d22ccc9f8b1d5a18540824f2e2704f350bc84cef4e3ae76747d6c877ba"
Dec 10 13:01:29 crc kubenswrapper[4689]: E1210 13:01:29.500823 4689 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0e6b49d22ccc9f8b1d5a18540824f2e2704f350bc84cef4e3ae76747d6c877ba\": container with ID starting with 0e6b49d22ccc9f8b1d5a18540824f2e2704f350bc84cef4e3ae76747d6c877ba not found: ID does not exist" containerID="0e6b49d22ccc9f8b1d5a18540824f2e2704f350bc84cef4e3ae76747d6c877ba"
Dec 10 13:01:29 crc kubenswrapper[4689]: I1210 13:01:29.500849 4689 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0e6b49d22ccc9f8b1d5a18540824f2e2704f350bc84cef4e3ae76747d6c877ba"} err="failed to get container status \"0e6b49d22ccc9f8b1d5a18540824f2e2704f350bc84cef4e3ae76747d6c877ba\": rpc error: code = NotFound desc = could not find container \"0e6b49d22ccc9f8b1d5a18540824f2e2704f350bc84cef4e3ae76747d6c877ba\": container with ID starting with 0e6b49d22ccc9f8b1d5a18540824f2e2704f350bc84cef4e3ae76747d6c877ba not found: ID does not exist"
Dec 10 13:01:29 crc kubenswrapper[4689]: I1210 13:01:29.500867 4689 scope.go:117] "RemoveContainer" containerID="e663ac135e5f73483ae03f92c4a01b1a231c2b8cc7c7ee16a66b1daa38ac5dba"
Dec 10 13:01:29 crc kubenswrapper[4689]: E1210 13:01:29.501220 4689 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e663ac135e5f73483ae03f92c4a01b1a231c2b8cc7c7ee16a66b1daa38ac5dba\": container with ID starting with e663ac135e5f73483ae03f92c4a01b1a231c2b8cc7c7ee16a66b1daa38ac5dba not found: ID does not exist" containerID="e663ac135e5f73483ae03f92c4a01b1a231c2b8cc7c7ee16a66b1daa38ac5dba"
Dec 10 13:01:29 crc kubenswrapper[4689]: I1210 13:01:29.501283 4689 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e663ac135e5f73483ae03f92c4a01b1a231c2b8cc7c7ee16a66b1daa38ac5dba"} err="failed to get container status \"e663ac135e5f73483ae03f92c4a01b1a231c2b8cc7c7ee16a66b1daa38ac5dba\": rpc error: code = NotFound desc = could not find container \"e663ac135e5f73483ae03f92c4a01b1a231c2b8cc7c7ee16a66b1daa38ac5dba\": container with ID starting with e663ac135e5f73483ae03f92c4a01b1a231c2b8cc7c7ee16a66b1daa38ac5dba not found: ID does not exist"
Dec 10 13:01:30 crc kubenswrapper[4689]: I1210 13:01:30.508073 4689 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a55f0119-140b-461f-9968-9d2e3810b20e" path="/var/lib/kubelet/pods/a55f0119-140b-461f-9968-9d2e3810b20e/volumes"
Dec 10 13:02:16 crc kubenswrapper[4689]: I1210 13:02:16.016157 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-cgf6p"]
Dec 10 13:02:16 crc kubenswrapper[4689]: E1210 13:02:16.017145 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="23b8c10a-d274-4592-9130-4aaface4bb52" containerName="registry-server"
Dec 10 13:02:16 crc kubenswrapper[4689]: I1210 13:02:16.017160 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="23b8c10a-d274-4592-9130-4aaface4bb52" containerName="registry-server"
Dec 10 13:02:16 crc kubenswrapper[4689]: E1210 13:02:16.017178 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="23b8c10a-d274-4592-9130-4aaface4bb52" containerName="extract-content"
Dec 10 13:02:16 crc kubenswrapper[4689]: I1210 13:02:16.017184 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="23b8c10a-d274-4592-9130-4aaface4bb52" containerName="extract-content"
Dec 10 13:02:16 crc kubenswrapper[4689]: E1210 13:02:16.017212 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="23b8c10a-d274-4592-9130-4aaface4bb52" containerName="extract-utilities"
Dec 10 13:02:16 crc kubenswrapper[4689]: I1210 13:02:16.017218 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="23b8c10a-d274-4592-9130-4aaface4bb52" containerName="extract-utilities"
Dec 10 13:02:16 crc kubenswrapper[4689]: E1210 13:02:16.017229 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a55f0119-140b-461f-9968-9d2e3810b20e" containerName="extract-utilities"
Dec 10 13:02:16 crc kubenswrapper[4689]: I1210 13:02:16.017235 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="a55f0119-140b-461f-9968-9d2e3810b20e" containerName="extract-utilities"
Dec 10 13:02:16 crc kubenswrapper[4689]: E1210 13:02:16.017249 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a55f0119-140b-461f-9968-9d2e3810b20e" containerName="registry-server"
Dec 10 13:02:16 crc kubenswrapper[4689]: I1210 13:02:16.017254 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="a55f0119-140b-461f-9968-9d2e3810b20e" containerName="registry-server"
Dec 10 13:02:16 crc kubenswrapper[4689]: E1210 13:02:16.017268 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a55f0119-140b-461f-9968-9d2e3810b20e" containerName="extract-content"
Dec 10 13:02:16 crc kubenswrapper[4689]: I1210 13:02:16.017274 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="a55f0119-140b-461f-9968-9d2e3810b20e" containerName="extract-content"
Dec 10 13:02:16 crc kubenswrapper[4689]: I1210 13:02:16.017453 4689 memory_manager.go:354] "RemoveStaleState removing state" podUID="23b8c10a-d274-4592-9130-4aaface4bb52" containerName="registry-server"
Dec 10 13:02:16 crc kubenswrapper[4689]: I1210 13:02:16.017479 4689 memory_manager.go:354] "RemoveStaleState removing state" podUID="a55f0119-140b-461f-9968-9d2e3810b20e" containerName="registry-server"
Dec 10 13:02:16 crc kubenswrapper[4689]: I1210 13:02:16.019461 4689 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-marketplace/community-operators-cgf6p" Dec 10 13:02:16 crc kubenswrapper[4689]: I1210 13:02:16.033753 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-cgf6p"] Dec 10 13:02:16 crc kubenswrapper[4689]: I1210 13:02:16.143503 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a5608e31-6d7b-465e-bcac-30123aa2479b-utilities\") pod \"community-operators-cgf6p\" (UID: \"a5608e31-6d7b-465e-bcac-30123aa2479b\") " pod="openshift-marketplace/community-operators-cgf6p" Dec 10 13:02:16 crc kubenswrapper[4689]: I1210 13:02:16.143578 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a5608e31-6d7b-465e-bcac-30123aa2479b-catalog-content\") pod \"community-operators-cgf6p\" (UID: \"a5608e31-6d7b-465e-bcac-30123aa2479b\") " pod="openshift-marketplace/community-operators-cgf6p" Dec 10 13:02:16 crc kubenswrapper[4689]: I1210 13:02:16.143712 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8mzph\" (UniqueName: \"kubernetes.io/projected/a5608e31-6d7b-465e-bcac-30123aa2479b-kube-api-access-8mzph\") pod \"community-operators-cgf6p\" (UID: \"a5608e31-6d7b-465e-bcac-30123aa2479b\") " pod="openshift-marketplace/community-operators-cgf6p" Dec 10 13:02:16 crc kubenswrapper[4689]: I1210 13:02:16.247315 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a5608e31-6d7b-465e-bcac-30123aa2479b-utilities\") pod \"community-operators-cgf6p\" (UID: \"a5608e31-6d7b-465e-bcac-30123aa2479b\") " pod="openshift-marketplace/community-operators-cgf6p" Dec 10 13:02:16 crc kubenswrapper[4689]: I1210 13:02:16.247453 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a5608e31-6d7b-465e-bcac-30123aa2479b-catalog-content\") pod \"community-operators-cgf6p\" (UID: \"a5608e31-6d7b-465e-bcac-30123aa2479b\") " pod="openshift-marketplace/community-operators-cgf6p" Dec 10 13:02:16 crc kubenswrapper[4689]: I1210 13:02:16.247528 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8mzph\" (UniqueName: \"kubernetes.io/projected/a5608e31-6d7b-465e-bcac-30123aa2479b-kube-api-access-8mzph\") pod \"community-operators-cgf6p\" (UID: \"a5608e31-6d7b-465e-bcac-30123aa2479b\") " pod="openshift-marketplace/community-operators-cgf6p" Dec 10 13:02:16 crc kubenswrapper[4689]: I1210 13:02:16.247815 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a5608e31-6d7b-465e-bcac-30123aa2479b-utilities\") pod \"community-operators-cgf6p\" (UID: \"a5608e31-6d7b-465e-bcac-30123aa2479b\") " pod="openshift-marketplace/community-operators-cgf6p" Dec 10 13:02:16 crc kubenswrapper[4689]: I1210 13:02:16.247964 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a5608e31-6d7b-465e-bcac-30123aa2479b-catalog-content\") pod \"community-operators-cgf6p\" (UID: \"a5608e31-6d7b-465e-bcac-30123aa2479b\") " pod="openshift-marketplace/community-operators-cgf6p" Dec 10 13:02:16 crc kubenswrapper[4689]: I1210 13:02:16.277315 4689 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-8mzph\" (UniqueName: \"kubernetes.io/projected/a5608e31-6d7b-465e-bcac-30123aa2479b-kube-api-access-8mzph\") pod \"community-operators-cgf6p\" (UID: \"a5608e31-6d7b-465e-bcac-30123aa2479b\") " pod="openshift-marketplace/community-operators-cgf6p" Dec 10 13:02:16 crc kubenswrapper[4689]: I1210 13:02:16.343492 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-cgf6p" Dec 10 13:02:16 crc kubenswrapper[4689]: I1210 13:02:16.862111 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-cgf6p"] Dec 10 13:02:16 crc kubenswrapper[4689]: W1210 13:02:16.869512 4689 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda5608e31_6d7b_465e_bcac_30123aa2479b.slice/crio-97928aa23ec1c71c3340707d464686fb0440d090619f71485889334f1d664f3c WatchSource:0}: Error finding container 97928aa23ec1c71c3340707d464686fb0440d090619f71485889334f1d664f3c: Status 404 returned error can't find the container with id 97928aa23ec1c71c3340707d464686fb0440d090619f71485889334f1d664f3c Dec 10 13:02:17 crc kubenswrapper[4689]: I1210 13:02:17.837826 4689 generic.go:334] "Generic (PLEG): container finished" podID="a5608e31-6d7b-465e-bcac-30123aa2479b" containerID="4d2b72008e909269cd15bffe2bed23a20d175dd7db1f13362345862e5ca0b86e" exitCode=0 Dec 10 13:02:17 crc kubenswrapper[4689]: I1210 13:02:17.838167 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cgf6p" event={"ID":"a5608e31-6d7b-465e-bcac-30123aa2479b","Type":"ContainerDied","Data":"4d2b72008e909269cd15bffe2bed23a20d175dd7db1f13362345862e5ca0b86e"} Dec 10 13:02:17 crc kubenswrapper[4689]: I1210 13:02:17.838200 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cgf6p" event={"ID":"a5608e31-6d7b-465e-bcac-30123aa2479b","Type":"ContainerStarted","Data":"97928aa23ec1c71c3340707d464686fb0440d090619f71485889334f1d664f3c"} Dec 10 13:02:21 crc kubenswrapper[4689]: I1210 13:02:21.879186 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cgf6p" event={"ID":"a5608e31-6d7b-465e-bcac-30123aa2479b","Type":"ContainerStarted","Data":"eb432c287ddeabb309ea81dc204ceb521e73e8638fe09380516cc249a441bac7"} Dec 10 13:02:22 crc kubenswrapper[4689]: I1210 13:02:22.892193 4689 generic.go:334] "Generic (PLEG): container finished" podID="a5608e31-6d7b-465e-bcac-30123aa2479b" containerID="eb432c287ddeabb309ea81dc204ceb521e73e8638fe09380516cc249a441bac7" exitCode=0 Dec 10 13:02:22 crc kubenswrapper[4689]: I1210 13:02:22.892363 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cgf6p" event={"ID":"a5608e31-6d7b-465e-bcac-30123aa2479b","Type":"ContainerDied","Data":"eb432c287ddeabb309ea81dc204ceb521e73e8638fe09380516cc249a441bac7"} Dec 10 13:02:23 crc kubenswrapper[4689]: I1210 13:02:23.908462 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cgf6p" event={"ID":"a5608e31-6d7b-465e-bcac-30123aa2479b","Type":"ContainerStarted","Data":"8f9c2f0ac5159b20ee0210b949155ef842c1eb9045e9e36db575414431a284bf"} Dec 10 13:02:23 crc kubenswrapper[4689]: I1210 13:02:23.943482 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-cgf6p" 
podStartSLOduration=3.111672684 podStartE2EDuration="8.9434643s" podCreationTimestamp="2025-12-10 13:02:15 +0000 UTC" firstStartedPulling="2025-12-10 13:02:17.840297134 +0000 UTC m=+2805.628378272" lastFinishedPulling="2025-12-10 13:02:23.67208875 +0000 UTC m=+2811.460169888" observedRunningTime="2025-12-10 13:02:23.938685424 +0000 UTC m=+2811.726766582" watchObservedRunningTime="2025-12-10 13:02:23.9434643 +0000 UTC m=+2811.731545438" Dec 10 13:02:26 crc kubenswrapper[4689]: I1210 13:02:26.344555 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-cgf6p" Dec 10 13:02:26 crc kubenswrapper[4689]: I1210 13:02:26.344889 4689 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-cgf6p" Dec 10 13:02:26 crc kubenswrapper[4689]: I1210 13:02:26.418425 4689 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-cgf6p" Dec 10 13:02:36 crc kubenswrapper[4689]: I1210 13:02:36.404697 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-cgf6p" Dec 10 13:02:36 crc kubenswrapper[4689]: I1210 13:02:36.472948 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-cgf6p"] Dec 10 13:02:36 crc kubenswrapper[4689]: I1210 13:02:36.529536 4689 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-vj842"] Dec 10 13:02:36 crc kubenswrapper[4689]: I1210 13:02:36.530271 4689 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-vj842" podUID="f8b91596-d292-4ef0-bb0d-92cb3224c20c" containerName="registry-server" containerID="cri-o://21a0fdda967da9a3cf15082efdbad262b13b846d896afd19262c435ca7c05f48" gracePeriod=2 Dec 10 13:02:37 crc kubenswrapper[4689]: I1210 13:02:37.026412 4689 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-vj842" Dec 10 13:02:37 crc kubenswrapper[4689]: I1210 13:02:37.052086 4689 generic.go:334] "Generic (PLEG): container finished" podID="f8b91596-d292-4ef0-bb0d-92cb3224c20c" containerID="21a0fdda967da9a3cf15082efdbad262b13b846d896afd19262c435ca7c05f48" exitCode=0 Dec 10 13:02:37 crc kubenswrapper[4689]: I1210 13:02:37.052154 4689 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-vj842" Dec 10 13:02:37 crc kubenswrapper[4689]: I1210 13:02:37.052175 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vj842" event={"ID":"f8b91596-d292-4ef0-bb0d-92cb3224c20c","Type":"ContainerDied","Data":"21a0fdda967da9a3cf15082efdbad262b13b846d896afd19262c435ca7c05f48"} Dec 10 13:02:37 crc kubenswrapper[4689]: I1210 13:02:37.052250 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vj842" event={"ID":"f8b91596-d292-4ef0-bb0d-92cb3224c20c","Type":"ContainerDied","Data":"99c40ebed7ee8bfabc190ae69fa319a3822ad3970e6a5d0c6cd9c8232a645022"} Dec 10 13:02:37 crc kubenswrapper[4689]: I1210 13:02:37.052280 4689 scope.go:117] "RemoveContainer" containerID="21a0fdda967da9a3cf15082efdbad262b13b846d896afd19262c435ca7c05f48" Dec 10 13:02:37 crc kubenswrapper[4689]: I1210 13:02:37.078377 4689 scope.go:117] "RemoveContainer" containerID="5ea7ec0bfc8e1fedf72c70c71d215253a8c6a067b5a1f503f6a2c294ad5f071a" Dec 10 13:02:37 crc kubenswrapper[4689]: I1210 13:02:37.126268 4689 scope.go:117] "RemoveContainer" containerID="cb7303a09c051c88587b1a6250dd96a42347a625efba22489a66fb5bec949d7a" Dec 10 13:02:37 crc kubenswrapper[4689]: I1210 13:02:37.143959 4689 scope.go:117] "RemoveContainer" containerID="21a0fdda967da9a3cf15082efdbad262b13b846d896afd19262c435ca7c05f48" Dec 10 13:02:37 crc kubenswrapper[4689]: E1210 13:02:37.144387 4689 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"21a0fdda967da9a3cf15082efdbad262b13b846d896afd19262c435ca7c05f48\": container with ID starting with 21a0fdda967da9a3cf15082efdbad262b13b846d896afd19262c435ca7c05f48 not found: ID does not exist" containerID="21a0fdda967da9a3cf15082efdbad262b13b846d896afd19262c435ca7c05f48" Dec 10 13:02:37 crc kubenswrapper[4689]: I1210 13:02:37.144431 4689 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"21a0fdda967da9a3cf15082efdbad262b13b846d896afd19262c435ca7c05f48"} err="failed to get container status \"21a0fdda967da9a3cf15082efdbad262b13b846d896afd19262c435ca7c05f48\": rpc error: code = NotFound desc = could not find container \"21a0fdda967da9a3cf15082efdbad262b13b846d896afd19262c435ca7c05f48\": container with ID starting with 21a0fdda967da9a3cf15082efdbad262b13b846d896afd19262c435ca7c05f48 not found: ID does not exist" Dec 10 13:02:37 crc kubenswrapper[4689]: I1210 13:02:37.144458 4689 scope.go:117] "RemoveContainer" containerID="5ea7ec0bfc8e1fedf72c70c71d215253a8c6a067b5a1f503f6a2c294ad5f071a" Dec 10 13:02:37 crc kubenswrapper[4689]: E1210 13:02:37.144679 4689 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5ea7ec0bfc8e1fedf72c70c71d215253a8c6a067b5a1f503f6a2c294ad5f071a\": container with ID starting with 5ea7ec0bfc8e1fedf72c70c71d215253a8c6a067b5a1f503f6a2c294ad5f071a not found: ID does not exist" containerID="5ea7ec0bfc8e1fedf72c70c71d215253a8c6a067b5a1f503f6a2c294ad5f071a" Dec 10 13:02:37 crc kubenswrapper[4689]: I1210 13:02:37.144710 4689 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5ea7ec0bfc8e1fedf72c70c71d215253a8c6a067b5a1f503f6a2c294ad5f071a"} err="failed to get container status \"5ea7ec0bfc8e1fedf72c70c71d215253a8c6a067b5a1f503f6a2c294ad5f071a\": rpc error: code = NotFound desc = could not find container 
\"5ea7ec0bfc8e1fedf72c70c71d215253a8c6a067b5a1f503f6a2c294ad5f071a\": container with ID starting with 5ea7ec0bfc8e1fedf72c70c71d215253a8c6a067b5a1f503f6a2c294ad5f071a not found: ID does not exist" Dec 10 13:02:37 crc kubenswrapper[4689]: I1210 13:02:37.144730 4689 scope.go:117] "RemoveContainer" containerID="cb7303a09c051c88587b1a6250dd96a42347a625efba22489a66fb5bec949d7a" Dec 10 13:02:37 crc kubenswrapper[4689]: E1210 13:02:37.145015 4689 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cb7303a09c051c88587b1a6250dd96a42347a625efba22489a66fb5bec949d7a\": container with ID starting with cb7303a09c051c88587b1a6250dd96a42347a625efba22489a66fb5bec949d7a not found: ID does not exist" containerID="cb7303a09c051c88587b1a6250dd96a42347a625efba22489a66fb5bec949d7a" Dec 10 13:02:37 crc kubenswrapper[4689]: I1210 13:02:37.145056 4689 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cb7303a09c051c88587b1a6250dd96a42347a625efba22489a66fb5bec949d7a"} err="failed to get container status \"cb7303a09c051c88587b1a6250dd96a42347a625efba22489a66fb5bec949d7a\": rpc error: code = NotFound desc = could not find container \"cb7303a09c051c88587b1a6250dd96a42347a625efba22489a66fb5bec949d7a\": container with ID starting with cb7303a09c051c88587b1a6250dd96a42347a625efba22489a66fb5bec949d7a not found: ID does not exist" Dec 10 13:02:37 crc kubenswrapper[4689]: I1210 13:02:37.165932 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f8b91596-d292-4ef0-bb0d-92cb3224c20c-utilities\") pod \"f8b91596-d292-4ef0-bb0d-92cb3224c20c\" (UID: \"f8b91596-d292-4ef0-bb0d-92cb3224c20c\") " Dec 10 13:02:37 crc kubenswrapper[4689]: I1210 13:02:37.166101 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m9lvh\" (UniqueName: \"kubernetes.io/projected/f8b91596-d292-4ef0-bb0d-92cb3224c20c-kube-api-access-m9lvh\") pod \"f8b91596-d292-4ef0-bb0d-92cb3224c20c\" (UID: \"f8b91596-d292-4ef0-bb0d-92cb3224c20c\") " Dec 10 13:02:37 crc kubenswrapper[4689]: I1210 13:02:37.166182 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f8b91596-d292-4ef0-bb0d-92cb3224c20c-catalog-content\") pod \"f8b91596-d292-4ef0-bb0d-92cb3224c20c\" (UID: \"f8b91596-d292-4ef0-bb0d-92cb3224c20c\") " Dec 10 13:02:37 crc kubenswrapper[4689]: I1210 13:02:37.166801 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f8b91596-d292-4ef0-bb0d-92cb3224c20c-utilities" (OuterVolumeSpecName: "utilities") pod "f8b91596-d292-4ef0-bb0d-92cb3224c20c" (UID: "f8b91596-d292-4ef0-bb0d-92cb3224c20c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 10 13:02:37 crc kubenswrapper[4689]: I1210 13:02:37.171591 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f8b91596-d292-4ef0-bb0d-92cb3224c20c-kube-api-access-m9lvh" (OuterVolumeSpecName: "kube-api-access-m9lvh") pod "f8b91596-d292-4ef0-bb0d-92cb3224c20c" (UID: "f8b91596-d292-4ef0-bb0d-92cb3224c20c"). InnerVolumeSpecName "kube-api-access-m9lvh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 13:02:37 crc kubenswrapper[4689]: I1210 13:02:37.218304 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f8b91596-d292-4ef0-bb0d-92cb3224c20c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f8b91596-d292-4ef0-bb0d-92cb3224c20c" (UID: "f8b91596-d292-4ef0-bb0d-92cb3224c20c"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 10 13:02:37 crc kubenswrapper[4689]: I1210 13:02:37.270461 4689 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f8b91596-d292-4ef0-bb0d-92cb3224c20c-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 10 13:02:37 crc kubenswrapper[4689]: I1210 13:02:37.270498 4689 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f8b91596-d292-4ef0-bb0d-92cb3224c20c-utilities\") on node \"crc\" DevicePath \"\"" Dec 10 13:02:37 crc kubenswrapper[4689]: I1210 13:02:37.270514 4689 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m9lvh\" (UniqueName: \"kubernetes.io/projected/f8b91596-d292-4ef0-bb0d-92cb3224c20c-kube-api-access-m9lvh\") on node \"crc\" DevicePath \"\"" Dec 10 13:02:37 crc kubenswrapper[4689]: I1210 13:02:37.390912 4689 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-vj842"] Dec 10 13:02:37 crc kubenswrapper[4689]: I1210 13:02:37.397723 4689 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-vj842"] Dec 10 13:02:38 crc kubenswrapper[4689]: I1210 13:02:38.544735 4689 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f8b91596-d292-4ef0-bb0d-92cb3224c20c" path="/var/lib/kubelet/pods/f8b91596-d292-4ef0-bb0d-92cb3224c20c/volumes" Dec 10 13:02:46 crc kubenswrapper[4689]: I1210 13:02:46.692080 4689 scope.go:117] "RemoveContainer" containerID="eba2d208d87b29797d28a0a4d1b9c2a66de577d6c22b16eccf829c2098470594" Dec 10 13:02:53 crc kubenswrapper[4689]: I1210 13:02:53.759046 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-mhm9t"] Dec 10 13:02:53 crc kubenswrapper[4689]: E1210 13:02:53.760134 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f8b91596-d292-4ef0-bb0d-92cb3224c20c" containerName="extract-utilities" Dec 10 13:02:53 crc kubenswrapper[4689]: I1210 13:02:53.760157 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="f8b91596-d292-4ef0-bb0d-92cb3224c20c" containerName="extract-utilities" Dec 10 13:02:53 crc kubenswrapper[4689]: E1210 13:02:53.760182 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f8b91596-d292-4ef0-bb0d-92cb3224c20c" containerName="registry-server" Dec 10 13:02:53 crc kubenswrapper[4689]: I1210 13:02:53.760193 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="f8b91596-d292-4ef0-bb0d-92cb3224c20c" containerName="registry-server" Dec 10 13:02:53 crc kubenswrapper[4689]: E1210 13:02:53.760236 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f8b91596-d292-4ef0-bb0d-92cb3224c20c" containerName="extract-content" Dec 10 13:02:53 crc kubenswrapper[4689]: I1210 13:02:53.760247 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="f8b91596-d292-4ef0-bb0d-92cb3224c20c" containerName="extract-content" Dec 10 13:02:53 crc kubenswrapper[4689]: I1210 13:02:53.760637 4689 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="f8b91596-d292-4ef0-bb0d-92cb3224c20c" containerName="registry-server" Dec 10 13:02:53 crc kubenswrapper[4689]: I1210 13:02:53.762690 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-mhm9t" Dec 10 13:02:53 crc kubenswrapper[4689]: I1210 13:02:53.775991 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-mhm9t"] Dec 10 13:02:53 crc kubenswrapper[4689]: I1210 13:02:53.860463 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hzl4s\" (UniqueName: \"kubernetes.io/projected/63a79dcf-f262-4283-81b6-09a4de437e1b-kube-api-access-hzl4s\") pod \"certified-operators-mhm9t\" (UID: \"63a79dcf-f262-4283-81b6-09a4de437e1b\") " pod="openshift-marketplace/certified-operators-mhm9t" Dec 10 13:02:53 crc kubenswrapper[4689]: I1210 13:02:53.860550 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/63a79dcf-f262-4283-81b6-09a4de437e1b-catalog-content\") pod \"certified-operators-mhm9t\" (UID: \"63a79dcf-f262-4283-81b6-09a4de437e1b\") " pod="openshift-marketplace/certified-operators-mhm9t" Dec 10 13:02:53 crc kubenswrapper[4689]: I1210 13:02:53.860587 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/63a79dcf-f262-4283-81b6-09a4de437e1b-utilities\") pod \"certified-operators-mhm9t\" (UID: \"63a79dcf-f262-4283-81b6-09a4de437e1b\") " pod="openshift-marketplace/certified-operators-mhm9t" Dec 10 13:02:53 crc kubenswrapper[4689]: I1210 13:02:53.962956 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hzl4s\" (UniqueName: \"kubernetes.io/projected/63a79dcf-f262-4283-81b6-09a4de437e1b-kube-api-access-hzl4s\") pod \"certified-operators-mhm9t\" (UID: \"63a79dcf-f262-4283-81b6-09a4de437e1b\") " pod="openshift-marketplace/certified-operators-mhm9t" Dec 10 13:02:53 crc kubenswrapper[4689]: I1210 13:02:53.963129 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/63a79dcf-f262-4283-81b6-09a4de437e1b-catalog-content\") pod \"certified-operators-mhm9t\" (UID: \"63a79dcf-f262-4283-81b6-09a4de437e1b\") " pod="openshift-marketplace/certified-operators-mhm9t" Dec 10 13:02:53 crc kubenswrapper[4689]: I1210 13:02:53.963163 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/63a79dcf-f262-4283-81b6-09a4de437e1b-utilities\") pod \"certified-operators-mhm9t\" (UID: \"63a79dcf-f262-4283-81b6-09a4de437e1b\") " pod="openshift-marketplace/certified-operators-mhm9t" Dec 10 13:02:53 crc kubenswrapper[4689]: I1210 13:02:53.963646 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/63a79dcf-f262-4283-81b6-09a4de437e1b-catalog-content\") pod \"certified-operators-mhm9t\" (UID: \"63a79dcf-f262-4283-81b6-09a4de437e1b\") " pod="openshift-marketplace/certified-operators-mhm9t" Dec 10 13:02:53 crc kubenswrapper[4689]: I1210 13:02:53.963687 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/63a79dcf-f262-4283-81b6-09a4de437e1b-utilities\") pod 
\"certified-operators-mhm9t\" (UID: \"63a79dcf-f262-4283-81b6-09a4de437e1b\") " pod="openshift-marketplace/certified-operators-mhm9t" Dec 10 13:02:53 crc kubenswrapper[4689]: I1210 13:02:53.979872 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hzl4s\" (UniqueName: \"kubernetes.io/projected/63a79dcf-f262-4283-81b6-09a4de437e1b-kube-api-access-hzl4s\") pod \"certified-operators-mhm9t\" (UID: \"63a79dcf-f262-4283-81b6-09a4de437e1b\") " pod="openshift-marketplace/certified-operators-mhm9t" Dec 10 13:02:54 crc kubenswrapper[4689]: I1210 13:02:54.087187 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-mhm9t" Dec 10 13:02:54 crc kubenswrapper[4689]: I1210 13:02:54.658566 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-mhm9t"] Dec 10 13:02:55 crc kubenswrapper[4689]: I1210 13:02:55.222415 4689 generic.go:334] "Generic (PLEG): container finished" podID="63a79dcf-f262-4283-81b6-09a4de437e1b" containerID="69a500a38f22a93dd92fadccddb77e2c5d2e830617c7fff38fa6c0b4153e7b64" exitCode=0 Dec 10 13:02:55 crc kubenswrapper[4689]: I1210 13:02:55.222654 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mhm9t" event={"ID":"63a79dcf-f262-4283-81b6-09a4de437e1b","Type":"ContainerDied","Data":"69a500a38f22a93dd92fadccddb77e2c5d2e830617c7fff38fa6c0b4153e7b64"} Dec 10 13:02:55 crc kubenswrapper[4689]: I1210 13:02:55.222683 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mhm9t" event={"ID":"63a79dcf-f262-4283-81b6-09a4de437e1b","Type":"ContainerStarted","Data":"233722f85f8a64e4e30e3d842d4bd0d2a395cb0c6c9b0dc56d58276fead12bc5"} Dec 10 13:02:56 crc kubenswrapper[4689]: I1210 13:02:56.232752 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mhm9t" event={"ID":"63a79dcf-f262-4283-81b6-09a4de437e1b","Type":"ContainerStarted","Data":"0921d1eba466d34a8208cb26ef1adbdd649d5fdc34a53e0e89fcb5f06e066f7d"} Dec 10 13:02:57 crc kubenswrapper[4689]: I1210 13:02:57.244343 4689 generic.go:334] "Generic (PLEG): container finished" podID="63a79dcf-f262-4283-81b6-09a4de437e1b" containerID="0921d1eba466d34a8208cb26ef1adbdd649d5fdc34a53e0e89fcb5f06e066f7d" exitCode=0 Dec 10 13:02:57 crc kubenswrapper[4689]: I1210 13:02:57.244421 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mhm9t" event={"ID":"63a79dcf-f262-4283-81b6-09a4de437e1b","Type":"ContainerDied","Data":"0921d1eba466d34a8208cb26ef1adbdd649d5fdc34a53e0e89fcb5f06e066f7d"} Dec 10 13:02:58 crc kubenswrapper[4689]: I1210 13:02:58.261597 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mhm9t" event={"ID":"63a79dcf-f262-4283-81b6-09a4de437e1b","Type":"ContainerStarted","Data":"69f0aa9607d29d459aeb304958f5d18d1c8b65e1df410cd82868ea446e3422e9"} Dec 10 13:03:04 crc kubenswrapper[4689]: I1210 13:03:04.088022 4689 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-mhm9t" Dec 10 13:03:04 crc kubenswrapper[4689]: I1210 13:03:04.088597 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-mhm9t" Dec 10 13:03:04 crc kubenswrapper[4689]: I1210 13:03:04.151693 4689 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openshift-marketplace/certified-operators-mhm9t" Dec 10 13:03:04 crc kubenswrapper[4689]: I1210 13:03:04.176563 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-mhm9t" podStartSLOduration=8.609848952 podStartE2EDuration="11.176546945s" podCreationTimestamp="2025-12-10 13:02:53 +0000 UTC" firstStartedPulling="2025-12-10 13:02:55.224559265 +0000 UTC m=+2843.012640403" lastFinishedPulling="2025-12-10 13:02:57.791257258 +0000 UTC m=+2845.579338396" observedRunningTime="2025-12-10 13:02:58.28036637 +0000 UTC m=+2846.068447528" watchObservedRunningTime="2025-12-10 13:03:04.176546945 +0000 UTC m=+2851.964628073" Dec 10 13:03:04 crc kubenswrapper[4689]: I1210 13:03:04.390508 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-mhm9t" Dec 10 13:03:04 crc kubenswrapper[4689]: I1210 13:03:04.441491 4689 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-mhm9t"] Dec 10 13:03:06 crc kubenswrapper[4689]: I1210 13:03:06.338708 4689 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-mhm9t" podUID="63a79dcf-f262-4283-81b6-09a4de437e1b" containerName="registry-server" containerID="cri-o://69f0aa9607d29d459aeb304958f5d18d1c8b65e1df410cd82868ea446e3422e9" gracePeriod=2 Dec 10 13:03:07 crc kubenswrapper[4689]: I1210 13:03:07.166227 4689 patch_prober.go:28] interesting pod/machine-config-daemon-db6zk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 10 13:03:07 crc kubenswrapper[4689]: I1210 13:03:07.166605 4689 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-db6zk" podUID="a41ebdcd-910f-4669-992d-296e1a92162b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 10 13:03:07 crc kubenswrapper[4689]: I1210 13:03:07.351775 4689 generic.go:334] "Generic (PLEG): container finished" podID="63a79dcf-f262-4283-81b6-09a4de437e1b" containerID="69f0aa9607d29d459aeb304958f5d18d1c8b65e1df410cd82868ea446e3422e9" exitCode=0 Dec 10 13:03:07 crc kubenswrapper[4689]: I1210 13:03:07.351823 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mhm9t" event={"ID":"63a79dcf-f262-4283-81b6-09a4de437e1b","Type":"ContainerDied","Data":"69f0aa9607d29d459aeb304958f5d18d1c8b65e1df410cd82868ea446e3422e9"} Dec 10 13:03:07 crc kubenswrapper[4689]: I1210 13:03:07.959706 4689 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-mhm9t" Dec 10 13:03:07 crc kubenswrapper[4689]: I1210 13:03:07.982373 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hzl4s\" (UniqueName: \"kubernetes.io/projected/63a79dcf-f262-4283-81b6-09a4de437e1b-kube-api-access-hzl4s\") pod \"63a79dcf-f262-4283-81b6-09a4de437e1b\" (UID: \"63a79dcf-f262-4283-81b6-09a4de437e1b\") " Dec 10 13:03:07 crc kubenswrapper[4689]: I1210 13:03:07.982455 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/63a79dcf-f262-4283-81b6-09a4de437e1b-utilities\") pod \"63a79dcf-f262-4283-81b6-09a4de437e1b\" (UID: \"63a79dcf-f262-4283-81b6-09a4de437e1b\") " Dec 10 13:03:07 crc kubenswrapper[4689]: I1210 13:03:07.982501 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/63a79dcf-f262-4283-81b6-09a4de437e1b-catalog-content\") pod \"63a79dcf-f262-4283-81b6-09a4de437e1b\" (UID: \"63a79dcf-f262-4283-81b6-09a4de437e1b\") " Dec 10 13:03:07 crc kubenswrapper[4689]: I1210 13:03:07.984303 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/63a79dcf-f262-4283-81b6-09a4de437e1b-utilities" (OuterVolumeSpecName: "utilities") pod "63a79dcf-f262-4283-81b6-09a4de437e1b" (UID: "63a79dcf-f262-4283-81b6-09a4de437e1b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 10 13:03:07 crc kubenswrapper[4689]: I1210 13:03:07.988887 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/63a79dcf-f262-4283-81b6-09a4de437e1b-kube-api-access-hzl4s" (OuterVolumeSpecName: "kube-api-access-hzl4s") pod "63a79dcf-f262-4283-81b6-09a4de437e1b" (UID: "63a79dcf-f262-4283-81b6-09a4de437e1b"). InnerVolumeSpecName "kube-api-access-hzl4s". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 13:03:08 crc kubenswrapper[4689]: I1210 13:03:08.035871 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/63a79dcf-f262-4283-81b6-09a4de437e1b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "63a79dcf-f262-4283-81b6-09a4de437e1b" (UID: "63a79dcf-f262-4283-81b6-09a4de437e1b"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 10 13:03:08 crc kubenswrapper[4689]: I1210 13:03:08.084323 4689 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hzl4s\" (UniqueName: \"kubernetes.io/projected/63a79dcf-f262-4283-81b6-09a4de437e1b-kube-api-access-hzl4s\") on node \"crc\" DevicePath \"\"" Dec 10 13:03:08 crc kubenswrapper[4689]: I1210 13:03:08.084354 4689 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/63a79dcf-f262-4283-81b6-09a4de437e1b-utilities\") on node \"crc\" DevicePath \"\"" Dec 10 13:03:08 crc kubenswrapper[4689]: I1210 13:03:08.084366 4689 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/63a79dcf-f262-4283-81b6-09a4de437e1b-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 10 13:03:08 crc kubenswrapper[4689]: I1210 13:03:08.365198 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mhm9t" event={"ID":"63a79dcf-f262-4283-81b6-09a4de437e1b","Type":"ContainerDied","Data":"233722f85f8a64e4e30e3d842d4bd0d2a395cb0c6c9b0dc56d58276fead12bc5"} Dec 10 13:03:08 crc kubenswrapper[4689]: I1210 13:03:08.365328 4689 scope.go:117] "RemoveContainer" containerID="69f0aa9607d29d459aeb304958f5d18d1c8b65e1df410cd82868ea446e3422e9" Dec 10 13:03:08 crc kubenswrapper[4689]: I1210 13:03:08.365336 4689 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-mhm9t" Dec 10 13:03:08 crc kubenswrapper[4689]: I1210 13:03:08.413209 4689 scope.go:117] "RemoveContainer" containerID="0921d1eba466d34a8208cb26ef1adbdd649d5fdc34a53e0e89fcb5f06e066f7d" Dec 10 13:03:08 crc kubenswrapper[4689]: I1210 13:03:08.427033 4689 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-mhm9t"] Dec 10 13:03:08 crc kubenswrapper[4689]: I1210 13:03:08.439398 4689 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-mhm9t"] Dec 10 13:03:08 crc kubenswrapper[4689]: I1210 13:03:08.482787 4689 scope.go:117] "RemoveContainer" containerID="69a500a38f22a93dd92fadccddb77e2c5d2e830617c7fff38fa6c0b4153e7b64" Dec 10 13:03:08 crc kubenswrapper[4689]: I1210 13:03:08.510394 4689 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="63a79dcf-f262-4283-81b6-09a4de437e1b" path="/var/lib/kubelet/pods/63a79dcf-f262-4283-81b6-09a4de437e1b/volumes" Dec 10 13:03:37 crc kubenswrapper[4689]: I1210 13:03:37.166899 4689 patch_prober.go:28] interesting pod/machine-config-daemon-db6zk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 10 13:03:37 crc kubenswrapper[4689]: I1210 13:03:37.167677 4689 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-db6zk" podUID="a41ebdcd-910f-4669-992d-296e1a92162b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 10 13:04:07 crc kubenswrapper[4689]: I1210 13:04:07.167187 4689 patch_prober.go:28] interesting pod/machine-config-daemon-db6zk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": 
dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 10 13:04:07 crc kubenswrapper[4689]: I1210 13:04:07.168025 4689 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-db6zk" podUID="a41ebdcd-910f-4669-992d-296e1a92162b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 10 13:04:07 crc kubenswrapper[4689]: I1210 13:04:07.168121 4689 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-db6zk" Dec 10 13:04:07 crc kubenswrapper[4689]: I1210 13:04:07.169068 4689 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"36a2f5a3033c6dd9c99d03a29c704126695dab93feeffdac91eeb5d7cacf8826"} pod="openshift-machine-config-operator/machine-config-daemon-db6zk" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 10 13:04:07 crc kubenswrapper[4689]: I1210 13:04:07.169151 4689 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-db6zk" podUID="a41ebdcd-910f-4669-992d-296e1a92162b" containerName="machine-config-daemon" containerID="cri-o://36a2f5a3033c6dd9c99d03a29c704126695dab93feeffdac91eeb5d7cacf8826" gracePeriod=600 Dec 10 13:04:07 crc kubenswrapper[4689]: E1210 13:04:07.295868 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-db6zk_openshift-machine-config-operator(a41ebdcd-910f-4669-992d-296e1a92162b)\"" pod="openshift-machine-config-operator/machine-config-daemon-db6zk" podUID="a41ebdcd-910f-4669-992d-296e1a92162b" Dec 10 13:04:07 crc kubenswrapper[4689]: I1210 13:04:07.963244 4689 generic.go:334] "Generic (PLEG): container finished" podID="a41ebdcd-910f-4669-992d-296e1a92162b" containerID="36a2f5a3033c6dd9c99d03a29c704126695dab93feeffdac91eeb5d7cacf8826" exitCode=0 Dec 10 13:04:07 crc kubenswrapper[4689]: I1210 13:04:07.963335 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-db6zk" event={"ID":"a41ebdcd-910f-4669-992d-296e1a92162b","Type":"ContainerDied","Data":"36a2f5a3033c6dd9c99d03a29c704126695dab93feeffdac91eeb5d7cacf8826"} Dec 10 13:04:07 crc kubenswrapper[4689]: I1210 13:04:07.963424 4689 scope.go:117] "RemoveContainer" containerID="601abc48853dbe73a4d877b8a15f4ad5bda2a51a5faa8c0ccd6d9050ac250293" Dec 10 13:04:07 crc kubenswrapper[4689]: I1210 13:04:07.964255 4689 scope.go:117] "RemoveContainer" containerID="36a2f5a3033c6dd9c99d03a29c704126695dab93feeffdac91eeb5d7cacf8826" Dec 10 13:04:07 crc kubenswrapper[4689]: E1210 13:04:07.964617 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-db6zk_openshift-machine-config-operator(a41ebdcd-910f-4669-992d-296e1a92162b)\"" pod="openshift-machine-config-operator/machine-config-daemon-db6zk" podUID="a41ebdcd-910f-4669-992d-296e1a92162b" Dec 10 13:04:21 crc kubenswrapper[4689]: I1210 13:04:21.498991 4689 scope.go:117] "RemoveContainer" 
containerID="36a2f5a3033c6dd9c99d03a29c704126695dab93feeffdac91eeb5d7cacf8826" Dec 10 13:04:21 crc kubenswrapper[4689]: E1210 13:04:21.499690 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-db6zk_openshift-machine-config-operator(a41ebdcd-910f-4669-992d-296e1a92162b)\"" pod="openshift-machine-config-operator/machine-config-daemon-db6zk" podUID="a41ebdcd-910f-4669-992d-296e1a92162b" Dec 10 13:04:35 crc kubenswrapper[4689]: I1210 13:04:35.498007 4689 scope.go:117] "RemoveContainer" containerID="36a2f5a3033c6dd9c99d03a29c704126695dab93feeffdac91eeb5d7cacf8826" Dec 10 13:04:35 crc kubenswrapper[4689]: E1210 13:04:35.498721 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-db6zk_openshift-machine-config-operator(a41ebdcd-910f-4669-992d-296e1a92162b)\"" pod="openshift-machine-config-operator/machine-config-daemon-db6zk" podUID="a41ebdcd-910f-4669-992d-296e1a92162b" Dec 10 13:04:50 crc kubenswrapper[4689]: I1210 13:04:50.498411 4689 scope.go:117] "RemoveContainer" containerID="36a2f5a3033c6dd9c99d03a29c704126695dab93feeffdac91eeb5d7cacf8826" Dec 10 13:04:50 crc kubenswrapper[4689]: E1210 13:04:50.499328 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-db6zk_openshift-machine-config-operator(a41ebdcd-910f-4669-992d-296e1a92162b)\"" pod="openshift-machine-config-operator/machine-config-daemon-db6zk" podUID="a41ebdcd-910f-4669-992d-296e1a92162b"